00:00:00.001 Started by upstream project "autotest-spdk-master-vs-dpdk-main" build number 4085 00:00:00.001 originally caused by: 00:00:00.001 Started by upstream project "nightly-trigger" build number 3675 00:00:00.001 originally caused by: 00:00:00.001 Started by timer 00:00:00.093 Checking out git https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool into /var/jenkins_home/workspace/nvmf-tcp-uring-vg-autotest_script/33b20b30f0a51e6b52980845e0f6aa336787973ad45e341fbbf98d1b65b265d4 to read jbp/jenkins/jjb-config/jobs/autotest-downstream/autotest-vg.groovy 00:00:00.094 The recommended git tool is: git 00:00:00.095 using credential 00000000-0000-0000-0000-000000000002 00:00:00.096 > git rev-parse --resolve-git-dir /var/jenkins_home/workspace/nvmf-tcp-uring-vg-autotest_script/33b20b30f0a51e6b52980845e0f6aa336787973ad45e341fbbf98d1b65b265d4/jbp/.git # timeout=10 00:00:00.150 Fetching changes from the remote Git repository 00:00:00.153 > git config remote.origin.url https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool # timeout=10 00:00:00.224 Using shallow fetch with depth 1 00:00:00.224 Fetching upstream changes from https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool 00:00:00.224 > git --version # timeout=10 00:00:00.274 > git --version # 'git version 2.39.2' 00:00:00.274 using GIT_ASKPASS to set credentials SPDKCI HTTPS Credentials 00:00:00.325 Setting http proxy: proxy-dmz.intel.com:911 00:00:00.325 > git fetch --tags --force --progress --depth=1 -- https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool refs/heads/master # timeout=5 00:00:06.022 > git rev-parse origin/FETCH_HEAD^{commit} # timeout=10 00:00:06.034 > git rev-parse FETCH_HEAD^{commit} # timeout=10 00:00:06.045 Checking out Revision db4637e8b949f278f369ec13f70585206ccd9507 (FETCH_HEAD) 00:00:06.045 > git config core.sparsecheckout # timeout=10 00:00:06.057 > git read-tree -mu HEAD # timeout=10 00:00:06.072 > git checkout -f db4637e8b949f278f369ec13f70585206ccd9507 # timeout=5 00:00:06.095 Commit message: "jenkins/jjb-config: Add missing SPDK_TEST_NVME_INTERRUPT flag" 00:00:06.095 > git rev-list --no-walk db4637e8b949f278f369ec13f70585206ccd9507 # timeout=10 00:00:06.168 [Pipeline] Start of Pipeline 00:00:06.182 [Pipeline] library 00:00:06.183 Loading library shm_lib@master 00:00:06.183 Library shm_lib@master is cached. Copying from home. 00:00:06.198 [Pipeline] node 00:00:06.210 Running on VM-host-SM17 in /var/jenkins/workspace/nvmf-tcp-uring-vg-autotest 00:00:06.212 [Pipeline] { 00:00:06.221 [Pipeline] catchError 00:00:06.222 [Pipeline] { 00:00:06.234 [Pipeline] wrap 00:00:06.244 [Pipeline] { 00:00:06.251 [Pipeline] stage 00:00:06.253 [Pipeline] { (Prologue) 00:00:06.274 [Pipeline] echo 00:00:06.276 Node: VM-host-SM17 00:00:06.284 [Pipeline] cleanWs 00:00:06.295 [WS-CLEANUP] Deleting project workspace... 00:00:06.295 [WS-CLEANUP] Deferred wipeout is used... 
00:00:06.302 [WS-CLEANUP] done 00:00:06.607 [Pipeline] setCustomBuildProperty 00:00:06.677 [Pipeline] httpRequest 00:00:07.258 [Pipeline] echo 00:00:07.260 Sorcerer 10.211.164.20 is alive 00:00:07.268 [Pipeline] retry 00:00:07.269 [Pipeline] { 00:00:07.282 [Pipeline] httpRequest 00:00:07.286 HttpMethod: GET 00:00:07.287 URL: http://10.211.164.20/packages/jbp_db4637e8b949f278f369ec13f70585206ccd9507.tar.gz 00:00:07.288 Sending request to url: http://10.211.164.20/packages/jbp_db4637e8b949f278f369ec13f70585206ccd9507.tar.gz 00:00:07.300 Response Code: HTTP/1.1 200 OK 00:00:07.301 Success: Status code 200 is in the accepted range: 200,404 00:00:07.302 Saving response body to /var/jenkins/workspace/nvmf-tcp-uring-vg-autotest/jbp_db4637e8b949f278f369ec13f70585206ccd9507.tar.gz 00:00:15.875 [Pipeline] } 00:00:15.894 [Pipeline] // retry 00:00:15.903 [Pipeline] sh 00:00:16.185 + tar --no-same-owner -xf jbp_db4637e8b949f278f369ec13f70585206ccd9507.tar.gz 00:00:16.209 [Pipeline] httpRequest 00:00:16.618 [Pipeline] echo 00:00:16.621 Sorcerer 10.211.164.20 is alive 00:00:16.632 [Pipeline] retry 00:00:16.635 [Pipeline] { 00:00:16.650 [Pipeline] httpRequest 00:00:16.686 HttpMethod: GET 00:00:16.687 URL: http://10.211.164.20/packages/spdk_35cd3e84d4a92eacc8c9de6c2cd81450ef5bcc54.tar.gz 00:00:16.687 Sending request to url: http://10.211.164.20/packages/spdk_35cd3e84d4a92eacc8c9de6c2cd81450ef5bcc54.tar.gz 00:00:16.711 Response Code: HTTP/1.1 200 OK 00:00:16.712 Success: Status code 200 is in the accepted range: 200,404 00:00:16.712 Saving response body to /var/jenkins/workspace/nvmf-tcp-uring-vg-autotest/spdk_35cd3e84d4a92eacc8c9de6c2cd81450ef5bcc54.tar.gz 00:02:05.736 [Pipeline] } 00:02:05.757 [Pipeline] // retry 00:02:05.767 [Pipeline] sh 00:02:06.052 + tar --no-same-owner -xf spdk_35cd3e84d4a92eacc8c9de6c2cd81450ef5bcc54.tar.gz 00:02:09.489 [Pipeline] sh 00:02:09.770 + git -C spdk log --oneline -n5 00:02:09.770 35cd3e84d bdev/part: Pass through dif_check_flags via dif_check_flags_exclude_mask 00:02:09.770 01a2c4855 bdev/passthru: Pass through dif_check_flags via dif_check_flags_exclude_mask 00:02:09.770 9094b9600 bdev: Assert to check if I/O pass dif_check_flags not enabled by bdev 00:02:09.770 2e10c84c8 nvmf: Expose DIF type of namespace to host again 00:02:09.770 38b931b23 nvmf: Set bdev_ext_io_opts::dif_check_flags_exclude_mask for read/write 00:02:09.790 [Pipeline] withCredentials 00:02:09.802 > git --version # timeout=10 00:02:09.817 > git --version # 'git version 2.39.2' 00:02:09.834 Masking supported pattern matches of $GIT_PASSWORD or $GIT_ASKPASS 00:02:09.837 [Pipeline] { 00:02:09.844 [Pipeline] retry 00:02:09.846 [Pipeline] { 00:02:09.860 [Pipeline] sh 00:02:10.142 + git ls-remote http://dpdk.org/git/dpdk main 00:02:10.721 [Pipeline] } 00:02:10.741 [Pipeline] // retry 00:02:10.748 [Pipeline] } 00:02:10.768 [Pipeline] // withCredentials 00:02:10.779 [Pipeline] httpRequest 00:02:11.230 [Pipeline] echo 00:02:11.232 Sorcerer 10.211.164.20 is alive 00:02:11.241 [Pipeline] retry 00:02:11.243 [Pipeline] { 00:02:11.256 [Pipeline] httpRequest 00:02:11.261 HttpMethod: GET 00:02:11.262 URL: http://10.211.164.20/packages/dpdk_4843aacb0d1201fef37e8a579fcd8baec4acdf98.tar.gz 00:02:11.262 Sending request to url: http://10.211.164.20/packages/dpdk_4843aacb0d1201fef37e8a579fcd8baec4acdf98.tar.gz 00:02:11.266 Response Code: HTTP/1.1 200 OK 00:02:11.266 Success: Status code 200 is in the accepted range: 200,404 00:02:11.267 Saving response body to 
/var/jenkins/workspace/nvmf-tcp-uring-vg-autotest/dpdk_4843aacb0d1201fef37e8a579fcd8baec4acdf98.tar.gz 00:02:15.277 [Pipeline] } 00:02:15.292 [Pipeline] // retry 00:02:15.298 [Pipeline] sh 00:02:15.574 + tar --no-same-owner -xf dpdk_4843aacb0d1201fef37e8a579fcd8baec4acdf98.tar.gz 00:02:17.489 [Pipeline] sh 00:02:17.770 + git -C dpdk log --oneline -n5 00:02:17.770 4843aacb0d doc: describe send scheduling counters in mlx5 guide 00:02:17.770 a4f455560f version: 24.11-rc4 00:02:17.770 0c81db5870 dts: remove leftover node methods 00:02:17.770 71eae7fe3e doc: correct definition of stats per queue feature 00:02:17.770 f2b1510f19 net/octeon_ep: replace use of word segregate 00:02:17.788 [Pipeline] writeFile 00:02:17.804 [Pipeline] sh 00:02:18.084 + jbp/jenkins/jjb-config/jobs/scripts/autorun_quirks.sh 00:02:18.096 [Pipeline] sh 00:02:18.375 + cat autorun-spdk.conf 00:02:18.375 SPDK_RUN_FUNCTIONAL_TEST=1 00:02:18.375 SPDK_TEST_NVMF=1 00:02:18.375 SPDK_TEST_NVMF_TRANSPORT=tcp 00:02:18.375 SPDK_TEST_URING=1 00:02:18.375 SPDK_TEST_USDT=1 00:02:18.375 SPDK_RUN_UBSAN=1 00:02:18.375 NET_TYPE=virt 00:02:18.375 SPDK_TEST_NATIVE_DPDK=main 00:02:18.375 SPDK_RUN_EXTERNAL_DPDK=/home/vagrant/spdk_repo/dpdk/build 00:02:18.375 SPDK_ABI_DIR=/home/vagrant/spdk_repo/spdk-abi 00:02:18.381 RUN_NIGHTLY=1 00:02:18.383 [Pipeline] } 00:02:18.396 [Pipeline] // stage 00:02:18.412 [Pipeline] stage 00:02:18.415 [Pipeline] { (Run VM) 00:02:18.429 [Pipeline] sh 00:02:18.710 + jbp/jenkins/jjb-config/jobs/scripts/prepare_nvme.sh 00:02:18.710 + echo 'Start stage prepare_nvme.sh' 00:02:18.710 Start stage prepare_nvme.sh 00:02:18.710 + [[ -n 6 ]] 00:02:18.710 + disk_prefix=ex6 00:02:18.710 + [[ -n /var/jenkins/workspace/nvmf-tcp-uring-vg-autotest ]] 00:02:18.710 + [[ -e /var/jenkins/workspace/nvmf-tcp-uring-vg-autotest/autorun-spdk.conf ]] 00:02:18.710 + source /var/jenkins/workspace/nvmf-tcp-uring-vg-autotest/autorun-spdk.conf 00:02:18.710 ++ SPDK_RUN_FUNCTIONAL_TEST=1 00:02:18.710 ++ SPDK_TEST_NVMF=1 00:02:18.710 ++ SPDK_TEST_NVMF_TRANSPORT=tcp 00:02:18.710 ++ SPDK_TEST_URING=1 00:02:18.710 ++ SPDK_TEST_USDT=1 00:02:18.710 ++ SPDK_RUN_UBSAN=1 00:02:18.710 ++ NET_TYPE=virt 00:02:18.710 ++ SPDK_TEST_NATIVE_DPDK=main 00:02:18.710 ++ SPDK_RUN_EXTERNAL_DPDK=/home/vagrant/spdk_repo/dpdk/build 00:02:18.710 ++ SPDK_ABI_DIR=/home/vagrant/spdk_repo/spdk-abi 00:02:18.710 ++ RUN_NIGHTLY=1 00:02:18.710 + cd /var/jenkins/workspace/nvmf-tcp-uring-vg-autotest 00:02:18.710 + nvme_files=() 00:02:18.710 + declare -A nvme_files 00:02:18.710 + backend_dir=/var/lib/libvirt/images/backends 00:02:18.710 + nvme_files['nvme.img']=5G 00:02:18.710 + nvme_files['nvme-cmb.img']=5G 00:02:18.710 + nvme_files['nvme-multi0.img']=4G 00:02:18.710 + nvme_files['nvme-multi1.img']=4G 00:02:18.710 + nvme_files['nvme-multi2.img']=4G 00:02:18.710 + nvme_files['nvme-openstack.img']=8G 00:02:18.710 + nvme_files['nvme-zns.img']=5G 00:02:18.710 + (( SPDK_TEST_NVME_PMR == 1 )) 00:02:18.710 + (( SPDK_TEST_FTL == 1 )) 00:02:18.710 + (( SPDK_TEST_NVME_FDP == 1 )) 00:02:18.710 + [[ ! 
-d /var/lib/libvirt/images/backends ]] 00:02:18.710 + for nvme in "${!nvme_files[@]}" 00:02:18.710 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex6-nvme-multi2.img -s 4G 00:02:18.710 Formatting '/var/lib/libvirt/images/backends/ex6-nvme-multi2.img', fmt=raw size=4294967296 preallocation=falloc 00:02:18.710 + for nvme in "${!nvme_files[@]}" 00:02:18.710 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex6-nvme-cmb.img -s 5G 00:02:18.710 Formatting '/var/lib/libvirt/images/backends/ex6-nvme-cmb.img', fmt=raw size=5368709120 preallocation=falloc 00:02:18.710 + for nvme in "${!nvme_files[@]}" 00:02:18.710 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex6-nvme-openstack.img -s 8G 00:02:18.710 Formatting '/var/lib/libvirt/images/backends/ex6-nvme-openstack.img', fmt=raw size=8589934592 preallocation=falloc 00:02:18.710 + for nvme in "${!nvme_files[@]}" 00:02:18.710 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex6-nvme-zns.img -s 5G 00:02:18.710 Formatting '/var/lib/libvirt/images/backends/ex6-nvme-zns.img', fmt=raw size=5368709120 preallocation=falloc 00:02:18.710 + for nvme in "${!nvme_files[@]}" 00:02:18.710 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex6-nvme-multi1.img -s 4G 00:02:18.710 Formatting '/var/lib/libvirt/images/backends/ex6-nvme-multi1.img', fmt=raw size=4294967296 preallocation=falloc 00:02:18.710 + for nvme in "${!nvme_files[@]}" 00:02:18.710 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex6-nvme-multi0.img -s 4G 00:02:18.710 Formatting '/var/lib/libvirt/images/backends/ex6-nvme-multi0.img', fmt=raw size=4294967296 preallocation=falloc 00:02:18.710 + for nvme in "${!nvme_files[@]}" 00:02:18.710 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex6-nvme.img -s 5G 00:02:18.968 Formatting '/var/lib/libvirt/images/backends/ex6-nvme.img', fmt=raw size=5368709120 preallocation=falloc 00:02:18.968 ++ sudo grep -rl ex6-nvme.img /etc/libvirt/qemu 00:02:18.968 + echo 'End stage prepare_nvme.sh' 00:02:18.968 End stage prepare_nvme.sh 00:02:18.980 [Pipeline] sh 00:02:19.262 + DISTRO=fedora39 CPUS=10 RAM=12288 jbp/jenkins/jjb-config/jobs/scripts/vagrant_create_vm.sh 00:02:19.262 Setup: -n 10 -s 12288 -x http://proxy-dmz.intel.com:911 -p libvirt --qemu-emulator=/usr/local/qemu/vanilla-v8.0.0/bin/qemu-system-x86_64 --nic-model=e1000 -b /var/lib/libvirt/images/backends/ex6-nvme.img -b /var/lib/libvirt/images/backends/ex6-nvme-multi0.img,nvme,/var/lib/libvirt/images/backends/ex6-nvme-multi1.img:/var/lib/libvirt/images/backends/ex6-nvme-multi2.img -H -a -v -f fedora39 00:02:19.262 00:02:19.262 DIR=/var/jenkins/workspace/nvmf-tcp-uring-vg-autotest/spdk/scripts/vagrant 00:02:19.262 SPDK_DIR=/var/jenkins/workspace/nvmf-tcp-uring-vg-autotest/spdk 00:02:19.262 VAGRANT_TARGET=/var/jenkins/workspace/nvmf-tcp-uring-vg-autotest 00:02:19.262 HELP=0 00:02:19.262 DRY_RUN=0 00:02:19.262 NVME_FILE=/var/lib/libvirt/images/backends/ex6-nvme.img,/var/lib/libvirt/images/backends/ex6-nvme-multi0.img, 00:02:19.262 NVME_DISKS_TYPE=nvme,nvme, 00:02:19.262 NVME_AUTO_CREATE=0 00:02:19.262 NVME_DISKS_NAMESPACES=,/var/lib/libvirt/images/backends/ex6-nvme-multi1.img:/var/lib/libvirt/images/backends/ex6-nvme-multi2.img, 00:02:19.262 NVME_CMB=,, 00:02:19.262 NVME_PMR=,, 00:02:19.262 NVME_ZNS=,, 00:02:19.262 NVME_MS=,, 00:02:19.262 NVME_FDP=,, 
00:02:19.262 SPDK_VAGRANT_DISTRO=fedora39 00:02:19.262 SPDK_VAGRANT_VMCPU=10 00:02:19.262 SPDK_VAGRANT_VMRAM=12288 00:02:19.262 SPDK_VAGRANT_PROVIDER=libvirt 00:02:19.262 SPDK_VAGRANT_HTTP_PROXY=http://proxy-dmz.intel.com:911 00:02:19.262 SPDK_QEMU_EMULATOR=/usr/local/qemu/vanilla-v8.0.0/bin/qemu-system-x86_64 00:02:19.262 SPDK_OPENSTACK_NETWORK=0 00:02:19.262 VAGRANT_PACKAGE_BOX=0 00:02:19.262 VAGRANTFILE=/var/jenkins/workspace/nvmf-tcp-uring-vg-autotest/spdk/scripts/vagrant/Vagrantfile 00:02:19.262 FORCE_DISTRO=true 00:02:19.262 VAGRANT_BOX_VERSION= 00:02:19.262 EXTRA_VAGRANTFILES= 00:02:19.262 NIC_MODEL=e1000 00:02:19.262 00:02:19.262 mkdir: created directory '/var/jenkins/workspace/nvmf-tcp-uring-vg-autotest/fedora39-libvirt' 00:02:19.262 /var/jenkins/workspace/nvmf-tcp-uring-vg-autotest/fedora39-libvirt /var/jenkins/workspace/nvmf-tcp-uring-vg-autotest 00:02:22.546 Bringing machine 'default' up with 'libvirt' provider... 00:02:23.113 ==> default: Creating image (snapshot of base box volume). 00:02:23.371 ==> default: Creating domain with the following settings... 00:02:23.371 ==> default: -- Name: fedora39-39-1.5-1721788873-2326_default_1732793572_df768d7c88c78541f4c8 00:02:23.371 ==> default: -- Domain type: kvm 00:02:23.371 ==> default: -- Cpus: 10 00:02:23.371 ==> default: -- Feature: acpi 00:02:23.371 ==> default: -- Feature: apic 00:02:23.371 ==> default: -- Feature: pae 00:02:23.371 ==> default: -- Memory: 12288M 00:02:23.371 ==> default: -- Memory Backing: hugepages: 00:02:23.371 ==> default: -- Management MAC: 00:02:23.371 ==> default: -- Loader: 00:02:23.371 ==> default: -- Nvram: 00:02:23.371 ==> default: -- Base box: spdk/fedora39 00:02:23.371 ==> default: -- Storage pool: default 00:02:23.371 ==> default: -- Image: /var/lib/libvirt/images/fedora39-39-1.5-1721788873-2326_default_1732793572_df768d7c88c78541f4c8.img (20G) 00:02:23.371 ==> default: -- Volume Cache: default 00:02:23.371 ==> default: -- Kernel: 00:02:23.371 ==> default: -- Initrd: 00:02:23.371 ==> default: -- Graphics Type: vnc 00:02:23.371 ==> default: -- Graphics Port: -1 00:02:23.371 ==> default: -- Graphics IP: 127.0.0.1 00:02:23.371 ==> default: -- Graphics Password: Not defined 00:02:23.371 ==> default: -- Video Type: cirrus 00:02:23.371 ==> default: -- Video VRAM: 9216 00:02:23.371 ==> default: -- Sound Type: 00:02:23.371 ==> default: -- Keymap: en-us 00:02:23.372 ==> default: -- TPM Path: 00:02:23.372 ==> default: -- INPUT: type=mouse, bus=ps2 00:02:23.372 ==> default: -- Command line args: 00:02:23.372 ==> default: -> value=-device, 00:02:23.372 ==> default: -> value=nvme,id=nvme-0,serial=12340,addr=0x10, 00:02:23.372 ==> default: -> value=-drive, 00:02:23.372 ==> default: -> value=format=raw,file=/var/lib/libvirt/images/backends/ex6-nvme.img,if=none,id=nvme-0-drive0, 00:02:23.372 ==> default: -> value=-device, 00:02:23.372 ==> default: -> value=nvme-ns,drive=nvme-0-drive0,bus=nvme-0,nsid=1,zoned=false,logical_block_size=4096,physical_block_size=4096, 00:02:23.372 ==> default: -> value=-device, 00:02:23.372 ==> default: -> value=nvme,id=nvme-1,serial=12341,addr=0x11, 00:02:23.372 ==> default: -> value=-drive, 00:02:23.372 ==> default: -> value=format=raw,file=/var/lib/libvirt/images/backends/ex6-nvme-multi0.img,if=none,id=nvme-1-drive0, 00:02:23.372 ==> default: -> value=-device, 00:02:23.372 ==> default: -> value=nvme-ns,drive=nvme-1-drive0,bus=nvme-1,nsid=1,zoned=false,logical_block_size=4096,physical_block_size=4096, 00:02:23.372 ==> default: -> value=-drive, 00:02:23.372 ==> default: -> 
value=format=raw,file=/var/lib/libvirt/images/backends/ex6-nvme-multi1.img,if=none,id=nvme-1-drive1, 00:02:23.372 ==> default: -> value=-device, 00:02:23.372 ==> default: -> value=nvme-ns,drive=nvme-1-drive1,bus=nvme-1,nsid=2,zoned=false,logical_block_size=4096,physical_block_size=4096, 00:02:23.372 ==> default: -> value=-drive, 00:02:23.372 ==> default: -> value=format=raw,file=/var/lib/libvirt/images/backends/ex6-nvme-multi2.img,if=none,id=nvme-1-drive2, 00:02:23.372 ==> default: -> value=-device, 00:02:23.372 ==> default: -> value=nvme-ns,drive=nvme-1-drive2,bus=nvme-1,nsid=3,zoned=false,logical_block_size=4096,physical_block_size=4096, 00:02:23.372 ==> default: Creating shared folders metadata... 00:02:23.372 ==> default: Starting domain. 00:02:24.750 ==> default: Waiting for domain to get an IP address... 00:02:39.706 ==> default: Waiting for SSH to become available... 00:02:41.082 ==> default: Configuring and enabling network interfaces... 00:02:45.274 default: SSH address: 192.168.121.117:22 00:02:45.274 default: SSH username: vagrant 00:02:45.274 default: SSH auth method: private key 00:02:47.178 ==> default: Rsyncing folder: /mnt/jenkins_nvme/jenkins/workspace/nvmf-tcp-uring-vg-autotest/spdk/ => /home/vagrant/spdk_repo/spdk 00:02:55.295 ==> default: Rsyncing folder: /mnt/jenkins_nvme/jenkins/workspace/nvmf-tcp-uring-vg-autotest/dpdk/ => /home/vagrant/spdk_repo/dpdk 00:02:59.484 ==> default: Mounting SSHFS shared folder... 00:03:01.388 ==> default: Mounting folder via SSHFS: /mnt/jenkins_nvme/jenkins/workspace/nvmf-tcp-uring-vg-autotest/fedora39-libvirt/output => /home/vagrant/spdk_repo/output 00:03:01.388 ==> default: Checking Mount.. 00:03:02.765 ==> default: Folder Successfully Mounted! 00:03:02.765 ==> default: Running provisioner: file... 00:03:03.328 default: ~/.gitconfig => .gitconfig 00:03:03.946 00:03:03.946 SUCCESS! 00:03:03.946 00:03:03.946 cd to /var/jenkins/workspace/nvmf-tcp-uring-vg-autotest/fedora39-libvirt and type "vagrant ssh" to use. 00:03:03.946 Use vagrant "suspend" and vagrant "resume" to stop and start. 00:03:03.946 Use vagrant "destroy" followed by "rm -rf /var/jenkins/workspace/nvmf-tcp-uring-vg-autotest/fedora39-libvirt" to destroy all trace of vm. 00:03:03.946 00:03:03.955 [Pipeline] } 00:03:03.975 [Pipeline] // stage 00:03:03.986 [Pipeline] dir 00:03:03.986 Running in /var/jenkins/workspace/nvmf-tcp-uring-vg-autotest/fedora39-libvirt 00:03:03.988 [Pipeline] { 00:03:04.004 [Pipeline] catchError 00:03:04.007 [Pipeline] { 00:03:04.020 [Pipeline] sh 00:03:04.298 + vagrant ssh-config --host vagrant 00:03:04.298 + sed -ne /^Host/,$p 00:03:04.298 + tee ssh_conf 00:03:08.493 Host vagrant 00:03:08.493 HostName 192.168.121.117 00:03:08.493 User vagrant 00:03:08.493 Port 22 00:03:08.493 UserKnownHostsFile /dev/null 00:03:08.493 StrictHostKeyChecking no 00:03:08.493 PasswordAuthentication no 00:03:08.493 IdentityFile /var/lib/libvirt/images/.vagrant.d/boxes/spdk-VAGRANTSLASH-fedora39/39-1.5-1721788873-2326/libvirt/fedora39 00:03:08.493 IdentitiesOnly yes 00:03:08.493 LogLevel FATAL 00:03:08.493 ForwardAgent yes 00:03:08.493 ForwardX11 yes 00:03:08.493 00:03:08.507 [Pipeline] withEnv 00:03:08.510 [Pipeline] { 00:03:08.526 [Pipeline] sh 00:03:08.805 + /usr/local/bin/ssh -t -F ssh_conf vagrant@vagrant #!/bin/bash 00:03:08.805 source /etc/os-release 00:03:08.805 [[ -e /image.version ]] && img=$(< /image.version) 00:03:08.805 # Minimal, systemd-like check. 
00:03:08.805 if [[ -e /.dockerenv ]]; then 00:03:08.805 # Clear garbage from the node's name: 00:03:08.805 # agt-er_autotest_547-896 -> autotest_547-896 00:03:08.805 # $HOSTNAME is the actual container id 00:03:08.805 agent=$HOSTNAME@${DOCKER_SWARM_PLUGIN_JENKINS_AGENT_NAME#*_} 00:03:08.805 if grep -q "/etc/hostname" /proc/self/mountinfo; then 00:03:08.805 # We can assume this is a mount from a host where container is running, 00:03:08.805 # so fetch its hostname to easily identify the target swarm worker. 00:03:08.805 container="$(< /etc/hostname) ($agent)" 00:03:08.805 else 00:03:08.805 # Fallback 00:03:08.805 container=$agent 00:03:08.805 fi 00:03:08.805 fi 00:03:08.805 echo "${NAME} ${VERSION_ID}|$(uname -r)|${img:-N/A}|${container:-N/A}" 00:03:08.805 00:03:09.072 [Pipeline] } 00:03:09.088 [Pipeline] // withEnv 00:03:09.096 [Pipeline] setCustomBuildProperty 00:03:09.110 [Pipeline] stage 00:03:09.113 [Pipeline] { (Tests) 00:03:09.130 [Pipeline] sh 00:03:09.450 + scp -F ssh_conf -r /var/jenkins/workspace/nvmf-tcp-uring-vg-autotest/jbp/jenkins/jjb-config/jobs/scripts/autoruner.sh vagrant@vagrant:./ 00:03:09.462 [Pipeline] sh 00:03:09.739 + scp -F ssh_conf -r /var/jenkins/workspace/nvmf-tcp-uring-vg-autotest/jbp/jenkins/jjb-config/jobs/scripts/pkgdep-autoruner.sh vagrant@vagrant:./ 00:03:09.751 [Pipeline] timeout 00:03:09.752 Timeout set to expire in 1 hr 0 min 00:03:09.754 [Pipeline] { 00:03:09.767 [Pipeline] sh 00:03:10.045 + /usr/local/bin/ssh -t -F ssh_conf vagrant@vagrant git -C spdk_repo/spdk reset --hard 00:03:10.611 HEAD is now at 35cd3e84d bdev/part: Pass through dif_check_flags via dif_check_flags_exclude_mask 00:03:10.624 [Pipeline] sh 00:03:10.903 + /usr/local/bin/ssh -t -F ssh_conf vagrant@vagrant sudo chown vagrant:vagrant spdk_repo 00:03:11.175 [Pipeline] sh 00:03:11.452 + scp -F ssh_conf -r /var/jenkins/workspace/nvmf-tcp-uring-vg-autotest/autorun-spdk.conf vagrant@vagrant:spdk_repo 00:03:11.727 [Pipeline] sh 00:03:12.007 + /usr/local/bin/ssh -t -F ssh_conf vagrant@vagrant JOB_BASE_NAME=nvmf-tcp-uring-vg-autotest ./autoruner.sh spdk_repo 00:03:12.266 ++ readlink -f spdk_repo 00:03:12.266 + DIR_ROOT=/home/vagrant/spdk_repo 00:03:12.266 + [[ -n /home/vagrant/spdk_repo ]] 00:03:12.266 + DIR_SPDK=/home/vagrant/spdk_repo/spdk 00:03:12.266 + DIR_OUTPUT=/home/vagrant/spdk_repo/output 00:03:12.266 + [[ -d /home/vagrant/spdk_repo/spdk ]] 00:03:12.266 + [[ ! 
-d /home/vagrant/spdk_repo/output ]] 00:03:12.266 + [[ -d /home/vagrant/spdk_repo/output ]] 00:03:12.266 + [[ nvmf-tcp-uring-vg-autotest == pkgdep-* ]] 00:03:12.266 + cd /home/vagrant/spdk_repo 00:03:12.266 + source /etc/os-release 00:03:12.266 ++ NAME='Fedora Linux' 00:03:12.266 ++ VERSION='39 (Cloud Edition)' 00:03:12.266 ++ ID=fedora 00:03:12.266 ++ VERSION_ID=39 00:03:12.266 ++ VERSION_CODENAME= 00:03:12.266 ++ PLATFORM_ID=platform:f39 00:03:12.266 ++ PRETTY_NAME='Fedora Linux 39 (Cloud Edition)' 00:03:12.266 ++ ANSI_COLOR='0;38;2;60;110;180' 00:03:12.266 ++ LOGO=fedora-logo-icon 00:03:12.266 ++ CPE_NAME=cpe:/o:fedoraproject:fedora:39 00:03:12.266 ++ HOME_URL=https://fedoraproject.org/ 00:03:12.266 ++ DOCUMENTATION_URL=https://docs.fedoraproject.org/en-US/fedora/f39/system-administrators-guide/ 00:03:12.266 ++ SUPPORT_URL=https://ask.fedoraproject.org/ 00:03:12.266 ++ BUG_REPORT_URL=https://bugzilla.redhat.com/ 00:03:12.266 ++ REDHAT_BUGZILLA_PRODUCT=Fedora 00:03:12.266 ++ REDHAT_BUGZILLA_PRODUCT_VERSION=39 00:03:12.266 ++ REDHAT_SUPPORT_PRODUCT=Fedora 00:03:12.266 ++ REDHAT_SUPPORT_PRODUCT_VERSION=39 00:03:12.266 ++ SUPPORT_END=2024-11-12 00:03:12.266 ++ VARIANT='Cloud Edition' 00:03:12.266 ++ VARIANT_ID=cloud 00:03:12.266 + uname -a 00:03:12.266 Linux fedora39-cloud-1721788873-2326 6.8.9-200.fc39.x86_64 #1 SMP PREEMPT_DYNAMIC Wed Jul 24 03:04:40 UTC 2024 x86_64 GNU/Linux 00:03:12.266 + sudo /home/vagrant/spdk_repo/spdk/scripts/setup.sh status 00:03:12.834 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:03:12.834 Hugepages 00:03:12.834 node hugesize free / total 00:03:12.834 node0 1048576kB 0 / 0 00:03:12.834 node0 2048kB 0 / 0 00:03:12.834 00:03:12.834 Type BDF Vendor Device NUMA Driver Device Block devices 00:03:12.834 virtio 0000:00:03.0 1af4 1001 unknown virtio-pci - vda 00:03:12.834 NVMe 0000:00:10.0 1b36 0010 unknown nvme nvme0 nvme0n1 00:03:12.834 NVMe 0000:00:11.0 1b36 0010 unknown nvme nvme1 nvme1n1 nvme1n2 nvme1n3 00:03:12.834 + rm -f /tmp/spdk-ld-path 00:03:12.834 + source autorun-spdk.conf 00:03:12.834 ++ SPDK_RUN_FUNCTIONAL_TEST=1 00:03:12.834 ++ SPDK_TEST_NVMF=1 00:03:12.834 ++ SPDK_TEST_NVMF_TRANSPORT=tcp 00:03:12.834 ++ SPDK_TEST_URING=1 00:03:12.834 ++ SPDK_TEST_USDT=1 00:03:12.834 ++ SPDK_RUN_UBSAN=1 00:03:12.834 ++ NET_TYPE=virt 00:03:12.834 ++ SPDK_TEST_NATIVE_DPDK=main 00:03:12.834 ++ SPDK_RUN_EXTERNAL_DPDK=/home/vagrant/spdk_repo/dpdk/build 00:03:12.834 ++ SPDK_ABI_DIR=/home/vagrant/spdk_repo/spdk-abi 00:03:12.834 ++ RUN_NIGHTLY=1 00:03:12.834 + (( SPDK_TEST_NVME_CMB == 1 || SPDK_TEST_NVME_PMR == 1 )) 00:03:12.834 + [[ -n '' ]] 00:03:12.834 + sudo git config --global --add safe.directory /home/vagrant/spdk_repo/spdk 00:03:12.834 + for M in /var/spdk/build-*-manifest.txt 00:03:12.834 + [[ -f /var/spdk/build-kernel-manifest.txt ]] 00:03:12.834 + cp /var/spdk/build-kernel-manifest.txt /home/vagrant/spdk_repo/output/ 00:03:12.834 + for M in /var/spdk/build-*-manifest.txt 00:03:12.834 + [[ -f /var/spdk/build-pkg-manifest.txt ]] 00:03:12.834 + cp /var/spdk/build-pkg-manifest.txt /home/vagrant/spdk_repo/output/ 00:03:12.834 + for M in /var/spdk/build-*-manifest.txt 00:03:12.834 + [[ -f /var/spdk/build-repo-manifest.txt ]] 00:03:12.834 + cp /var/spdk/build-repo-manifest.txt /home/vagrant/spdk_repo/output/ 00:03:12.834 ++ uname 00:03:12.834 + [[ Linux == \L\i\n\u\x ]] 00:03:12.834 + sudo dmesg -T 00:03:12.834 + sudo dmesg --clear 00:03:12.834 + dmesg_pid=5939 00:03:12.834 + [[ Fedora Linux == FreeBSD ]] 
00:03:12.834 + sudo dmesg -Tw 00:03:12.834 + export UNBIND_ENTIRE_IOMMU_GROUP=yes 00:03:12.834 + UNBIND_ENTIRE_IOMMU_GROUP=yes 00:03:12.834 + [[ -e /var/spdk/dependencies/vhost/spdk_test_image.qcow2 ]] 00:03:12.834 + [[ -x /usr/src/fio-static/fio ]] 00:03:12.834 + export FIO_BIN=/usr/src/fio-static/fio 00:03:12.834 + FIO_BIN=/usr/src/fio-static/fio 00:03:12.834 + [[ '' == \/\q\e\m\u\_\v\f\i\o\/* ]] 00:03:12.834 + [[ ! -v VFIO_QEMU_BIN ]] 00:03:12.834 + [[ -e /usr/local/qemu/vfio-user-latest ]] 00:03:12.834 + export VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64 00:03:12.834 + VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64 00:03:12.834 + [[ -e /usr/local/qemu/vanilla-latest ]] 00:03:12.834 + export QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64 00:03:12.834 + QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64 00:03:12.834 + spdk/autorun.sh /home/vagrant/spdk_repo/autorun-spdk.conf 00:03:12.834 11:33:42 -- common/autotest_common.sh@1692 -- $ [[ n == y ]] 00:03:12.834 11:33:42 -- spdk/autorun.sh@20 -- $ source /home/vagrant/spdk_repo/autorun-spdk.conf 00:03:12.834 11:33:42 -- spdk_repo/autorun-spdk.conf@1 -- $ SPDK_RUN_FUNCTIONAL_TEST=1 00:03:12.834 11:33:42 -- spdk_repo/autorun-spdk.conf@2 -- $ SPDK_TEST_NVMF=1 00:03:12.834 11:33:42 -- spdk_repo/autorun-spdk.conf@3 -- $ SPDK_TEST_NVMF_TRANSPORT=tcp 00:03:12.834 11:33:42 -- spdk_repo/autorun-spdk.conf@4 -- $ SPDK_TEST_URING=1 00:03:12.834 11:33:42 -- spdk_repo/autorun-spdk.conf@5 -- $ SPDK_TEST_USDT=1 00:03:12.834 11:33:42 -- spdk_repo/autorun-spdk.conf@6 -- $ SPDK_RUN_UBSAN=1 00:03:12.834 11:33:42 -- spdk_repo/autorun-spdk.conf@7 -- $ NET_TYPE=virt 00:03:13.094 11:33:42 -- spdk_repo/autorun-spdk.conf@8 -- $ SPDK_TEST_NATIVE_DPDK=main 00:03:13.094 11:33:42 -- spdk_repo/autorun-spdk.conf@9 -- $ SPDK_RUN_EXTERNAL_DPDK=/home/vagrant/spdk_repo/dpdk/build 00:03:13.094 11:33:42 -- spdk_repo/autorun-spdk.conf@10 -- $ SPDK_ABI_DIR=/home/vagrant/spdk_repo/spdk-abi 00:03:13.094 11:33:42 -- spdk_repo/autorun-spdk.conf@11 -- $ RUN_NIGHTLY=1 00:03:13.094 11:33:42 -- spdk/autorun.sh@22 -- $ trap 'timing_finish || exit 1' EXIT 00:03:13.094 11:33:42 -- spdk/autorun.sh@25 -- $ /home/vagrant/spdk_repo/spdk/autobuild.sh /home/vagrant/spdk_repo/autorun-spdk.conf 00:03:13.094 11:33:43 -- common/autotest_common.sh@1692 -- $ [[ n == y ]] 00:03:13.094 11:33:43 -- common/autobuild_common.sh@15 -- $ source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:03:13.094 11:33:43 -- scripts/common.sh@15 -- $ shopt -s extglob 00:03:13.094 11:33:43 -- scripts/common.sh@544 -- $ [[ -e /bin/wpdk_common.sh ]] 00:03:13.094 11:33:43 -- scripts/common.sh@552 -- $ [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:03:13.094 11:33:43 -- scripts/common.sh@553 -- $ source /etc/opt/spdk-pkgdep/paths/export.sh 00:03:13.094 11:33:43 -- paths/export.sh@2 -- $ PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/vagrant/.local/bin:/home/vagrant/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:03:13.094 11:33:43 -- paths/export.sh@3 -- $ 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/vagrant/.local/bin:/home/vagrant/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:03:13.094 11:33:43 -- paths/export.sh@4 -- $ PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/vagrant/.local/bin:/home/vagrant/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:03:13.094 11:33:43 -- paths/export.sh@5 -- $ export PATH 00:03:13.094 11:33:43 -- paths/export.sh@6 -- $ echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/vagrant/.local/bin:/home/vagrant/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:03:13.094 11:33:43 -- common/autobuild_common.sh@492 -- $ out=/home/vagrant/spdk_repo/spdk/../output 00:03:13.094 11:33:43 -- common/autobuild_common.sh@493 -- $ date +%s 00:03:13.094 11:33:43 -- common/autobuild_common.sh@493 -- $ mktemp -dt spdk_1732793623.XXXXXX 00:03:13.094 11:33:43 -- common/autobuild_common.sh@493 -- $ SPDK_WORKSPACE=/tmp/spdk_1732793623.MOnPlZ 00:03:13.094 11:33:43 -- common/autobuild_common.sh@495 -- $ [[ -n '' ]] 00:03:13.094 11:33:43 -- common/autobuild_common.sh@499 -- $ '[' -n main ']' 00:03:13.094 11:33:43 -- common/autobuild_common.sh@500 -- $ dirname /home/vagrant/spdk_repo/dpdk/build 00:03:13.094 11:33:43 -- common/autobuild_common.sh@500 -- $ scanbuild_exclude=' --exclude /home/vagrant/spdk_repo/dpdk' 00:03:13.094 11:33:43 -- common/autobuild_common.sh@506 -- $ scanbuild_exclude+=' --exclude /home/vagrant/spdk_repo/spdk/xnvme --exclude /tmp' 00:03:13.095 11:33:43 -- common/autobuild_common.sh@508 -- $ scanbuild='scan-build -o /home/vagrant/spdk_repo/spdk/../output/scan-build-tmp --exclude /home/vagrant/spdk_repo/dpdk --exclude /home/vagrant/spdk_repo/spdk/xnvme --exclude /tmp --status-bugs' 00:03:13.095 11:33:43 -- common/autobuild_common.sh@509 -- $ get_config_params 00:03:13.095 11:33:43 -- common/autotest_common.sh@409 -- $ xtrace_disable 00:03:13.095 11:33:43 -- common/autotest_common.sh@10 -- $ set +x 00:03:13.095 11:33:43 -- common/autobuild_common.sh@509 -- $ config_params='--enable-debug --enable-werror --with-rdma --with-usdt --with-idxd --with-fio=/usr/src/fio --with-iscsi-initiator --disable-unit-tests --enable-ubsan --enable-coverage --with-ublk --with-uring --with-dpdk=/home/vagrant/spdk_repo/dpdk/build' 00:03:13.095 11:33:43 -- common/autobuild_common.sh@511 -- $ start_monitor_resources 00:03:13.095 11:33:43 -- pm/common@17 -- $ local monitor 00:03:13.095 11:33:43 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:03:13.095 11:33:43 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:03:13.095 11:33:43 -- pm/common@25 -- $ sleep 1 00:03:13.095 
11:33:43 -- pm/common@21 -- $ date +%s 00:03:13.095 11:33:43 -- pm/common@21 -- $ date +%s 00:03:13.095 11:33:43 -- pm/common@21 -- $ /home/vagrant/spdk_repo/spdk/scripts/perf/pm/collect-cpu-load -d /home/vagrant/spdk_repo/spdk/../output/power -l -p monitor.autobuild.sh.1732793623 00:03:13.095 11:33:43 -- pm/common@21 -- $ /home/vagrant/spdk_repo/spdk/scripts/perf/pm/collect-vmstat -d /home/vagrant/spdk_repo/spdk/../output/power -l -p monitor.autobuild.sh.1732793623 00:03:13.095 Redirecting to /home/vagrant/spdk_repo/spdk/../output/power/monitor.autobuild.sh.1732793623_collect-vmstat.pm.log 00:03:13.095 Redirecting to /home/vagrant/spdk_repo/spdk/../output/power/monitor.autobuild.sh.1732793623_collect-cpu-load.pm.log 00:03:14.032 11:33:44 -- common/autobuild_common.sh@512 -- $ trap stop_monitor_resources EXIT 00:03:14.032 11:33:44 -- spdk/autobuild.sh@11 -- $ SPDK_TEST_AUTOBUILD= 00:03:14.032 11:33:44 -- spdk/autobuild.sh@12 -- $ umask 022 00:03:14.032 11:33:44 -- spdk/autobuild.sh@13 -- $ cd /home/vagrant/spdk_repo/spdk 00:03:14.032 11:33:44 -- spdk/autobuild.sh@16 -- $ date -u 00:03:14.032 Thu Nov 28 11:33:44 AM UTC 2024 00:03:14.032 11:33:44 -- spdk/autobuild.sh@17 -- $ git describe --tags 00:03:14.032 v25.01-pre-276-g35cd3e84d 00:03:14.032 11:33:44 -- spdk/autobuild.sh@19 -- $ '[' 0 -eq 1 ']' 00:03:14.032 11:33:44 -- spdk/autobuild.sh@23 -- $ '[' 1 -eq 1 ']' 00:03:14.032 11:33:44 -- spdk/autobuild.sh@24 -- $ run_test ubsan echo 'using ubsan' 00:03:14.032 11:33:44 -- common/autotest_common.sh@1105 -- $ '[' 3 -le 1 ']' 00:03:14.032 11:33:44 -- common/autotest_common.sh@1111 -- $ xtrace_disable 00:03:14.032 11:33:44 -- common/autotest_common.sh@10 -- $ set +x 00:03:14.032 ************************************ 00:03:14.032 START TEST ubsan 00:03:14.032 ************************************ 00:03:14.032 using ubsan 00:03:14.032 11:33:44 ubsan -- common/autotest_common.sh@1129 -- $ echo 'using ubsan' 00:03:14.032 00:03:14.032 real 0m0.000s 00:03:14.032 user 0m0.000s 00:03:14.032 sys 0m0.000s 00:03:14.032 11:33:44 ubsan -- common/autotest_common.sh@1130 -- $ xtrace_disable 00:03:14.032 ************************************ 00:03:14.032 END TEST ubsan 00:03:14.032 11:33:44 ubsan -- common/autotest_common.sh@10 -- $ set +x 00:03:14.032 ************************************ 00:03:14.032 11:33:44 -- spdk/autobuild.sh@27 -- $ '[' -n main ']' 00:03:14.032 11:33:44 -- spdk/autobuild.sh@28 -- $ build_native_dpdk 00:03:14.032 11:33:44 -- common/autobuild_common.sh@449 -- $ run_test build_native_dpdk _build_native_dpdk 00:03:14.032 11:33:44 -- common/autotest_common.sh@1105 -- $ '[' 2 -le 1 ']' 00:03:14.032 11:33:44 -- common/autotest_common.sh@1111 -- $ xtrace_disable 00:03:14.032 11:33:44 -- common/autotest_common.sh@10 -- $ set +x 00:03:14.032 ************************************ 00:03:14.032 START TEST build_native_dpdk 00:03:14.032 ************************************ 00:03:14.032 11:33:44 build_native_dpdk -- common/autotest_common.sh@1129 -- $ _build_native_dpdk 00:03:14.032 11:33:44 build_native_dpdk -- common/autobuild_common.sh@48 -- $ local external_dpdk_dir 00:03:14.032 11:33:44 build_native_dpdk -- common/autobuild_common.sh@49 -- $ local external_dpdk_base_dir 00:03:14.032 11:33:44 build_native_dpdk -- common/autobuild_common.sh@50 -- $ local compiler_version 00:03:14.032 11:33:44 build_native_dpdk -- common/autobuild_common.sh@51 -- $ local compiler 00:03:14.032 11:33:44 build_native_dpdk -- common/autobuild_common.sh@52 -- $ local dpdk_kmods 00:03:14.033 11:33:44 build_native_dpdk -- 
common/autobuild_common.sh@53 -- $ local repo=dpdk 00:03:14.033 11:33:44 build_native_dpdk -- common/autobuild_common.sh@55 -- $ compiler=gcc 00:03:14.033 11:33:44 build_native_dpdk -- common/autobuild_common.sh@61 -- $ export CC=gcc 00:03:14.033 11:33:44 build_native_dpdk -- common/autobuild_common.sh@61 -- $ CC=gcc 00:03:14.033 11:33:44 build_native_dpdk -- common/autobuild_common.sh@63 -- $ [[ gcc != *clang* ]] 00:03:14.033 11:33:44 build_native_dpdk -- common/autobuild_common.sh@63 -- $ [[ gcc != *gcc* ]] 00:03:14.033 11:33:44 build_native_dpdk -- common/autobuild_common.sh@68 -- $ gcc -dumpversion 00:03:14.292 11:33:44 build_native_dpdk -- common/autobuild_common.sh@68 -- $ compiler_version=13 00:03:14.292 11:33:44 build_native_dpdk -- common/autobuild_common.sh@69 -- $ compiler_version=13 00:03:14.292 11:33:44 build_native_dpdk -- common/autobuild_common.sh@70 -- $ external_dpdk_dir=/home/vagrant/spdk_repo/dpdk/build 00:03:14.292 11:33:44 build_native_dpdk -- common/autobuild_common.sh@71 -- $ dirname /home/vagrant/spdk_repo/dpdk/build 00:03:14.292 11:33:44 build_native_dpdk -- common/autobuild_common.sh@71 -- $ external_dpdk_base_dir=/home/vagrant/spdk_repo/dpdk 00:03:14.292 11:33:44 build_native_dpdk -- common/autobuild_common.sh@73 -- $ [[ ! -d /home/vagrant/spdk_repo/dpdk ]] 00:03:14.292 11:33:44 build_native_dpdk -- common/autobuild_common.sh@82 -- $ orgdir=/home/vagrant/spdk_repo/spdk 00:03:14.292 11:33:44 build_native_dpdk -- common/autobuild_common.sh@83 -- $ git -C /home/vagrant/spdk_repo/dpdk log --oneline -n 5 00:03:14.292 4843aacb0d doc: describe send scheduling counters in mlx5 guide 00:03:14.292 a4f455560f version: 24.11-rc4 00:03:14.292 0c81db5870 dts: remove leftover node methods 00:03:14.292 71eae7fe3e doc: correct definition of stats per queue feature 00:03:14.292 f2b1510f19 net/octeon_ep: replace use of word segregate 00:03:14.292 11:33:44 build_native_dpdk -- common/autobuild_common.sh@85 -- $ dpdk_cflags='-fPIC -g -fcommon' 00:03:14.292 11:33:44 build_native_dpdk -- common/autobuild_common.sh@86 -- $ dpdk_ldflags= 00:03:14.292 11:33:44 build_native_dpdk -- common/autobuild_common.sh@87 -- $ dpdk_ver=24.11.0-rc4 00:03:14.292 11:33:44 build_native_dpdk -- common/autobuild_common.sh@89 -- $ [[ gcc == *gcc* ]] 00:03:14.292 11:33:44 build_native_dpdk -- common/autobuild_common.sh@89 -- $ [[ 13 -ge 5 ]] 00:03:14.292 11:33:44 build_native_dpdk -- common/autobuild_common.sh@90 -- $ dpdk_cflags+=' -Werror' 00:03:14.292 11:33:44 build_native_dpdk -- common/autobuild_common.sh@93 -- $ [[ gcc == *gcc* ]] 00:03:14.292 11:33:44 build_native_dpdk -- common/autobuild_common.sh@93 -- $ [[ 13 -ge 10 ]] 00:03:14.292 11:33:44 build_native_dpdk -- common/autobuild_common.sh@94 -- $ dpdk_cflags+=' -Wno-stringop-overflow' 00:03:14.292 11:33:44 build_native_dpdk -- common/autobuild_common.sh@102 -- $ DPDK_DRIVERS=("bus" "bus/pci" "bus/vdev" "mempool/ring" "net/i40e" "net/i40e/base" "power/acpi" "power/amd_pstate" "power/cppc" "power/intel_pstate" "power/intel_uncore" "power/kvm_vm") 00:03:14.292 11:33:44 build_native_dpdk -- common/autobuild_common.sh@103 -- $ local mlx5_libs_added=n 00:03:14.292 11:33:44 build_native_dpdk -- common/autobuild_common.sh@104 -- $ [[ 0 -eq 1 ]] 00:03:14.292 11:33:44 build_native_dpdk -- common/autobuild_common.sh@104 -- $ [[ 0 -eq 1 ]] 00:03:14.292 11:33:44 build_native_dpdk -- common/autobuild_common.sh@146 -- $ [[ 0 -eq 1 ]] 00:03:14.292 11:33:44 build_native_dpdk -- common/autobuild_common.sh@174 -- $ cd /home/vagrant/spdk_repo/dpdk 00:03:14.292 
11:33:44 build_native_dpdk -- common/autobuild_common.sh@175 -- $ uname -s 00:03:14.292 11:33:44 build_native_dpdk -- common/autobuild_common.sh@175 -- $ '[' Linux = Linux ']' 00:03:14.292 11:33:44 build_native_dpdk -- common/autobuild_common.sh@176 -- $ lt 24.11.0-rc4 21.11.0 00:03:14.292 11:33:44 build_native_dpdk -- scripts/common.sh@373 -- $ cmp_versions 24.11.0-rc4 '<' 21.11.0 00:03:14.292 11:33:44 build_native_dpdk -- scripts/common.sh@333 -- $ local ver1 ver1_l 00:03:14.292 11:33:44 build_native_dpdk -- scripts/common.sh@334 -- $ local ver2 ver2_l 00:03:14.292 11:33:44 build_native_dpdk -- scripts/common.sh@336 -- $ IFS=.-: 00:03:14.292 11:33:44 build_native_dpdk -- scripts/common.sh@336 -- $ read -ra ver1 00:03:14.292 11:33:44 build_native_dpdk -- scripts/common.sh@337 -- $ IFS=.-: 00:03:14.292 11:33:44 build_native_dpdk -- scripts/common.sh@337 -- $ read -ra ver2 00:03:14.292 11:33:44 build_native_dpdk -- scripts/common.sh@338 -- $ local 'op=<' 00:03:14.292 11:33:44 build_native_dpdk -- scripts/common.sh@340 -- $ ver1_l=4 00:03:14.292 11:33:44 build_native_dpdk -- scripts/common.sh@341 -- $ ver2_l=3 00:03:14.292 11:33:44 build_native_dpdk -- scripts/common.sh@343 -- $ local lt=0 gt=0 eq=0 v 00:03:14.292 11:33:44 build_native_dpdk -- scripts/common.sh@344 -- $ case "$op" in 00:03:14.292 11:33:44 build_native_dpdk -- scripts/common.sh@345 -- $ : 1 00:03:14.292 11:33:44 build_native_dpdk -- scripts/common.sh@364 -- $ (( v = 0 )) 00:03:14.292 11:33:44 build_native_dpdk -- scripts/common.sh@364 -- $ (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:03:14.293 11:33:44 build_native_dpdk -- scripts/common.sh@365 -- $ decimal 24 00:03:14.293 11:33:44 build_native_dpdk -- scripts/common.sh@353 -- $ local d=24 00:03:14.293 11:33:44 build_native_dpdk -- scripts/common.sh@354 -- $ [[ 24 =~ ^[0-9]+$ ]] 00:03:14.293 11:33:44 build_native_dpdk -- scripts/common.sh@355 -- $ echo 24 00:03:14.293 11:33:44 build_native_dpdk -- scripts/common.sh@365 -- $ ver1[v]=24 00:03:14.293 11:33:44 build_native_dpdk -- scripts/common.sh@366 -- $ decimal 21 00:03:14.293 11:33:44 build_native_dpdk -- scripts/common.sh@353 -- $ local d=21 00:03:14.293 11:33:44 build_native_dpdk -- scripts/common.sh@354 -- $ [[ 21 =~ ^[0-9]+$ ]] 00:03:14.293 11:33:44 build_native_dpdk -- scripts/common.sh@355 -- $ echo 21 00:03:14.293 11:33:44 build_native_dpdk -- scripts/common.sh@366 -- $ ver2[v]=21 00:03:14.293 11:33:44 build_native_dpdk -- scripts/common.sh@367 -- $ (( ver1[v] > ver2[v] )) 00:03:14.293 11:33:44 build_native_dpdk -- scripts/common.sh@367 -- $ return 1 00:03:14.293 11:33:44 build_native_dpdk -- common/autobuild_common.sh@180 -- $ patch -p1 00:03:14.293 patching file config/rte_config.h 00:03:14.293 Hunk #1 succeeded at 72 (offset 13 lines). 
00:03:14.293 11:33:44 build_native_dpdk -- common/autobuild_common.sh@183 -- $ lt 24.11.0-rc4 24.07.0 00:03:14.293 11:33:44 build_native_dpdk -- scripts/common.sh@373 -- $ cmp_versions 24.11.0-rc4 '<' 24.07.0 00:03:14.293 11:33:44 build_native_dpdk -- scripts/common.sh@333 -- $ local ver1 ver1_l 00:03:14.293 11:33:44 build_native_dpdk -- scripts/common.sh@334 -- $ local ver2 ver2_l 00:03:14.293 11:33:44 build_native_dpdk -- scripts/common.sh@336 -- $ IFS=.-: 00:03:14.293 11:33:44 build_native_dpdk -- scripts/common.sh@336 -- $ read -ra ver1 00:03:14.293 11:33:44 build_native_dpdk -- scripts/common.sh@337 -- $ IFS=.-: 00:03:14.293 11:33:44 build_native_dpdk -- scripts/common.sh@337 -- $ read -ra ver2 00:03:14.293 11:33:44 build_native_dpdk -- scripts/common.sh@338 -- $ local 'op=<' 00:03:14.293 11:33:44 build_native_dpdk -- scripts/common.sh@340 -- $ ver1_l=4 00:03:14.293 11:33:44 build_native_dpdk -- scripts/common.sh@341 -- $ ver2_l=3 00:03:14.293 11:33:44 build_native_dpdk -- scripts/common.sh@343 -- $ local lt=0 gt=0 eq=0 v 00:03:14.293 11:33:44 build_native_dpdk -- scripts/common.sh@344 -- $ case "$op" in 00:03:14.293 11:33:44 build_native_dpdk -- scripts/common.sh@345 -- $ : 1 00:03:14.293 11:33:44 build_native_dpdk -- scripts/common.sh@364 -- $ (( v = 0 )) 00:03:14.293 11:33:44 build_native_dpdk -- scripts/common.sh@364 -- $ (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:03:14.293 11:33:44 build_native_dpdk -- scripts/common.sh@365 -- $ decimal 24 00:03:14.293 11:33:44 build_native_dpdk -- scripts/common.sh@353 -- $ local d=24 00:03:14.293 11:33:44 build_native_dpdk -- scripts/common.sh@354 -- $ [[ 24 =~ ^[0-9]+$ ]] 00:03:14.293 11:33:44 build_native_dpdk -- scripts/common.sh@355 -- $ echo 24 00:03:14.293 11:33:44 build_native_dpdk -- scripts/common.sh@365 -- $ ver1[v]=24 00:03:14.293 11:33:44 build_native_dpdk -- scripts/common.sh@366 -- $ decimal 24 00:03:14.293 11:33:44 build_native_dpdk -- scripts/common.sh@353 -- $ local d=24 00:03:14.293 11:33:44 build_native_dpdk -- scripts/common.sh@354 -- $ [[ 24 =~ ^[0-9]+$ ]] 00:03:14.293 11:33:44 build_native_dpdk -- scripts/common.sh@355 -- $ echo 24 00:03:14.293 11:33:44 build_native_dpdk -- scripts/common.sh@366 -- $ ver2[v]=24 00:03:14.293 11:33:44 build_native_dpdk -- scripts/common.sh@367 -- $ (( ver1[v] > ver2[v] )) 00:03:14.293 11:33:44 build_native_dpdk -- scripts/common.sh@368 -- $ (( ver1[v] < ver2[v] )) 00:03:14.293 11:33:44 build_native_dpdk -- scripts/common.sh@364 -- $ (( v++ )) 00:03:14.293 11:33:44 build_native_dpdk -- scripts/common.sh@364 -- $ (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:03:14.293 11:33:44 build_native_dpdk -- scripts/common.sh@365 -- $ decimal 11 00:03:14.293 11:33:44 build_native_dpdk -- scripts/common.sh@353 -- $ local d=11 00:03:14.293 11:33:44 build_native_dpdk -- scripts/common.sh@354 -- $ [[ 11 =~ ^[0-9]+$ ]] 00:03:14.293 11:33:44 build_native_dpdk -- scripts/common.sh@355 -- $ echo 11 00:03:14.293 11:33:44 build_native_dpdk -- scripts/common.sh@365 -- $ ver1[v]=11 00:03:14.293 11:33:44 build_native_dpdk -- scripts/common.sh@366 -- $ decimal 07 00:03:14.293 11:33:44 build_native_dpdk -- scripts/common.sh@353 -- $ local d=07 00:03:14.293 11:33:44 build_native_dpdk -- scripts/common.sh@354 -- $ [[ 07 =~ ^[0-9]+$ ]] 00:03:14.293 11:33:44 build_native_dpdk -- scripts/common.sh@355 -- $ echo 7 00:03:14.293 11:33:44 build_native_dpdk -- scripts/common.sh@366 -- $ ver2[v]=7 00:03:14.293 11:33:44 build_native_dpdk -- scripts/common.sh@367 -- $ (( ver1[v] > ver2[v] )) 00:03:14.293 11:33:44 build_native_dpdk -- scripts/common.sh@367 -- $ return 1 00:03:14.293 11:33:44 build_native_dpdk -- common/autobuild_common.sh@186 -- $ ge 24.11.0-rc4 24.07.0 00:03:14.293 11:33:44 build_native_dpdk -- scripts/common.sh@376 -- $ cmp_versions 24.11.0-rc4 '>=' 24.07.0 00:03:14.293 11:33:44 build_native_dpdk -- scripts/common.sh@333 -- $ local ver1 ver1_l 00:03:14.293 11:33:44 build_native_dpdk -- scripts/common.sh@334 -- $ local ver2 ver2_l 00:03:14.293 11:33:44 build_native_dpdk -- scripts/common.sh@336 -- $ IFS=.-: 00:03:14.293 11:33:44 build_native_dpdk -- scripts/common.sh@336 -- $ read -ra ver1 00:03:14.293 11:33:44 build_native_dpdk -- scripts/common.sh@337 -- $ IFS=.-: 00:03:14.293 11:33:44 build_native_dpdk -- scripts/common.sh@337 -- $ read -ra ver2 00:03:14.293 11:33:44 build_native_dpdk -- scripts/common.sh@338 -- $ local 'op=>=' 00:03:14.293 11:33:44 build_native_dpdk -- scripts/common.sh@340 -- $ ver1_l=4 00:03:14.293 11:33:44 build_native_dpdk -- scripts/common.sh@341 -- $ ver2_l=3 00:03:14.293 11:33:44 build_native_dpdk -- scripts/common.sh@343 -- $ local lt=0 gt=0 eq=0 v 00:03:14.293 11:33:44 build_native_dpdk -- scripts/common.sh@344 -- $ case "$op" in 00:03:14.293 11:33:44 build_native_dpdk -- scripts/common.sh@348 -- $ : 1 00:03:14.293 11:33:44 build_native_dpdk -- scripts/common.sh@364 -- $ (( v = 0 )) 00:03:14.293 11:33:44 build_native_dpdk -- scripts/common.sh@364 -- $ (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:03:14.293 11:33:44 build_native_dpdk -- scripts/common.sh@365 -- $ decimal 24 00:03:14.293 11:33:44 build_native_dpdk -- scripts/common.sh@353 -- $ local d=24 00:03:14.293 11:33:44 build_native_dpdk -- scripts/common.sh@354 -- $ [[ 24 =~ ^[0-9]+$ ]] 00:03:14.293 11:33:44 build_native_dpdk -- scripts/common.sh@355 -- $ echo 24 00:03:14.293 11:33:44 build_native_dpdk -- scripts/common.sh@365 -- $ ver1[v]=24 00:03:14.293 11:33:44 build_native_dpdk -- scripts/common.sh@366 -- $ decimal 24 00:03:14.293 11:33:44 build_native_dpdk -- scripts/common.sh@353 -- $ local d=24 00:03:14.293 11:33:44 build_native_dpdk -- scripts/common.sh@354 -- $ [[ 24 =~ ^[0-9]+$ ]] 00:03:14.293 11:33:44 build_native_dpdk -- scripts/common.sh@355 -- $ echo 24 00:03:14.293 11:33:44 build_native_dpdk -- scripts/common.sh@366 -- $ ver2[v]=24 00:03:14.293 11:33:44 build_native_dpdk -- scripts/common.sh@367 -- $ (( ver1[v] > ver2[v] )) 00:03:14.293 11:33:44 build_native_dpdk -- scripts/common.sh@368 -- $ (( ver1[v] < ver2[v] )) 00:03:14.293 11:33:44 build_native_dpdk -- scripts/common.sh@364 -- $ (( v++ )) 00:03:14.293 11:33:44 build_native_dpdk -- scripts/common.sh@364 -- $ (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:03:14.293 11:33:44 build_native_dpdk -- scripts/common.sh@365 -- $ decimal 11 00:03:14.293 11:33:44 build_native_dpdk -- scripts/common.sh@353 -- $ local d=11 00:03:14.293 11:33:44 build_native_dpdk -- scripts/common.sh@354 -- $ [[ 11 =~ ^[0-9]+$ ]] 00:03:14.293 11:33:44 build_native_dpdk -- scripts/common.sh@355 -- $ echo 11 00:03:14.293 11:33:44 build_native_dpdk -- scripts/common.sh@365 -- $ ver1[v]=11 00:03:14.293 11:33:44 build_native_dpdk -- scripts/common.sh@366 -- $ decimal 07 00:03:14.293 11:33:44 build_native_dpdk -- scripts/common.sh@353 -- $ local d=07 00:03:14.293 11:33:44 build_native_dpdk -- scripts/common.sh@354 -- $ [[ 07 =~ ^[0-9]+$ ]] 00:03:14.293 11:33:44 build_native_dpdk -- scripts/common.sh@355 -- $ echo 7 00:03:14.293 11:33:44 build_native_dpdk -- scripts/common.sh@366 -- $ ver2[v]=7 00:03:14.293 11:33:44 build_native_dpdk -- scripts/common.sh@367 -- $ (( ver1[v] > ver2[v] )) 00:03:14.293 11:33:44 build_native_dpdk -- scripts/common.sh@367 -- $ return 0 00:03:14.293 11:33:44 build_native_dpdk -- common/autobuild_common.sh@187 -- $ patch -p1 00:03:14.293 patching file drivers/bus/pci/linux/pci_uio.c 00:03:14.293 11:33:44 build_native_dpdk -- common/autobuild_common.sh@190 -- $ dpdk_kmods=false 00:03:14.293 11:33:44 build_native_dpdk -- common/autobuild_common.sh@191 -- $ uname -s 00:03:14.293 11:33:44 build_native_dpdk -- common/autobuild_common.sh@191 -- $ '[' Linux = FreeBSD ']' 00:03:14.293 11:33:44 build_native_dpdk -- common/autobuild_common.sh@195 -- $ printf %s, bus bus/pci bus/vdev mempool/ring net/i40e net/i40e/base power/acpi power/amd_pstate power/cppc power/intel_pstate power/intel_uncore power/kvm_vm 00:03:14.293 11:33:44 build_native_dpdk -- common/autobuild_common.sh@195 -- $ meson build-tmp --prefix=/home/vagrant/spdk_repo/dpdk/build --libdir lib -Denable_docs=false -Denable_kmods=false -Dtests=false -Dc_link_args= '-Dc_args=-fPIC -g -fcommon -Werror -Wno-stringop-overflow' -Dmachine=native -Denable_drivers=bus,bus/pci,bus/vdev,mempool/ring,net/i40e,net/i40e/base,power/acpi,power/amd_pstate,power/cppc,power/intel_pstate,power/intel_uncore,power/kvm_vm, 00:03:20.863 The Meson build system 00:03:20.863 Version: 1.5.0 00:03:20.863 Source dir: /home/vagrant/spdk_repo/dpdk 00:03:20.863 Build dir: /home/vagrant/spdk_repo/dpdk/build-tmp 00:03:20.863 
Build type: native build 00:03:20.863 Project name: DPDK 00:03:20.863 Project version: 24.11.0-rc4 00:03:20.863 C compiler for the host machine: gcc (gcc 13.3.1 "gcc (GCC) 13.3.1 20240522 (Red Hat 13.3.1-1)") 00:03:20.863 C linker for the host machine: gcc ld.bfd 2.40-14 00:03:20.863 Host machine cpu family: x86_64 00:03:20.863 Host machine cpu: x86_64 00:03:20.863 Message: ## Building in Developer Mode ## 00:03:20.863 Program pkg-config found: YES (/usr/bin/pkg-config) 00:03:20.863 Program check-symbols.sh found: YES (/home/vagrant/spdk_repo/dpdk/buildtools/check-symbols.sh) 00:03:20.863 Program options-ibverbs-static.sh found: YES (/home/vagrant/spdk_repo/dpdk/buildtools/options-ibverbs-static.sh) 00:03:20.863 Program python3 (elftools) found: YES (/usr/bin/python3) modules: elftools 00:03:20.863 Program cat found: YES (/usr/bin/cat) 00:03:20.863 config/meson.build:122: WARNING: The "machine" option is deprecated. Please use "cpu_instruction_set" instead. 00:03:20.863 Compiler for C supports arguments -march=native: YES 00:03:20.863 Checking for size of "void *" : 8 00:03:20.863 Checking for size of "void *" : 8 (cached) 00:03:20.863 Compiler for C supports link arguments -Wl,--undefined-version: YES 00:03:20.863 Library m found: YES 00:03:20.863 Library numa found: YES 00:03:20.863 Has header "numaif.h" : YES 00:03:20.863 Library fdt found: NO 00:03:20.863 Library execinfo found: NO 00:03:20.863 Has header "execinfo.h" : YES 00:03:20.863 Found pkg-config: YES (/usr/bin/pkg-config) 1.9.5 00:03:20.863 Run-time dependency libarchive found: NO (tried pkgconfig) 00:03:20.863 Run-time dependency libbsd found: NO (tried pkgconfig) 00:03:20.863 Run-time dependency jansson found: NO (tried pkgconfig) 00:03:20.863 Run-time dependency openssl found: YES 3.1.1 00:03:20.863 Run-time dependency libpcap found: YES 1.10.4 00:03:20.863 Has header "pcap.h" with dependency libpcap: YES 00:03:20.863 Compiler for C supports arguments -Wcast-qual: YES 00:03:20.863 Compiler for C supports arguments -Wdeprecated: YES 00:03:20.863 Compiler for C supports arguments -Wformat: YES 00:03:20.863 Compiler for C supports arguments -Wformat-nonliteral: NO 00:03:20.863 Compiler for C supports arguments -Wformat-security: NO 00:03:20.863 Compiler for C supports arguments -Wmissing-declarations: YES 00:03:20.863 Compiler for C supports arguments -Wmissing-prototypes: YES 00:03:20.863 Compiler for C supports arguments -Wnested-externs: YES 00:03:20.863 Compiler for C supports arguments -Wold-style-definition: YES 00:03:20.863 Compiler for C supports arguments -Wpointer-arith: YES 00:03:20.863 Compiler for C supports arguments -Wsign-compare: YES 00:03:20.863 Compiler for C supports arguments -Wstrict-prototypes: YES 00:03:20.863 Compiler for C supports arguments -Wundef: YES 00:03:20.863 Compiler for C supports arguments -Wwrite-strings: YES 00:03:20.863 Compiler for C supports arguments -Wno-packed-not-aligned: YES 00:03:20.863 Compiler for C supports arguments -Wno-missing-field-initializers: YES 00:03:20.863 Program objdump found: YES (/usr/bin/objdump) 00:03:20.863 Compiler for C supports arguments -mavx512f -mavx512vl -mavx512dq -mavx512bw: YES 00:03:20.863 Checking if "AVX512 checking" compiles: YES 00:03:20.863 Fetching value of define "__AVX512F__" : (undefined) 00:03:20.863 Fetching value of define "__SSE4_2__" : 1 00:03:20.863 Fetching value of define "__AES__" : 1 00:03:20.863 Fetching value of define "__AVX__" : 1 00:03:20.863 Fetching value of define "__AVX2__" : 1 00:03:20.863 Fetching value of define 
"__AVX512BW__" : (undefined) 00:03:20.863 Fetching value of define "__AVX512CD__" : (undefined) 00:03:20.863 Fetching value of define "__AVX512DQ__" : (undefined) 00:03:20.863 Fetching value of define "__AVX512F__" : (undefined) 00:03:20.863 Fetching value of define "__AVX512VL__" : (undefined) 00:03:20.863 Fetching value of define "__PCLMUL__" : 1 00:03:20.863 Fetching value of define "__RDRND__" : 1 00:03:20.863 Fetching value of define "__RDSEED__" : 1 00:03:20.863 Fetching value of define "__VPCLMULQDQ__" : (undefined) 00:03:20.863 Compiler for C supports arguments -Wno-format-truncation: YES 00:03:20.863 Message: lib/log: Defining dependency "log" 00:03:20.863 Message: lib/kvargs: Defining dependency "kvargs" 00:03:20.863 Message: lib/argparse: Defining dependency "argparse" 00:03:20.863 Message: lib/telemetry: Defining dependency "telemetry" 00:03:20.863 Checking for function "pthread_attr_setaffinity_np" : YES 00:03:20.863 Checking for function "getentropy" : NO 00:03:20.863 Message: lib/eal: Defining dependency "eal" 00:03:20.863 Message: lib/ptr_compress: Defining dependency "ptr_compress" 00:03:20.863 Message: lib/ring: Defining dependency "ring" 00:03:20.863 Message: lib/rcu: Defining dependency "rcu" 00:03:20.863 Message: lib/mempool: Defining dependency "mempool" 00:03:20.863 Message: lib/mbuf: Defining dependency "mbuf" 00:03:20.863 Fetching value of define "__PCLMUL__" : 1 (cached) 00:03:20.863 Compiler for C supports arguments -mpclmul: YES 00:03:20.863 Compiler for C supports arguments -maes: YES 00:03:20.863 Compiler for C supports arguments -mvpclmulqdq: YES 00:03:20.863 Message: lib/net: Defining dependency "net" 00:03:20.863 Message: lib/meter: Defining dependency "meter" 00:03:20.863 Message: lib/ethdev: Defining dependency "ethdev" 00:03:20.863 Message: lib/pci: Defining dependency "pci" 00:03:20.863 Message: lib/cmdline: Defining dependency "cmdline" 00:03:20.863 Message: lib/metrics: Defining dependency "metrics" 00:03:20.863 Message: lib/hash: Defining dependency "hash" 00:03:20.863 Message: lib/timer: Defining dependency "timer" 00:03:20.863 Fetching value of define "__AVX512F__" : (undefined) (cached) 00:03:20.863 Fetching value of define "__AVX512VL__" : (undefined) (cached) 00:03:20.863 Fetching value of define "__AVX512CD__" : (undefined) (cached) 00:03:20.863 Fetching value of define "__AVX512BW__" : (undefined) (cached) 00:03:20.863 Compiler for C supports arguments -mavx512f -mavx512vl -mavx512cd -mavx512bw: YES 00:03:20.863 Message: lib/acl: Defining dependency "acl" 00:03:20.863 Message: lib/bbdev: Defining dependency "bbdev" 00:03:20.863 Message: lib/bitratestats: Defining dependency "bitratestats" 00:03:20.863 Run-time dependency libelf found: YES 0.191 00:03:20.863 Message: lib/bpf: Defining dependency "bpf" 00:03:20.863 Message: lib/cfgfile: Defining dependency "cfgfile" 00:03:20.863 Message: lib/compressdev: Defining dependency "compressdev" 00:03:20.863 Message: lib/cryptodev: Defining dependency "cryptodev" 00:03:20.863 Message: lib/distributor: Defining dependency "distributor" 00:03:20.863 Message: lib/dmadev: Defining dependency "dmadev" 00:03:20.864 Message: lib/efd: Defining dependency "efd" 00:03:20.864 Message: lib/eventdev: Defining dependency "eventdev" 00:03:20.864 Message: lib/dispatcher: Defining dependency "dispatcher" 00:03:20.864 Message: lib/gpudev: Defining dependency "gpudev" 00:03:20.864 Message: lib/gro: Defining dependency "gro" 00:03:20.864 Message: lib/gso: Defining dependency "gso" 00:03:20.864 Message: lib/ip_frag: 
Defining dependency "ip_frag" 00:03:20.864 Message: lib/jobstats: Defining dependency "jobstats" 00:03:20.864 Message: lib/latencystats: Defining dependency "latencystats" 00:03:20.864 Message: lib/lpm: Defining dependency "lpm" 00:03:20.864 Fetching value of define "__AVX512F__" : (undefined) (cached) 00:03:20.864 Fetching value of define "__AVX512DQ__" : (undefined) (cached) 00:03:20.864 Fetching value of define "__AVX512IFMA__" : (undefined) 00:03:20.864 Compiler for C supports arguments -mavx512f -mavx512dq -mavx512ifma: YES 00:03:20.864 Message: lib/member: Defining dependency "member" 00:03:20.864 Message: lib/pcapng: Defining dependency "pcapng" 00:03:20.864 Message: lib/power: Defining dependency "power" 00:03:20.864 Message: lib/rawdev: Defining dependency "rawdev" 00:03:20.864 Message: lib/regexdev: Defining dependency "regexdev" 00:03:20.864 Message: lib/mldev: Defining dependency "mldev" 00:03:20.864 Message: lib/rib: Defining dependency "rib" 00:03:20.864 Message: lib/reorder: Defining dependency "reorder" 00:03:20.864 Message: lib/sched: Defining dependency "sched" 00:03:20.864 Message: lib/security: Defining dependency "security" 00:03:20.864 Message: lib/stack: Defining dependency "stack" 00:03:20.864 Has header "linux/userfaultfd.h" : YES 00:03:20.864 Has header "linux/vduse.h" : YES 00:03:20.864 Message: lib/vhost: Defining dependency "vhost" 00:03:20.864 Message: lib/ipsec: Defining dependency "ipsec" 00:03:20.864 Message: lib/pdcp: Defining dependency "pdcp" 00:03:20.864 Message: lib/fib: Defining dependency "fib" 00:03:20.864 Message: lib/port: Defining dependency "port" 00:03:20.864 Message: lib/pdump: Defining dependency "pdump" 00:03:20.864 Message: lib/table: Defining dependency "table" 00:03:20.864 Message: lib/pipeline: Defining dependency "pipeline" 00:03:20.864 Message: lib/graph: Defining dependency "graph" 00:03:20.864 Message: lib/node: Defining dependency "node" 00:03:20.864 Compiler for C supports arguments -Wno-format-truncation: YES (cached) 00:03:20.864 Compiler for C supports arguments -Wno-address-of-packed-member: YES 00:03:20.864 Message: drivers/bus/pci: Defining dependency "bus_pci" 00:03:20.864 Message: drivers/bus/vdev: Defining dependency "bus_vdev" 00:03:20.864 Message: drivers/mempool/ring: Defining dependency "mempool_ring" 00:03:20.864 Compiler for C supports arguments -Wno-sign-compare: YES 00:03:20.864 Compiler for C supports arguments -Wno-unused-value: YES 00:03:20.864 Compiler for C supports arguments -Wno-strict-aliasing: YES 00:03:20.864 Compiler for C supports arguments -Wno-unused-but-set-variable: YES 00:03:20.864 Compiler for C supports arguments -Wno-unused-parameter: YES 00:03:20.864 Compiler for C supports arguments -march=skylake-avx512: YES 00:03:20.864 Message: drivers/net/i40e: Defining dependency "net_i40e" 00:03:20.864 Message: drivers/power/acpi: Defining dependency "power_acpi" 00:03:20.864 Message: drivers/power/amd_pstate: Defining dependency "power_amd_pstate" 00:03:20.864 Message: drivers/power/cppc: Defining dependency "power_cppc" 00:03:20.864 Message: drivers/power/intel_pstate: Defining dependency "power_intel_pstate" 00:03:20.864 Message: drivers/power/intel_uncore: Defining dependency "power_intel_uncore" 00:03:20.864 Message: drivers/power/kvm_vm: Defining dependency "power_kvm_vm" 00:03:20.864 Has header "sys/epoll.h" : YES 00:03:20.864 Program doxygen found: YES (/usr/local/bin/doxygen) 00:03:20.864 Configuring doxy-api-html.conf using configuration 00:03:20.864 Configuring doxy-api-man.conf using 
configuration 00:03:20.864 Program mandb found: YES (/usr/bin/mandb) 00:03:20.864 Program sphinx-build found: NO 00:03:20.864 Program sphinx-build found: NO 00:03:20.864 Configuring rte_build_config.h using configuration 00:03:20.864 Message: 00:03:20.864 ================= 00:03:20.864 Applications Enabled 00:03:20.864 ================= 00:03:20.864 00:03:20.864 apps: 00:03:20.864 dumpcap, graph, pdump, proc-info, test-acl, test-bbdev, test-cmdline, test-compress-perf, 00:03:20.864 test-crypto-perf, test-dma-perf, test-eventdev, test-fib, test-flow-perf, test-gpudev, test-mldev, test-pipeline, 00:03:20.864 test-pmd, test-regex, test-sad, test-security-perf, 00:03:20.864 00:03:20.864 Message: 00:03:20.864 ================= 00:03:20.864 Libraries Enabled 00:03:20.864 ================= 00:03:20.864 00:03:20.864 libs: 00:03:20.864 log, kvargs, argparse, telemetry, eal, ptr_compress, ring, rcu, 00:03:20.864 mempool, mbuf, net, meter, ethdev, pci, cmdline, metrics, 00:03:20.864 hash, timer, acl, bbdev, bitratestats, bpf, cfgfile, compressdev, 00:03:20.864 cryptodev, distributor, dmadev, efd, eventdev, dispatcher, gpudev, gro, 00:03:20.864 gso, ip_frag, jobstats, latencystats, lpm, member, pcapng, power, 00:03:20.864 rawdev, regexdev, mldev, rib, reorder, sched, security, stack, 00:03:20.864 vhost, ipsec, pdcp, fib, port, pdump, table, pipeline, 00:03:20.864 graph, node, 00:03:20.864 00:03:20.864 Message: 00:03:20.864 =============== 00:03:20.864 Drivers Enabled 00:03:20.864 =============== 00:03:20.864 00:03:20.864 common: 00:03:20.864 00:03:20.864 bus: 00:03:20.864 pci, vdev, 00:03:20.864 mempool: 00:03:20.864 ring, 00:03:20.864 dma: 00:03:20.864 00:03:20.864 net: 00:03:20.864 i40e, 00:03:20.864 raw: 00:03:20.864 00:03:20.864 crypto: 00:03:20.864 00:03:20.864 compress: 00:03:20.864 00:03:20.864 regex: 00:03:20.864 00:03:20.864 ml: 00:03:20.864 00:03:20.864 vdpa: 00:03:20.864 00:03:20.864 event: 00:03:20.864 00:03:20.864 baseband: 00:03:20.864 00:03:20.864 gpu: 00:03:20.864 00:03:20.864 power: 00:03:20.864 acpi, amd_pstate, cppc, intel_pstate, intel_uncore, kvm_vm, 00:03:20.864 00:03:20.864 Message: 00:03:20.864 ================= 00:03:20.864 Content Skipped 00:03:20.864 ================= 00:03:20.864 00:03:20.864 apps: 00:03:20.864 00:03:20.864 libs: 00:03:20.864 00:03:20.864 drivers: 00:03:20.864 common/cpt: not in enabled drivers build config 00:03:20.864 common/dpaax: not in enabled drivers build config 00:03:20.864 common/iavf: not in enabled drivers build config 00:03:20.864 common/idpf: not in enabled drivers build config 00:03:20.864 common/ionic: not in enabled drivers build config 00:03:20.864 common/mvep: not in enabled drivers build config 00:03:20.864 common/octeontx: not in enabled drivers build config 00:03:20.864 bus/auxiliary: not in enabled drivers build config 00:03:20.864 bus/cdx: not in enabled drivers build config 00:03:20.864 bus/dpaa: not in enabled drivers build config 00:03:20.864 bus/fslmc: not in enabled drivers build config 00:03:20.864 bus/ifpga: not in enabled drivers build config 00:03:20.864 bus/platform: not in enabled drivers build config 00:03:20.864 bus/uacce: not in enabled drivers build config 00:03:20.864 bus/vmbus: not in enabled drivers build config 00:03:20.864 common/cnxk: not in enabled drivers build config 00:03:20.864 common/mlx5: not in enabled drivers build config 00:03:20.864 common/nfp: not in enabled drivers build config 00:03:20.864 common/nitrox: not in enabled drivers build config 00:03:20.864 common/qat: not in enabled drivers build config 
00:03:20.864 common/sfc_efx: not in enabled drivers build config 00:03:20.864 mempool/bucket: not in enabled drivers build config 00:03:20.864 mempool/cnxk: not in enabled drivers build config 00:03:20.864 mempool/dpaa: not in enabled drivers build config 00:03:20.864 mempool/dpaa2: not in enabled drivers build config 00:03:20.864 mempool/octeontx: not in enabled drivers build config 00:03:20.864 mempool/stack: not in enabled drivers build config 00:03:20.864 dma/cnxk: not in enabled drivers build config 00:03:20.864 dma/dpaa: not in enabled drivers build config 00:03:20.864 dma/dpaa2: not in enabled drivers build config 00:03:20.864 dma/hisilicon: not in enabled drivers build config 00:03:20.864 dma/idxd: not in enabled drivers build config 00:03:20.864 dma/ioat: not in enabled drivers build config 00:03:20.864 dma/odm: not in enabled drivers build config 00:03:20.864 dma/skeleton: not in enabled drivers build config 00:03:20.864 net/af_packet: not in enabled drivers build config 00:03:20.864 net/af_xdp: not in enabled drivers build config 00:03:20.864 net/ark: not in enabled drivers build config 00:03:20.864 net/atlantic: not in enabled drivers build config 00:03:20.864 net/avp: not in enabled drivers build config 00:03:20.864 net/axgbe: not in enabled drivers build config 00:03:20.864 net/bnx2x: not in enabled drivers build config 00:03:20.864 net/bnxt: not in enabled drivers build config 00:03:20.864 net/bonding: not in enabled drivers build config 00:03:20.864 net/cnxk: not in enabled drivers build config 00:03:20.864 net/cpfl: not in enabled drivers build config 00:03:20.864 net/cxgbe: not in enabled drivers build config 00:03:20.864 net/dpaa: not in enabled drivers build config 00:03:20.864 net/dpaa2: not in enabled drivers build config 00:03:20.864 net/e1000: not in enabled drivers build config 00:03:20.864 net/ena: not in enabled drivers build config 00:03:20.864 net/enetc: not in enabled drivers build config 00:03:20.864 net/enetfec: not in enabled drivers build config 00:03:20.864 net/enic: not in enabled drivers build config 00:03:20.864 net/failsafe: not in enabled drivers build config 00:03:20.864 net/fm10k: not in enabled drivers build config 00:03:20.864 net/gve: not in enabled drivers build config 00:03:20.864 net/hinic: not in enabled drivers build config 00:03:20.864 net/hns3: not in enabled drivers build config 00:03:20.864 net/iavf: not in enabled drivers build config 00:03:20.864 net/ice: not in enabled drivers build config 00:03:20.864 net/idpf: not in enabled drivers build config 00:03:20.864 net/igc: not in enabled drivers build config 00:03:20.864 net/ionic: not in enabled drivers build config 00:03:20.864 net/ipn3ke: not in enabled drivers build config 00:03:20.864 net/ixgbe: not in enabled drivers build config 00:03:20.864 net/mana: not in enabled drivers build config 00:03:20.864 net/memif: not in enabled drivers build config 00:03:20.864 net/mlx4: not in enabled drivers build config 00:03:20.864 net/mlx5: not in enabled drivers build config 00:03:20.864 net/mvneta: not in enabled drivers build config 00:03:20.864 net/mvpp2: not in enabled drivers build config 00:03:20.864 net/netvsc: not in enabled drivers build config 00:03:20.864 net/nfb: not in enabled drivers build config 00:03:20.864 net/nfp: not in enabled drivers build config 00:03:20.864 net/ngbe: not in enabled drivers build config 00:03:20.864 net/ntnic: not in enabled drivers build config 00:03:20.864 net/null: not in enabled drivers build config 00:03:20.864 net/octeontx: not in enabled drivers 
build config 00:03:20.864 net/octeon_ep: not in enabled drivers build config 00:03:20.864 net/pcap: not in enabled drivers build config 00:03:20.864 net/pfe: not in enabled drivers build config 00:03:20.864 net/qede: not in enabled drivers build config 00:03:20.864 net/r8169: not in enabled drivers build config 00:03:20.864 net/ring: not in enabled drivers build config 00:03:20.864 net/sfc: not in enabled drivers build config 00:03:20.864 net/softnic: not in enabled drivers build config 00:03:20.865 net/tap: not in enabled drivers build config 00:03:20.865 net/thunderx: not in enabled drivers build config 00:03:20.865 net/txgbe: not in enabled drivers build config 00:03:20.865 net/vdev_netvsc: not in enabled drivers build config 00:03:20.865 net/vhost: not in enabled drivers build config 00:03:20.865 net/virtio: not in enabled drivers build config 00:03:20.865 net/vmxnet3: not in enabled drivers build config 00:03:20.865 net/zxdh: not in enabled drivers build config 00:03:20.865 raw/cnxk_bphy: not in enabled drivers build config 00:03:20.865 raw/cnxk_gpio: not in enabled drivers build config 00:03:20.865 raw/cnxk_rvu_lf: not in enabled drivers build config 00:03:20.865 raw/dpaa2_cmdif: not in enabled drivers build config 00:03:20.865 raw/gdtc: not in enabled drivers build config 00:03:20.865 raw/ifpga: not in enabled drivers build config 00:03:20.865 raw/ntb: not in enabled drivers build config 00:03:20.865 raw/skeleton: not in enabled drivers build config 00:03:20.865 crypto/armv8: not in enabled drivers build config 00:03:20.865 crypto/bcmfs: not in enabled drivers build config 00:03:20.865 crypto/caam_jr: not in enabled drivers build config 00:03:20.865 crypto/ccp: not in enabled drivers build config 00:03:20.865 crypto/cnxk: not in enabled drivers build config 00:03:20.865 crypto/dpaa_sec: not in enabled drivers build config 00:03:20.865 crypto/dpaa2_sec: not in enabled drivers build config 00:03:20.865 crypto/ionic: not in enabled drivers build config 00:03:20.865 crypto/ipsec_mb: not in enabled drivers build config 00:03:20.865 crypto/mlx5: not in enabled drivers build config 00:03:20.865 crypto/mvsam: not in enabled drivers build config 00:03:20.865 crypto/nitrox: not in enabled drivers build config 00:03:20.865 crypto/null: not in enabled drivers build config 00:03:20.865 crypto/octeontx: not in enabled drivers build config 00:03:20.865 crypto/openssl: not in enabled drivers build config 00:03:20.865 crypto/scheduler: not in enabled drivers build config 00:03:20.865 crypto/uadk: not in enabled drivers build config 00:03:20.865 crypto/virtio: not in enabled drivers build config 00:03:20.865 compress/isal: not in enabled drivers build config 00:03:20.865 compress/mlx5: not in enabled drivers build config 00:03:20.865 compress/nitrox: not in enabled drivers build config 00:03:20.865 compress/octeontx: not in enabled drivers build config 00:03:20.865 compress/uadk: not in enabled drivers build config 00:03:20.865 compress/zlib: not in enabled drivers build config 00:03:20.865 regex/mlx5: not in enabled drivers build config 00:03:20.865 regex/cn9k: not in enabled drivers build config 00:03:20.865 ml/cnxk: not in enabled drivers build config 00:03:20.865 vdpa/ifc: not in enabled drivers build config 00:03:20.865 vdpa/mlx5: not in enabled drivers build config 00:03:20.865 vdpa/nfp: not in enabled drivers build config 00:03:20.865 vdpa/sfc: not in enabled drivers build config 00:03:20.865 event/cnxk: not in enabled drivers build config 00:03:20.865 event/dlb2: not in enabled drivers build 
config 00:03:20.865 event/dpaa: not in enabled drivers build config 00:03:20.865 event/dpaa2: not in enabled drivers build config 00:03:20.865 event/dsw: not in enabled drivers build config 00:03:20.865 event/opdl: not in enabled drivers build config 00:03:20.865 event/skeleton: not in enabled drivers build config 00:03:20.865 event/sw: not in enabled drivers build config 00:03:20.865 event/octeontx: not in enabled drivers build config 00:03:20.865 baseband/acc: not in enabled drivers build config 00:03:20.865 baseband/fpga_5gnr_fec: not in enabled drivers build config 00:03:20.865 baseband/fpga_lte_fec: not in enabled drivers build config 00:03:20.865 baseband/la12xx: not in enabled drivers build config 00:03:20.865 baseband/null: not in enabled drivers build config 00:03:20.865 baseband/turbo_sw: not in enabled drivers build config 00:03:20.865 gpu/cuda: not in enabled drivers build config 00:03:20.865 power/amd_uncore: not in enabled drivers build config 00:03:20.865 00:03:20.865 00:03:20.865 Message: DPDK build config complete: 00:03:20.865 source path = "/home/vagrant/spdk_repo/dpdk" 00:03:20.865 build path = "/home/vagrant/spdk_repo/dpdk/build-tmp" 00:03:20.865 Build targets in project: 249 00:03:20.865 00:03:20.865 DPDK 24.11.0-rc4 00:03:20.865 00:03:20.865 User defined options 00:03:20.865 libdir : lib 00:03:20.865 prefix : /home/vagrant/spdk_repo/dpdk/build 00:03:20.865 c_args : -fPIC -g -fcommon -Werror -Wno-stringop-overflow 00:03:20.865 c_link_args : 00:03:20.865 enable_docs : false 00:03:20.865 enable_drivers: bus,bus/pci,bus/vdev,mempool/ring,net/i40e,net/i40e/base,power/acpi,power/amd_pstate,power/cppc,power/intel_pstate,power/intel_uncore,power/kvm_vm, 00:03:20.865 enable_kmods : false 00:03:21.433 machine : native 00:03:21.433 tests : false 00:03:21.433 00:03:21.433 Found ninja-1.11.1.git.kitware.jobserver-1 at /usr/local/bin/ninja 00:03:21.433 WARNING: Running the setup command as `meson [options]` instead of `meson setup [options]` is ambiguous and deprecated. 
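For reference, the "User defined options" summary above implies a DPDK configure step roughly of the following shape. This is a hedged reconstruction only: the actual command is issued by common/autobuild_common.sh and does not appear verbatim in this log (the meson warning above indicates it was even invoked as `meson [options]` rather than `meson setup [options]`). Option values are copied from the summary; the exact flag spelling and working directory are assumptions.

  # Approximate configure invocation, run from the DPDK source tree
  # (/home/vagrant/spdk_repo/dpdk), producing the build-tmp directory used below.
  meson setup build-tmp \
      --prefix=/home/vagrant/spdk_repo/dpdk/build \
      --libdir=lib \
      -Dc_args='-fPIC -g -fcommon -Werror -Wno-stringop-overflow' \
      -Denable_docs=false \
      -Denable_kmods=false \
      -Dmachine=native \
      -Dtests=false \
      -Denable_drivers=bus,bus/pci,bus/vdev,mempool/ring,net/i40e,net/i40e/base,power/acpi,power/amd_pstate,power/cppc,power/intel_pstate,power/intel_uncore,power/kvm_vm

  # The compile step that follows in the log is then driven out of that directory:
  ninja -C /home/vagrant/spdk_repo/dpdk/build-tmp -j10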
00:03:21.433 11:33:51 build_native_dpdk -- common/autobuild_common.sh@199 -- $ ninja -C /home/vagrant/spdk_repo/dpdk/build-tmp -j10 00:03:21.692 ninja: Entering directory `/home/vagrant/spdk_repo/dpdk/build-tmp' 00:03:21.692 [1/769] Compiling C object lib/librte_log.a.p/log_log_syslog.c.o 00:03:21.692 [2/769] Compiling C object lib/librte_log.a.p/log_log_timestamp.c.o 00:03:21.692 [3/769] Compiling C object lib/librte_kvargs.a.p/kvargs_rte_kvargs.c.o 00:03:21.692 [4/769] Compiling C object lib/librte_log.a.p/log_log_journal.c.o 00:03:21.692 [5/769] Compiling C object lib/librte_log.a.p/log_log_color.c.o 00:03:21.692 [6/769] Linking static target lib/librte_kvargs.a 00:03:21.951 [7/769] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry_data.c.o 00:03:21.951 [8/769] Compiling C object lib/librte_log.a.p/log_log.c.o 00:03:21.951 [9/769] Linking static target lib/librte_log.a 00:03:21.951 [10/769] Compiling C object lib/librte_argparse.a.p/argparse_rte_argparse.c.o 00:03:21.951 [11/769] Linking static target lib/librte_argparse.a 00:03:21.951 [12/769] Generating lib/kvargs.sym_chk with a custom command (wrapped by meson to capture output) 00:03:22.210 [13/769] Generating lib/argparse.sym_chk with a custom command (wrapped by meson to capture output) 00:03:22.210 [14/769] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_errno.c.o 00:03:22.210 [15/769] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry_legacy.c.o 00:03:22.210 [16/769] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_hexdump.c.o 00:03:22.469 [17/769] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_class.c.o 00:03:22.469 [18/769] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_debug.c.o 00:03:22.469 [19/769] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_config.c.o 00:03:22.469 [20/769] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_bus.c.o 00:03:22.469 [21/769] Generating lib/log.sym_chk with a custom command (wrapped by meson to capture output) 00:03:22.469 [22/769] Linking target lib/librte_log.so.25.0 00:03:22.740 [23/769] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_dev.c.o 00:03:22.740 [24/769] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_devargs.c.o 00:03:23.013 [25/769] Generating symbol file lib/librte_log.so.25.0.p/librte_log.so.25.0.symbols 00:03:23.013 [26/769] Linking target lib/librte_kvargs.so.25.0 00:03:23.013 [27/769] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry.c.o 00:03:23.013 [28/769] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_lcore_var.c.o 00:03:23.013 [29/769] Linking target lib/librte_argparse.so.25.0 00:03:23.013 [30/769] Linking static target lib/librte_telemetry.a 00:03:23.013 [31/769] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_mcfg.c.o 00:03:23.013 [32/769] Generating symbol file lib/librte_kvargs.so.25.0.p/librte_kvargs.so.25.0.symbols 00:03:23.013 [33/769] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_launch.c.o 00:03:23.013 [34/769] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_interrupts.c.o 00:03:23.013 [35/769] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_string_fns.c.o 00:03:23.272 [36/769] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_fbarray.c.o 00:03:23.272 [37/769] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_lcore.c.o 00:03:23.272 [38/769] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_uuid.c.o 00:03:23.272 [39/769] Compiling C 
object lib/librte_eal.a.p/eal_common_eal_common_memalloc.c.o 00:03:23.531 [40/769] Generating lib/telemetry.sym_chk with a custom command (wrapped by meson to capture output) 00:03:23.531 [41/769] Linking target lib/librte_telemetry.so.25.0 00:03:23.531 [42/769] Generating symbol file lib/librte_telemetry.so.25.0.p/librte_telemetry.so.25.0.symbols 00:03:23.790 [43/769] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_tailqs.c.o 00:03:23.790 [44/769] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_timer.c.o 00:03:23.790 [45/769] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memzone.c.o 00:03:23.790 [46/769] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_thread.c.o 00:03:23.790 [47/769] Compiling C object lib/librte_eal.a.p/eal_common_rte_reciprocal.c.o 00:03:23.790 [48/769] Compiling C object lib/librte_eal.a.p/eal_common_rte_version.c.o 00:03:23.790 [49/769] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memory.c.o 00:03:24.049 [50/769] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_cpuflags.c.o 00:03:24.049 [51/769] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_points.c.o 00:03:24.049 [52/769] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_hypervisor.c.o 00:03:24.308 [53/769] Compiling C object lib/librte_eal.a.p/eal_common_malloc_elem.c.o 00:03:24.308 [54/769] Compiling C object lib/librte_eal.a.p/eal_common_rte_bitset.c.o 00:03:24.308 [55/769] Compiling C object lib/librte_eal.a.p/eal_common_rte_random.c.o 00:03:24.308 [56/769] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_options.c.o 00:03:24.567 [57/769] Compiling C object lib/librte_eal.a.p/eal_common_malloc_heap.c.o 00:03:24.567 [58/769] Compiling C object lib/librte_eal.a.p/eal_common_rte_malloc.c.o 00:03:24.567 [59/769] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_dynmem.c.o 00:03:24.567 [60/769] Compiling C object lib/librte_eal.a.p/eal_unix_eal_debug.c.o 00:03:24.567 [61/769] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace.c.o 00:03:24.825 [62/769] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_proc.c.o 00:03:24.825 [63/769] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_ctf.c.o 00:03:25.091 [64/769] Compiling C object lib/librte_eal.a.p/eal_common_rte_service.c.o 00:03:25.091 [65/769] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_utils.c.o 00:03:25.091 [66/769] Compiling C object lib/librte_eal.a.p/eal_common_hotplug_mp.c.o 00:03:25.091 [67/769] Compiling C object lib/librte_eal.a.p/eal_common_rte_keepalive.c.o 00:03:25.091 [68/769] Compiling C object lib/librte_eal.a.p/eal_common_malloc_mp.c.o 00:03:25.091 [69/769] Compiling C object lib/librte_eal.a.p/eal_unix_eal_file.c.o 00:03:25.360 [70/769] Compiling C object lib/librte_eal.a.p/eal_linux_eal_cpuflags.c.o 00:03:25.360 [71/769] Compiling C object lib/librte_eal.a.p/eal_unix_eal_filesystem.c.o 00:03:25.360 [72/769] Compiling C object lib/librte_eal.a.p/eal_unix_eal_firmware.c.o 00:03:25.620 [73/769] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_memory.c.o 00:03:25.620 [74/769] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_timer.c.o 00:03:25.620 [75/769] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_thread.c.o 00:03:25.620 [76/769] Compiling C object lib/librte_eal.a.p/eal_unix_rte_thread.c.o 00:03:25.878 [77/769] Compiling C object lib/librte_eal.a.p/eal_linux_eal_alarm.c.o 00:03:25.878 [78/769] Compiling C object 
lib/librte_eal.a.p/eal_linux_eal_dev.c.o 00:03:25.878 [79/769] Compiling C object lib/librte_eal.a.p/eal_linux_eal_lcore.c.o 00:03:25.878 [80/769] Compiling C object lib/librte_eal.a.p/eal_linux_eal.c.o 00:03:25.878 [81/769] Compiling C object lib/librte_eal.a.p/eal_x86_rte_cpuflags.c.o 00:03:26.138 [82/769] Compiling C object lib/librte_eal.a.p/eal_linux_eal_hugepage_info.c.o 00:03:26.138 [83/769] Compiling C object lib/librte_eal.a.p/eal_x86_rte_hypervisor.c.o 00:03:26.138 [84/769] Compiling C object lib/librte_eal.a.p/eal_x86_rte_spinlock.c.o 00:03:26.138 [85/769] Compiling C object lib/librte_eal.a.p/eal_linux_eal_thread.c.o 00:03:26.397 [86/769] Compiling C object lib/librte_eal.a.p/eal_linux_eal_interrupts.c.o 00:03:26.397 [87/769] Compiling C object lib/librte_eal.a.p/eal_linux_eal_timer.c.o 00:03:26.397 [88/769] Compiling C object lib/librte_eal.a.p/eal_linux_eal_vfio_mp_sync.c.o 00:03:26.655 [89/769] Compiling C object lib/librte_eal.a.p/eal_x86_rte_cycles.c.o 00:03:26.655 [90/769] Compiling C object lib/librte_eal.a.p/eal_x86_rte_mmu.c.o 00:03:26.655 [91/769] Compiling C object lib/librte_eal.a.p/eal_linux_eal_memalloc.c.o 00:03:26.655 [92/769] Compiling C object lib/librte_eal.a.p/eal_linux_eal_memory.c.o 00:03:26.655 [93/769] Compiling C object lib/librte_eal.a.p/eal_x86_rte_power_intrinsics.c.o 00:03:27.223 [94/769] Compiling C object lib/librte_ring.a.p/ring_rte_ring.c.o 00:03:27.223 [95/769] Linking static target lib/librte_ring.a 00:03:27.223 [96/769] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool_ops.c.o 00:03:27.223 [97/769] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool_ops_default.c.o 00:03:27.223 [98/769] Compiling C object lib/librte_eal.a.p/eal_linux_eal_vfio.c.o 00:03:27.223 [99/769] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_ptype.c.o 00:03:27.223 [100/769] Linking static target lib/librte_eal.a 00:03:27.223 [101/769] Compiling C object lib/librte_mempool.a.p/mempool_mempool_trace_points.c.o 00:03:27.223 [102/769] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_pool_ops.c.o 00:03:27.481 [103/769] Generating lib/ring.sym_chk with a custom command (wrapped by meson to capture output) 00:03:27.739 [104/769] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool.c.o 00:03:27.739 [105/769] Linking static target lib/librte_mempool.a 00:03:27.739 [106/769] Compiling C object lib/librte_rcu.a.p/rcu_rte_rcu_qsbr.c.o 00:03:27.739 [107/769] Linking static target lib/librte_rcu.a 00:03:27.739 [108/769] Compiling C object lib/net/libnet_crc_avx512_lib.a.p/net_crc_avx512.c.o 00:03:27.739 [109/769] Linking static target lib/net/libnet_crc_avx512_lib.a 00:03:27.998 [110/769] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_dyn.c.o 00:03:27.998 [111/769] Compiling C object lib/librte_net.a.p/net_rte_ether.c.o 00:03:27.998 [112/769] Generating lib/rcu.sym_chk with a custom command (wrapped by meson to capture output) 00:03:27.998 [113/769] Compiling C object lib/librte_net.a.p/net_rte_net.c.o 00:03:28.257 [114/769] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf.c.o 00:03:28.257 [115/769] Compiling C object lib/librte_net.a.p/net_rte_arp.c.o 00:03:28.257 [116/769] Linking static target lib/librte_mbuf.a 00:03:28.257 [117/769] Compiling C object lib/librte_net.a.p/net_rte_net_crc.c.o 00:03:28.515 [118/769] Generating lib/mempool.sym_chk with a custom command (wrapped by meson to capture output) 00:03:28.515 [119/769] Compiling C object lib/librte_net.a.p/net_net_crc_sse.c.o 00:03:28.515 [120/769] Linking static target 
lib/librte_net.a 00:03:28.773 [121/769] Compiling C object lib/librte_meter.a.p/meter_rte_meter.c.o 00:03:28.773 [122/769] Linking static target lib/librte_meter.a 00:03:28.773 [123/769] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_profile.c.o 00:03:28.773 [124/769] Generating lib/mbuf.sym_chk with a custom command (wrapped by meson to capture output) 00:03:28.773 [125/769] Generating lib/net.sym_chk with a custom command (wrapped by meson to capture output) 00:03:28.773 [126/769] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_private.c.o 00:03:29.076 [127/769] Generating lib/meter.sym_chk with a custom command (wrapped by meson to capture output) 00:03:29.076 [128/769] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_class_eth.c.o 00:03:29.076 [129/769] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_driver.c.o 00:03:29.654 [130/769] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_telemetry.c.o 00:03:29.654 [131/769] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev_cman.c.o 00:03:29.912 [132/769] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_common.c.o 00:03:29.912 [133/769] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_trace_points.c.o 00:03:29.912 [134/769] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_mtr.c.o 00:03:30.171 [135/769] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev_telemetry.c.o 00:03:30.171 [136/769] Compiling C object lib/librte_pci.a.p/pci_rte_pci.c.o 00:03:30.171 [137/769] Linking static target lib/librte_pci.a 00:03:30.171 [138/769] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline.c.o 00:03:30.430 [139/769] Generating lib/pci.sym_chk with a custom command (wrapped by meson to capture output) 00:03:30.430 [140/769] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_tm.c.o 00:03:30.430 [141/769] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_cirbuf.c.o 00:03:30.430 [142/769] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8472.c.o 00:03:30.430 [143/769] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8079.c.o 00:03:30.430 [144/769] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_num.c.o 00:03:30.690 [145/769] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse.c.o 00:03:30.690 [146/769] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_portlist.c.o 00:03:30.690 [147/769] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_linux_ethtool.c.o 00:03:30.690 [148/769] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_socket.c.o 00:03:30.690 [149/769] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_string.c.o 00:03:30.690 [150/769] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_vt100.c.o 00:03:30.690 [151/769] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8636.c.o 00:03:30.690 [152/769] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_os_unix.c.o 00:03:30.948 [153/769] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_rdline.c.o 00:03:31.207 [154/769] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_etheraddr.c.o 00:03:31.207 [155/769] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_ipaddr.c.o 00:03:31.207 [156/769] Linking static target lib/librte_cmdline.a 00:03:31.207 [157/769] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_flow.c.o 00:03:31.466 [158/769] Compiling C object lib/librte_hash.a.p/hash_rte_hash_crc.c.o 00:03:31.466 [159/769] Compiling C object lib/librte_metrics.a.p/metrics_rte_metrics.c.o 00:03:31.466 [160/769] Compiling 
C object lib/librte_metrics.a.p/metrics_rte_metrics_telemetry.c.o 00:03:31.466 [161/769] Linking static target lib/librte_metrics.a 00:03:31.466 [162/769] Compiling C object lib/librte_hash.a.p/hash_rte_fbk_hash.c.o 00:03:31.724 [163/769] Compiling C object lib/librte_hash.a.p/hash_rte_thash_gfni.c.o 00:03:31.724 [164/769] Generating lib/metrics.sym_chk with a custom command (wrapped by meson to capture output) 00:03:31.983 [165/769] Generating lib/cmdline.sym_chk with a custom command (wrapped by meson to capture output) 00:03:32.244 [166/769] Compiling C object lib/librte_hash.a.p/hash_rte_thash_gf2_poly_math.c.o 00:03:32.244 [167/769] Compiling C object lib/librte_timer.a.p/timer_rte_timer.c.o 00:03:32.244 [168/769] Compiling C object lib/librte_hash.a.p/hash_rte_thash.c.o 00:03:32.244 [169/769] Linking static target lib/librte_timer.a 00:03:32.818 [170/769] Generating lib/timer.sym_chk with a custom command (wrapped by meson to capture output) 00:03:32.818 [171/769] Compiling C object lib/librte_acl.a.p/acl_acl_gen.c.o 00:03:33.077 [172/769] Compiling C object lib/librte_acl.a.p/acl_rte_acl.c.o 00:03:33.077 [173/769] Compiling C object lib/librte_acl.a.p/acl_acl_run_scalar.c.o 00:03:33.336 [174/769] Compiling C object lib/librte_acl.a.p/acl_tb_mem.c.o 00:03:33.596 [175/769] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev.c.o 00:03:33.596 [176/769] Linking static target lib/librte_ethdev.a 00:03:33.855 [177/769] Compiling C object lib/librte_hash.a.p/hash_rte_cuckoo_hash.c.o 00:03:33.855 [178/769] Compiling C object lib/librte_acl.a.p/acl_acl_bld.c.o 00:03:33.855 [179/769] Linking static target lib/librte_hash.a 00:03:33.855 [180/769] Compiling C object lib/librte_bitratestats.a.p/bitratestats_rte_bitrate.c.o 00:03:33.855 [181/769] Linking static target lib/librte_bitratestats.a 00:03:33.855 [182/769] Generating lib/eal.sym_chk with a custom command (wrapped by meson to capture output) 00:03:33.856 [183/769] Linking target lib/librte_eal.so.25.0 00:03:34.115 [184/769] Compiling C object lib/librte_bpf.a.p/bpf_bpf.c.o 00:03:34.115 [185/769] Generating lib/bitratestats.sym_chk with a custom command (wrapped by meson to capture output) 00:03:34.115 [186/769] Generating symbol file lib/librte_eal.so.25.0.p/librte_eal.so.25.0.symbols 00:03:34.115 [187/769] Linking target lib/librte_ring.so.25.0 00:03:34.115 [188/769] Compiling C object lib/librte_bbdev.a.p/bbdev_rte_bbdev.c.o 00:03:34.115 [189/769] Linking target lib/librte_meter.so.25.0 00:03:34.374 [190/769] Generating symbol file lib/librte_ring.so.25.0.p/librte_ring.so.25.0.symbols 00:03:34.374 [191/769] Linking target lib/librte_rcu.so.25.0 00:03:34.374 [192/769] Generating symbol file lib/librte_meter.so.25.0.p/librte_meter.so.25.0.symbols 00:03:34.374 [193/769] Compiling C object lib/acl/libavx2_tmp.a.p/acl_run_avx2.c.o 00:03:34.374 [194/769] Linking target lib/librte_mempool.so.25.0 00:03:34.374 [195/769] Linking target lib/librte_pci.so.25.0 00:03:34.374 [196/769] Compiling C object lib/librte_bpf.a.p/bpf_bpf_dump.c.o 00:03:34.374 [197/769] Generating symbol file lib/librte_rcu.so.25.0.p/librte_rcu.so.25.0.symbols 00:03:34.633 [198/769] Linking static target lib/acl/libavx2_tmp.a 00:03:34.634 [199/769] Linking static target lib/librte_bbdev.a 00:03:34.634 [200/769] Linking target lib/librte_timer.so.25.0 00:03:34.634 [201/769] Generating symbol file lib/librte_pci.so.25.0.p/librte_pci.so.25.0.symbols 00:03:34.634 [202/769] Generating symbol file lib/librte_mempool.so.25.0.p/librte_mempool.so.25.0.symbols 00:03:34.634 
[203/769] Generating lib/hash.sym_chk with a custom command (wrapped by meson to capture output) 00:03:34.634 [204/769] Linking target lib/librte_mbuf.so.25.0 00:03:34.634 [205/769] Generating symbol file lib/librte_timer.so.25.0.p/librte_timer.so.25.0.symbols 00:03:34.634 [206/769] Generating symbol file lib/librte_mbuf.so.25.0.p/librte_mbuf.so.25.0.symbols 00:03:34.892 [207/769] Compiling C object lib/librte_bpf.a.p/bpf_bpf_load.c.o 00:03:34.892 [208/769] Linking target lib/librte_net.so.25.0 00:03:34.892 [209/769] Generating symbol file lib/librte_net.so.25.0.p/librte_net.so.25.0.symbols 00:03:34.892 [210/769] Compiling C object lib/librte_bpf.a.p/bpf_bpf_exec.c.o 00:03:34.892 [211/769] Linking target lib/librte_cmdline.so.25.0 00:03:34.892 [212/769] Compiling C object lib/acl/libavx512_tmp.a.p/acl_run_avx512.c.o 00:03:34.892 [213/769] Linking static target lib/acl/libavx512_tmp.a 00:03:34.892 [214/769] Linking target lib/librte_hash.so.25.0 00:03:35.151 [215/769] Generating lib/bbdev.sym_chk with a custom command (wrapped by meson to capture output) 00:03:35.152 [216/769] Linking target lib/librte_bbdev.so.25.0 00:03:35.152 [217/769] Compiling C object lib/librte_acl.a.p/acl_acl_run_sse.c.o 00:03:35.152 [218/769] Linking static target lib/librte_acl.a 00:03:35.152 [219/769] Generating symbol file lib/librte_hash.so.25.0.p/librte_hash.so.25.0.symbols 00:03:35.152 [220/769] Compiling C object lib/librte_bpf.a.p/bpf_bpf_stub.c.o 00:03:35.410 [221/769] Compiling C object lib/librte_cfgfile.a.p/cfgfile_rte_cfgfile.c.o 00:03:35.410 [222/769] Linking static target lib/librte_cfgfile.a 00:03:35.410 [223/769] Generating lib/acl.sym_chk with a custom command (wrapped by meson to capture output) 00:03:35.410 [224/769] Linking target lib/librte_acl.so.25.0 00:03:35.669 [225/769] Generating symbol file lib/librte_acl.so.25.0.p/librte_acl.so.25.0.symbols 00:03:35.669 [226/769] Compiling C object lib/librte_bpf.a.p/bpf_bpf_pkt.c.o 00:03:35.669 [227/769] Compiling C object lib/librte_bpf.a.p/bpf_bpf_load_elf.c.o 00:03:35.669 [228/769] Generating lib/cfgfile.sym_chk with a custom command (wrapped by meson to capture output) 00:03:35.669 [229/769] Linking target lib/librte_cfgfile.so.25.0 00:03:35.669 [230/769] Compiling C object lib/librte_bpf.a.p/bpf_bpf_convert.c.o 00:03:35.927 [231/769] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_compressdev_pmd.c.o 00:03:35.927 [232/769] Compiling C object lib/librte_bpf.a.p/bpf_bpf_validate.c.o 00:03:35.927 [233/769] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_compressdev.c.o 00:03:36.185 [234/769] Compiling C object lib/librte_bpf.a.p/bpf_bpf_jit_x86.c.o 00:03:36.185 [235/769] Linking static target lib/librte_bpf.a 00:03:36.444 [236/769] Compiling C object lib/librte_cryptodev.a.p/cryptodev_cryptodev_pmd.c.o 00:03:36.444 [237/769] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_comp.c.o 00:03:36.444 [238/769] Linking static target lib/librte_compressdev.a 00:03:36.444 [239/769] Generating lib/bpf.sym_chk with a custom command (wrapped by meson to capture output) 00:03:36.444 [240/769] Compiling C object lib/librte_cryptodev.a.p/cryptodev_cryptodev_trace_points.c.o 00:03:36.704 [241/769] Compiling C object lib/librte_distributor.a.p/distributor_rte_distributor_match_sse.c.o 00:03:36.704 [242/769] Compiling C object lib/librte_distributor.a.p/distributor_rte_distributor_single.c.o 00:03:36.962 [243/769] Compiling C object lib/librte_distributor.a.p/distributor_rte_distributor.c.o 00:03:36.962 [244/769] Linking 
static target lib/librte_distributor.a 00:03:36.962 [245/769] Generating lib/compressdev.sym_chk with a custom command (wrapped by meson to capture output) 00:03:36.962 [246/769] Linking target lib/librte_compressdev.so.25.0 00:03:37.222 [247/769] Compiling C object lib/librte_dmadev.a.p/dmadev_rte_dmadev_trace_points.c.o 00:03:37.222 [248/769] Compiling C object lib/librte_dmadev.a.p/dmadev_rte_dmadev.c.o 00:03:37.222 [249/769] Linking static target lib/librte_dmadev.a 00:03:37.222 [250/769] Generating lib/distributor.sym_chk with a custom command (wrapped by meson to capture output) 00:03:37.222 [251/769] Compiling C object lib/librte_eventdev.a.p/eventdev_eventdev_private.c.o 00:03:37.222 [252/769] Linking target lib/librte_distributor.so.25.0 00:03:37.539 [253/769] Generating lib/dmadev.sym_chk with a custom command (wrapped by meson to capture output) 00:03:37.539 [254/769] Linking target lib/librte_dmadev.so.25.0 00:03:37.797 [255/769] Compiling C object lib/librte_eventdev.a.p/eventdev_eventdev_trace_points.c.o 00:03:37.797 [256/769] Generating symbol file lib/librte_dmadev.so.25.0.p/librte_dmadev.so.25.0.symbols 00:03:38.056 [257/769] Compiling C object lib/librte_eventdev.a.p/eventdev_rte_event_ring.c.o 00:03:38.315 [258/769] Compiling C object lib/librte_efd.a.p/efd_rte_efd.c.o 00:03:38.315 [259/769] Linking static target lib/librte_efd.a 00:03:38.315 [260/769] Compiling C object lib/librte_eventdev.a.p/eventdev_rte_event_crypto_adapter.c.o 00:03:38.315 [261/769] Compiling C object lib/librte_eventdev.a.p/eventdev_rte_event_dma_adapter.c.o 00:03:38.315 [262/769] Compiling C object lib/librte_cryptodev.a.p/cryptodev_rte_cryptodev.c.o 00:03:38.315 [263/769] Linking static target lib/librte_cryptodev.a 00:03:38.573 [264/769] Generating lib/efd.sym_chk with a custom command (wrapped by meson to capture output) 00:03:38.573 [265/769] Linking target lib/librte_efd.so.25.0 00:03:38.833 [266/769] Compiling C object lib/librte_eventdev.a.p/eventdev_rte_event_eth_tx_adapter.c.o 00:03:38.833 [267/769] Compiling C object lib/librte_dispatcher.a.p/dispatcher_rte_dispatcher.c.o 00:03:38.833 [268/769] Linking static target lib/librte_dispatcher.a 00:03:39.092 [269/769] Generating lib/ethdev.sym_chk with a custom command (wrapped by meson to capture output) 00:03:39.092 [270/769] Linking target lib/librte_ethdev.so.25.0 00:03:39.092 [271/769] Compiling C object lib/librte_eventdev.a.p/eventdev_rte_event_timer_adapter.c.o 00:03:39.351 [272/769] Generating symbol file lib/librte_ethdev.so.25.0.p/librte_ethdev.so.25.0.symbols 00:03:39.351 [273/769] Compiling C object lib/librte_gro.a.p/gro_rte_gro.c.o 00:03:39.351 [274/769] Linking target lib/librte_metrics.so.25.0 00:03:39.351 [275/769] Compiling C object lib/librte_gpudev.a.p/gpudev_gpudev.c.o 00:03:39.351 [276/769] Linking static target lib/librte_gpudev.a 00:03:39.351 [277/769] Linking target lib/librte_bpf.so.25.0 00:03:39.351 [278/769] Compiling C object lib/librte_gro.a.p/gro_gro_tcp4.c.o 00:03:39.351 [279/769] Generating symbol file lib/librte_metrics.so.25.0.p/librte_metrics.so.25.0.symbols 00:03:39.351 [280/769] Generating lib/dispatcher.sym_chk with a custom command (wrapped by meson to capture output) 00:03:39.351 [281/769] Generating symbol file lib/librte_bpf.so.25.0.p/librte_bpf.so.25.0.symbols 00:03:39.351 [282/769] Linking target lib/librte_bitratestats.so.25.0 00:03:39.610 [283/769] Compiling C object lib/librte_gro.a.p/gro_gro_tcp6.c.o 00:03:39.868 [284/769] Generating lib/cryptodev.sym_chk with a custom command (wrapped by meson 
to capture output) 00:03:39.868 [285/769] Compiling C object lib/librte_eventdev.a.p/eventdev_rte_eventdev.c.o 00:03:39.868 [286/769] Linking target lib/librte_cryptodev.so.25.0 00:03:39.868 [287/769] Generating symbol file lib/librte_cryptodev.so.25.0.p/librte_cryptodev.so.25.0.symbols 00:03:40.134 [288/769] Compiling C object lib/librte_gro.a.p/gro_gro_udp4.c.o 00:03:40.134 [289/769] Compiling C object lib/librte_gso.a.p/gso_gso_tcp4.c.o 00:03:40.134 [290/769] Generating lib/gpudev.sym_chk with a custom command (wrapped by meson to capture output) 00:03:40.134 [291/769] Linking target lib/librte_gpudev.so.25.0 00:03:40.134 [292/769] Compiling C object lib/librte_gro.a.p/gro_gro_vxlan_tcp4.c.o 00:03:40.393 [293/769] Compiling C object lib/librte_eventdev.a.p/eventdev_rte_event_eth_rx_adapter.c.o 00:03:40.393 [294/769] Compiling C object lib/librte_gro.a.p/gro_gro_vxlan_udp4.c.o 00:03:40.393 [295/769] Linking static target lib/librte_eventdev.a 00:03:40.393 [296/769] Linking static target lib/librte_gro.a 00:03:40.393 [297/769] Compiling C object lib/librte_gso.a.p/gso_gso_udp4.c.o 00:03:40.393 [298/769] Compiling C object lib/librte_gso.a.p/gso_gso_tunnel_tcp4.c.o 00:03:40.393 [299/769] Compiling C object lib/librte_gso.a.p/gso_gso_common.c.o 00:03:40.651 [300/769] Compiling C object lib/librte_gso.a.p/gso_gso_tunnel_udp4.c.o 00:03:40.651 [301/769] Generating lib/gro.sym_chk with a custom command (wrapped by meson to capture output) 00:03:40.651 [302/769] Linking target lib/librte_gro.so.25.0 00:03:40.651 [303/769] Compiling C object lib/librte_gso.a.p/gso_rte_gso.c.o 00:03:40.651 [304/769] Linking static target lib/librte_gso.a 00:03:40.910 [305/769] Compiling C object lib/librte_ip_frag.a.p/ip_frag_rte_ipv4_reassembly.c.o 00:03:40.910 [306/769] Generating lib/gso.sym_chk with a custom command (wrapped by meson to capture output) 00:03:40.910 [307/769] Linking target lib/librte_gso.so.25.0 00:03:41.168 [308/769] Compiling C object lib/librte_ip_frag.a.p/ip_frag_rte_ipv6_fragmentation.c.o 00:03:41.168 [309/769] Compiling C object lib/librte_ip_frag.a.p/ip_frag_rte_ipv6_reassembly.c.o 00:03:41.168 [310/769] Compiling C object lib/librte_ip_frag.a.p/ip_frag_rte_ipv4_fragmentation.c.o 00:03:41.168 [311/769] Compiling C object lib/librte_jobstats.a.p/jobstats_rte_jobstats.c.o 00:03:41.168 [312/769] Linking static target lib/librte_jobstats.a 00:03:41.427 [313/769] Compiling C object lib/librte_ip_frag.a.p/ip_frag_rte_ip_frag_common.c.o 00:03:41.427 [314/769] Compiling C object lib/librte_ip_frag.a.p/ip_frag_ip_frag_internal.c.o 00:03:41.427 [315/769] Linking static target lib/librte_ip_frag.a 00:03:41.427 [316/769] Compiling C object lib/librte_latencystats.a.p/latencystats_rte_latencystats.c.o 00:03:41.427 [317/769] Linking static target lib/librte_latencystats.a 00:03:41.427 [318/769] Generating lib/jobstats.sym_chk with a custom command (wrapped by meson to capture output) 00:03:41.685 [319/769] Linking target lib/librte_jobstats.so.25.0 00:03:41.685 [320/769] Generating lib/latencystats.sym_chk with a custom command (wrapped by meson to capture output) 00:03:41.685 [321/769] Generating lib/ip_frag.sym_chk with a custom command (wrapped by meson to capture output) 00:03:41.685 [322/769] Linking target lib/librte_latencystats.so.25.0 00:03:41.685 [323/769] Linking target lib/librte_ip_frag.so.25.0 00:03:41.685 [324/769] Compiling C object lib/member/libsketch_avx512_tmp.a.p/rte_member_sketch_avx512.c.o 00:03:41.685 [325/769] Compiling C object lib/librte_lpm.a.p/lpm_rte_lpm.c.o 00:03:41.685 
[326/769] Linking static target lib/member/libsketch_avx512_tmp.a 00:03:41.944 [327/769] Compiling C object lib/librte_member.a.p/member_rte_member.c.o 00:03:41.944 [328/769] Compiling C object lib/librte_power.a.p/power_power_common.c.o 00:03:41.944 [329/769] Generating symbol file lib/librte_ip_frag.so.25.0.p/librte_ip_frag.so.25.0.symbols 00:03:41.944 [330/769] Compiling C object lib/librte_power.a.p/power_rte_power_qos.c.o 00:03:42.203 [331/769] Compiling C object lib/librte_lpm.a.p/lpm_rte_lpm6.c.o 00:03:42.203 [332/769] Linking static target lib/librte_lpm.a 00:03:42.462 [333/769] Compiling C object lib/librte_power.a.p/power_rte_power_cpufreq.c.o 00:03:42.462 [334/769] Generating lib/eventdev.sym_chk with a custom command (wrapped by meson to capture output) 00:03:42.462 [335/769] Compiling C object lib/librte_member.a.p/member_rte_member_ht.c.o 00:03:42.462 [336/769] Linking target lib/librte_eventdev.so.25.0 00:03:42.462 [337/769] Compiling C object lib/librte_power.a.p/power_rte_power_uncore.c.o 00:03:42.462 [338/769] Generating lib/lpm.sym_chk with a custom command (wrapped by meson to capture output) 00:03:42.720 [339/769] Linking target lib/librte_lpm.so.25.0 00:03:42.720 [340/769] Generating symbol file lib/librte_eventdev.so.25.0.p/librte_eventdev.so.25.0.symbols 00:03:42.720 [341/769] Linking target lib/librte_dispatcher.so.25.0 00:03:42.720 [342/769] Compiling C object lib/librte_power.a.p/power_rte_power_pmd_mgmt.c.o 00:03:42.720 [343/769] Linking static target lib/librte_power.a 00:03:42.720 [344/769] Generating symbol file lib/librte_lpm.so.25.0.p/librte_lpm.so.25.0.symbols 00:03:42.720 [345/769] Compiling C object lib/librte_pcapng.a.p/pcapng_rte_pcapng.c.o 00:03:42.720 [346/769] Compiling C object lib/librte_member.a.p/member_rte_member_vbf.c.o 00:03:42.720 [347/769] Linking static target lib/librte_pcapng.a 00:03:42.979 [348/769] Compiling C object lib/librte_rawdev.a.p/rawdev_rte_rawdev.c.o 00:03:42.979 [349/769] Linking static target lib/librte_rawdev.a 00:03:42.979 [350/769] Generating lib/pcapng.sym_chk with a custom command (wrapped by meson to capture output) 00:03:43.238 [351/769] Compiling C object lib/librte_mldev.a.p/mldev_rte_mldev_pmd.c.o 00:03:43.238 [352/769] Linking target lib/librte_pcapng.so.25.0 00:03:43.238 [353/769] Compiling C object lib/librte_regexdev.a.p/regexdev_rte_regexdev.c.o 00:03:43.238 [354/769] Generating symbol file lib/librte_pcapng.so.25.0.p/librte_pcapng.so.25.0.symbols 00:03:43.238 [355/769] Linking static target lib/librte_regexdev.a 00:03:43.238 [356/769] Compiling C object lib/librte_mldev.a.p/mldev_mldev_utils.c.o 00:03:43.238 [357/769] Generating lib/rawdev.sym_chk with a custom command (wrapped by meson to capture output) 00:03:43.497 [358/769] Linking target lib/librte_rawdev.so.25.0 00:03:43.497 [359/769] Compiling C object lib/librte_mldev.a.p/mldev_rte_mldev.c.o 00:03:43.497 [360/769] Compiling C object lib/librte_mldev.a.p/mldev_mldev_utils_scalar_bfloat16.c.o 00:03:43.497 [361/769] Generating lib/power.sym_chk with a custom command (wrapped by meson to capture output) 00:03:43.756 [362/769] Linking target lib/librte_power.so.25.0 00:03:43.756 [363/769] Compiling C object lib/librte_sched.a.p/sched_rte_approx.c.o 00:03:43.756 [364/769] Compiling C object lib/librte_mldev.a.p/mldev_mldev_utils_scalar.c.o 00:03:43.756 [365/769] Linking static target lib/librte_mldev.a 00:03:43.756 [366/769] Compiling C object lib/librte_member.a.p/member_rte_member_sketch.c.o 00:03:43.756 [367/769] Linking static target 
lib/librte_member.a 00:03:43.756 [368/769] Generating symbol file lib/librte_power.so.25.0.p/librte_power.so.25.0.symbols 00:03:44.015 [369/769] Compiling C object lib/librte_rib.a.p/rib_rte_rib.c.o 00:03:44.015 [370/769] Generating lib/regexdev.sym_chk with a custom command (wrapped by meson to capture output) 00:03:44.015 [371/769] Generating lib/member.sym_chk with a custom command (wrapped by meson to capture output) 00:03:44.015 [372/769] Compiling C object lib/librte_sched.a.p/sched_rte_red.c.o 00:03:44.015 [373/769] Linking target lib/librte_regexdev.so.25.0 00:03:44.015 [374/769] Linking target lib/librte_member.so.25.0 00:03:44.274 [375/769] Compiling C object lib/librte_reorder.a.p/reorder_rte_reorder.c.o 00:03:44.274 [376/769] Linking static target lib/librte_reorder.a 00:03:44.274 [377/769] Compiling C object lib/librte_rib.a.p/rib_rte_rib6.c.o 00:03:44.274 [378/769] Linking static target lib/librte_rib.a 00:03:44.274 [379/769] Compiling C object lib/librte_sched.a.p/sched_rte_pie.c.o 00:03:44.533 [380/769] Generating lib/reorder.sym_chk with a custom command (wrapped by meson to capture output) 00:03:44.533 [381/769] Linking target lib/librte_reorder.so.25.0 00:03:44.533 [382/769] Compiling C object lib/librte_stack.a.p/stack_rte_stack_std.c.o 00:03:44.533 [383/769] Compiling C object lib/librte_stack.a.p/stack_rte_stack.c.o 00:03:44.792 [384/769] Compiling C object lib/librte_stack.a.p/stack_rte_stack_lf.c.o 00:03:44.792 [385/769] Linking static target lib/librte_stack.a 00:03:44.792 [386/769] Generating lib/rib.sym_chk with a custom command (wrapped by meson to capture output) 00:03:44.792 [387/769] Generating symbol file lib/librte_reorder.so.25.0.p/librte_reorder.so.25.0.symbols 00:03:44.792 [388/769] Compiling C object lib/librte_security.a.p/security_rte_security.c.o 00:03:44.792 [389/769] Linking static target lib/librte_security.a 00:03:44.792 [390/769] Linking target lib/librte_rib.so.25.0 00:03:44.792 [391/769] Generating symbol file lib/librte_rib.so.25.0.p/librte_rib.so.25.0.symbols 00:03:44.792 [392/769] Generating lib/stack.sym_chk with a custom command (wrapped by meson to capture output) 00:03:45.051 [393/769] Linking target lib/librte_stack.so.25.0 00:03:45.051 [394/769] Compiling C object lib/librte_vhost.a.p/vhost_fd_man.c.o 00:03:45.051 [395/769] Compiling C object lib/librte_vhost.a.p/vhost_iotlb.c.o 00:03:45.051 [396/769] Generating lib/security.sym_chk with a custom command (wrapped by meson to capture output) 00:03:45.308 [397/769] Linking target lib/librte_security.so.25.0 00:03:45.308 [398/769] Generating lib/mldev.sym_chk with a custom command (wrapped by meson to capture output) 00:03:45.308 [399/769] Linking target lib/librte_mldev.so.25.0 00:03:45.308 [400/769] Generating symbol file lib/librte_security.so.25.0.p/librte_security.so.25.0.symbols 00:03:45.308 [401/769] Compiling C object lib/librte_sched.a.p/sched_rte_sched.c.o 00:03:45.308 [402/769] Linking static target lib/librte_sched.a 00:03:45.566 [403/769] Compiling C object lib/librte_vhost.a.p/vhost_vdpa.c.o 00:03:45.566 [404/769] Compiling C object lib/librte_vhost.a.p/vhost_socket.c.o 00:03:45.841 [405/769] Generating lib/sched.sym_chk with a custom command (wrapped by meson to capture output) 00:03:45.841 [406/769] Linking target lib/librte_sched.so.25.0 00:03:45.841 [407/769] Generating symbol file lib/librte_sched.so.25.0.p/librte_sched.so.25.0.symbols 00:03:45.841 [408/769] Compiling C object lib/librte_vhost.a.p/vhost_virtio_net_ctrl.c.o 00:03:46.126 [409/769] Compiling C object 
lib/librte_vhost.a.p/vhost_vduse.c.o 00:03:46.384 [410/769] Compiling C object lib/librte_vhost.a.p/vhost_vhost.c.o 00:03:46.643 [411/769] Compiling C object lib/librte_ipsec.a.p/ipsec_ses.c.o 00:03:46.643 [412/769] Compiling C object lib/librte_ipsec.a.p/ipsec_sa.c.o 00:03:46.643 [413/769] Compiling C object lib/librte_vhost.a.p/vhost_vhost_user.c.o 00:03:46.901 [414/769] Compiling C object lib/librte_ipsec.a.p/ipsec_ipsec_telemetry.c.o 00:03:47.159 [415/769] Compiling C object lib/librte_pdcp.a.p/pdcp_pdcp_cnt.c.o 00:03:47.159 [416/769] Compiling C object lib/librte_ipsec.a.p/ipsec_esp_inb.c.o 00:03:47.159 [417/769] Compiling C object lib/librte_pdcp.a.p/pdcp_pdcp_crypto.c.o 00:03:47.159 [418/769] Compiling C object lib/librte_pdcp.a.p/pdcp_pdcp_ctrl_pdu.c.o 00:03:47.418 [419/769] Compiling C object lib/librte_ipsec.a.p/ipsec_ipsec_sad.c.o 00:03:47.418 [420/769] Compiling C object lib/librte_pdcp.a.p/pdcp_pdcp_reorder.c.o 00:03:47.676 [421/769] Compiling C object lib/librte_ipsec.a.p/ipsec_esp_outb.c.o 00:03:47.676 [422/769] Linking static target lib/librte_ipsec.a 00:03:47.935 [423/769] Compiling C object lib/fib/libtrie_avx512_tmp.a.p/trie_avx512.c.o 00:03:47.935 [424/769] Linking static target lib/fib/libtrie_avx512_tmp.a 00:03:47.935 [425/769] Compiling C object lib/librte_pdcp.a.p/pdcp_rte_pdcp.c.o 00:03:47.935 [426/769] Compiling C object lib/librte_fib.a.p/fib_rte_fib.c.o 00:03:47.935 [427/769] Compiling C object lib/librte_port.a.p/port_port_log.c.o 00:03:47.935 [428/769] Compiling C object lib/fib/libdir24_8_avx512_tmp.a.p/dir24_8_avx512.c.o 00:03:48.193 [429/769] Linking static target lib/fib/libdir24_8_avx512_tmp.a 00:03:48.193 [430/769] Generating lib/ipsec.sym_chk with a custom command (wrapped by meson to capture output) 00:03:48.193 [431/769] Compiling C object lib/librte_fib.a.p/fib_rte_fib6.c.o 00:03:48.193 [432/769] Linking target lib/librte_ipsec.so.25.0 00:03:48.193 [433/769] Generating symbol file lib/librte_ipsec.so.25.0.p/librte_ipsec.so.25.0.symbols 00:03:48.762 [434/769] Compiling C object lib/librte_pdcp.a.p/pdcp_pdcp_process.c.o 00:03:48.762 [435/769] Linking static target lib/librte_pdcp.a 00:03:49.019 [436/769] Compiling C object lib/librte_port.a.p/port_rte_port_frag.c.o 00:03:49.019 [437/769] Compiling C object lib/librte_port.a.p/port_rte_port_fd.c.o 00:03:49.019 [438/769] Compiling C object lib/librte_port.a.p/port_rte_port_ethdev.c.o 00:03:49.019 [439/769] Compiling C object lib/librte_port.a.p/port_rte_port_ras.c.o 00:03:49.019 [440/769] Compiling C object lib/librte_fib.a.p/fib_trie.c.o 00:03:49.278 [441/769] Generating lib/pdcp.sym_chk with a custom command (wrapped by meson to capture output) 00:03:49.278 [442/769] Linking target lib/librte_pdcp.so.25.0 00:03:49.278 [443/769] Compiling C object lib/librte_fib.a.p/fib_dir24_8.c.o 00:03:49.278 [444/769] Linking static target lib/librte_fib.a 00:03:49.538 [445/769] Generating lib/fib.sym_chk with a custom command (wrapped by meson to capture output) 00:03:49.797 [446/769] Compiling C object lib/librte_port.a.p/port_rte_port_sched.c.o 00:03:49.797 [447/769] Linking target lib/librte_fib.so.25.0 00:03:50.055 [448/769] Compiling C object lib/librte_port.a.p/port_rte_port_sym_crypto.c.o 00:03:50.055 [449/769] Compiling C object lib/librte_port.a.p/port_rte_swx_port_ethdev.c.o 00:03:50.055 [450/769] Compiling C object lib/librte_port.a.p/port_rte_port_eventdev.c.o 00:03:50.055 [451/769] Compiling C object lib/librte_port.a.p/port_rte_port_source_sink.c.o 00:03:50.314 [452/769] Compiling C object 
lib/librte_port.a.p/port_rte_swx_port_fd.c.o 00:03:50.314 [453/769] Compiling C object lib/librte_table.a.p/table_rte_swx_keycmp.c.o 00:03:50.314 [454/769] Compiling C object lib/librte_port.a.p/port_rte_port_ring.c.o 00:03:50.882 [455/769] Compiling C object lib/librte_port.a.p/port_rte_swx_port_source_sink.c.o 00:03:50.882 [456/769] Compiling C object lib/librte_port.a.p/port_rte_swx_port_ring.c.o 00:03:50.882 [457/769] Compiling C object lib/librte_table.a.p/table_rte_swx_table_selector.c.o 00:03:50.882 [458/769] Linking static target lib/librte_port.a 00:03:50.882 [459/769] Compiling C object lib/librte_table.a.p/table_rte_swx_table_learner.c.o 00:03:51.141 [460/769] Compiling C object lib/librte_table.a.p/table_rte_swx_table_em.c.o 00:03:51.141 [461/769] Compiling C object lib/librte_table.a.p/table_rte_swx_table_wm.c.o 00:03:51.399 [462/769] Compiling C object lib/librte_table.a.p/table_rte_table_acl.c.o 00:03:51.399 [463/769] Compiling C object lib/librte_pdump.a.p/pdump_rte_pdump.c.o 00:03:51.399 [464/769] Compiling C object lib/librte_table.a.p/table_rte_table_array.c.o 00:03:51.399 [465/769] Linking static target lib/librte_pdump.a 00:03:51.399 [466/769] Compiling C object lib/librte_vhost.a.p/vhost_vhost_crypto.c.o 00:03:51.399 [467/769] Generating lib/port.sym_chk with a custom command (wrapped by meson to capture output) 00:03:51.658 [468/769] Linking target lib/librte_port.so.25.0 00:03:51.658 [469/769] Generating symbol file lib/librte_port.so.25.0.p/librte_port.so.25.0.symbols 00:03:51.658 [470/769] Generating lib/pdump.sym_chk with a custom command (wrapped by meson to capture output) 00:03:51.658 [471/769] Compiling C object lib/librte_table.a.p/table_rte_table_hash_cuckoo.c.o 00:03:51.658 [472/769] Linking target lib/librte_pdump.so.25.0 00:03:51.658 [473/769] Compiling C object lib/librte_table.a.p/table_table_log.c.o 00:03:52.223 [474/769] Compiling C object lib/librte_table.a.p/table_rte_table_hash_key8.c.o 00:03:52.223 [475/769] Compiling C object lib/librte_table.a.p/table_rte_table_lpm.c.o 00:03:52.482 [476/769] Compiling C object lib/librte_table.a.p/table_rte_table_stub.c.o 00:03:52.482 [477/769] Compiling C object lib/librte_table.a.p/table_rte_table_hash_key16.c.o 00:03:52.482 [478/769] Compiling C object lib/librte_table.a.p/table_rte_table_lpm_ipv6.c.o 00:03:52.482 [479/769] Compiling C object lib/librte_table.a.p/table_rte_table_hash_ext.c.o 00:03:52.740 [480/769] Compiling C object lib/librte_table.a.p/table_rte_table_hash_key32.c.o 00:03:52.740 [481/769] Compiling C object lib/librte_table.a.p/table_rte_table_hash_lru.c.o 00:03:52.740 [482/769] Linking static target lib/librte_table.a 00:03:52.740 [483/769] Compiling C object lib/librte_pipeline.a.p/pipeline_rte_pipeline.c.o 00:03:52.998 [484/769] Compiling C object lib/librte_pipeline.a.p/pipeline_rte_port_in_action.c.o 00:03:53.300 [485/769] Generating lib/table.sym_chk with a custom command (wrapped by meson to capture output) 00:03:53.558 [486/769] Compiling C object lib/librte_graph.a.p/graph_node.c.o 00:03:53.558 [487/769] Linking target lib/librte_table.so.25.0 00:03:53.558 [488/769] Generating symbol file lib/librte_table.so.25.0.p/librte_table.so.25.0.symbols 00:03:53.558 [489/769] Compiling C object lib/librte_graph.a.p/graph_graph_ops.c.o 00:03:53.816 [490/769] Compiling C object lib/librte_graph.a.p/graph_graph.c.o 00:03:54.074 [491/769] Compiling C object lib/librte_pipeline.a.p/pipeline_rte_swx_ipsec.c.o 00:03:54.074 [492/769] Compiling C object lib/librte_graph.a.p/graph_graph_debug.c.o 
00:03:54.332 [493/769] Compiling C object lib/librte_pipeline.a.p/pipeline_rte_swx_ctl.c.o 00:03:54.332 [494/769] Compiling C object lib/librte_graph.a.p/graph_graph_populate.c.o 00:03:54.332 [495/769] Compiling C object lib/librte_graph.a.p/graph_graph_stats.c.o 00:03:54.594 [496/769] Compiling C object lib/librte_graph.a.p/graph_graph_pcap.c.o 00:03:54.594 [497/769] Compiling C object lib/librte_graph.a.p/graph_rte_graph_worker.c.o 00:03:55.168 [498/769] Compiling C object lib/librte_node.a.p/node_ethdev_ctrl.c.o 00:03:55.168 [499/769] Compiling C object lib/librte_graph.a.p/graph_rte_graph_model_mcore_dispatch.c.o 00:03:55.168 [500/769] Linking static target lib/librte_graph.a 00:03:55.168 [501/769] Compiling C object lib/librte_pipeline.a.p/pipeline_rte_swx_pipeline_spec.c.o 00:03:55.168 [502/769] Compiling C object lib/librte_node.a.p/node_ethdev_tx.c.o 00:03:55.168 [503/769] Compiling C object lib/librte_node.a.p/node_ethdev_rx.c.o 00:03:55.425 [504/769] Compiling C object lib/librte_node.a.p/node_ip4_local.c.o 00:03:55.991 [505/769] Compiling C object lib/librte_node.a.p/node_ip4_reassembly.c.o 00:03:55.991 [506/769] Generating lib/graph.sym_chk with a custom command (wrapped by meson to capture output) 00:03:55.991 [507/769] Compiling C object lib/librte_node.a.p/node_ip4_lookup.c.o 00:03:55.991 [508/769] Linking target lib/librte_graph.so.25.0 00:03:55.991 [509/769] Generating symbol file lib/librte_graph.so.25.0.p/librte_graph.so.25.0.symbols 00:03:56.250 [510/769] Compiling C object lib/librte_node.a.p/node_null.c.o 00:03:56.250 [511/769] Compiling C object lib/librte_node.a.p/node_ip6_lookup.c.o 00:03:56.509 [512/769] Compiling C object lib/librte_node.a.p/node_log.c.o 00:03:56.509 [513/769] Compiling C object lib/librte_node.a.p/node_kernel_rx.c.o 00:03:56.509 [514/769] Compiling C object lib/librte_node.a.p/node_ip4_rewrite.c.o 00:03:56.768 [515/769] Compiling C object lib/librte_node.a.p/node_kernel_tx.c.o 00:03:56.768 [516/769] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_params.c.o 00:03:56.768 [517/769] Compiling C object lib/librte_node.a.p/node_ip6_rewrite.c.o 00:03:57.027 [518/769] Compiling C object lib/librte_node.a.p/node_pkt_drop.c.o 00:03:57.285 [519/769] Compiling C object lib/librte_node.a.p/node_pkt_cls.c.o 00:03:57.285 [520/769] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_common_uio.c.o 00:03:57.285 [521/769] Compiling C object drivers/libtmp_rte_bus_vdev.a.p/bus_vdev_vdev_params.c.o 00:03:57.543 [522/769] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_common.c.o 00:03:57.543 [523/769] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci.c.o 00:03:57.543 [524/769] Compiling C object lib/librte_node.a.p/node_udp4_input.c.o 00:03:57.543 [525/769] Linking static target lib/librte_node.a 00:03:57.543 [526/769] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci_uio.c.o 00:03:57.802 [527/769] Generating lib/node.sym_chk with a custom command (wrapped by meson to capture output) 00:03:57.802 [528/769] Linking target lib/librte_node.so.25.0 00:03:57.802 [529/769] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci_vfio.c.o 00:03:58.060 [530/769] Linking static target drivers/libtmp_rte_bus_pci.a 00:03:58.060 [531/769] Compiling C object drivers/libtmp_rte_bus_vdev.a.p/bus_vdev_vdev.c.o 00:03:58.060 [532/769] Linking static target drivers/libtmp_rte_bus_vdev.a 00:03:58.060 [533/769] Generating drivers/rte_bus_pci.pmd.c with a custom command 00:03:58.060 [534/769] Compiling 
C object drivers/librte_bus_pci.a.p/meson-generated_.._rte_bus_pci.pmd.c.o 00:03:58.319 [535/769] Linking static target drivers/librte_bus_pci.a 00:03:58.319 [536/769] Generating drivers/rte_bus_vdev.pmd.c with a custom command 00:03:58.319 [537/769] Compiling C object drivers/librte_bus_pci.so.25.0.p/meson-generated_.._rte_bus_pci.pmd.c.o 00:03:58.319 [538/769] Compiling C object drivers/librte_bus_vdev.a.p/meson-generated_.._rte_bus_vdev.pmd.c.o 00:03:58.319 [539/769] Linking static target drivers/librte_bus_vdev.a 00:03:58.578 [540/769] Compiling C object drivers/net/i40e/base/libi40e_base.a.p/i40e_dcb.c.o 00:03:58.578 [541/769] Compiling C object drivers/librte_bus_vdev.so.25.0.p/meson-generated_.._rte_bus_vdev.pmd.c.o 00:03:58.578 [542/769] Compiling C object drivers/net/i40e/base/libi40e_base.a.p/i40e_diag.c.o 00:03:58.578 [543/769] Compiling C object drivers/net/i40e/base/libi40e_base.a.p/i40e_adminq.c.o 00:03:58.578 [544/769] Generating drivers/rte_bus_vdev.sym_chk with a custom command (wrapped by meson to capture output) 00:03:58.837 [545/769] Generating drivers/rte_bus_pci.sym_chk with a custom command (wrapped by meson to capture output) 00:03:58.837 [546/769] Linking target drivers/librte_bus_vdev.so.25.0 00:03:58.837 [547/769] Linking target drivers/librte_bus_pci.so.25.0 00:03:58.837 [548/769] Compiling C object drivers/libtmp_rte_mempool_ring.a.p/mempool_ring_rte_mempool_ring.c.o 00:03:58.837 [549/769] Linking static target drivers/libtmp_rte_mempool_ring.a 00:03:58.837 [550/769] Generating symbol file drivers/librte_bus_vdev.so.25.0.p/librte_bus_vdev.so.25.0.symbols 00:03:58.837 [551/769] Generating symbol file drivers/librte_bus_pci.so.25.0.p/librte_bus_pci.so.25.0.symbols 00:03:58.837 [552/769] Generating drivers/rte_mempool_ring.pmd.c with a custom command 00:03:58.837 [553/769] Compiling C object drivers/librte_mempool_ring.a.p/meson-generated_.._rte_mempool_ring.pmd.c.o 00:03:59.095 [554/769] Linking static target drivers/librte_mempool_ring.a 00:03:59.095 [555/769] Compiling C object drivers/librte_mempool_ring.so.25.0.p/meson-generated_.._rte_mempool_ring.pmd.c.o 00:03:59.095 [556/769] Linking target drivers/librte_mempool_ring.so.25.0 00:03:59.095 [557/769] Compiling C object drivers/net/i40e/base/libi40e_base.a.p/i40e_hmc.c.o 00:03:59.663 [558/769] Compiling C object drivers/net/i40e/base/libi40e_base.a.p/i40e_lan_hmc.c.o 00:03:59.663 [559/769] Compiling C object drivers/net/i40e/base/libi40e_base.a.p/i40e_common.c.o 00:03:59.922 [560/769] Compiling C object drivers/net/i40e/base/libi40e_base.a.p/i40e_nvm.c.o 00:03:59.922 [561/769] Linking static target drivers/net/i40e/base/libi40e_base.a 00:04:00.181 [562/769] Compiling C object lib/librte_pipeline.a.p/pipeline_rte_swx_pipeline.c.o 00:04:00.749 [563/769] Compiling C object drivers/libtmp_rte_net_i40e.a.p/net_i40e_i40e_pf.c.o 00:04:00.749 [564/769] Compiling C object drivers/net/i40e/libi40e_avx512_lib.a.p/i40e_rxtx_vec_avx512.c.o 00:04:00.749 [565/769] Linking static target drivers/net/i40e/libi40e_avx512_lib.a 00:04:01.316 [566/769] Compiling C object drivers/net/i40e/libi40e_avx2_lib.a.p/i40e_rxtx_vec_avx2.c.o 00:04:01.316 [567/769] Linking static target drivers/net/i40e/libi40e_avx2_lib.a 00:04:01.316 [568/769] Compiling C object drivers/libtmp_rte_net_i40e.a.p/net_i40e_i40e_tm.c.o 00:04:01.316 [569/769] Compiling C object drivers/libtmp_rte_net_i40e.a.p/net_i40e_i40e_fdir.c.o 00:04:01.574 [570/769] Compiling C object drivers/libtmp_rte_net_i40e.a.p/net_i40e_i40e_flow.c.o 00:04:01.574 [571/769] Compiling C 
object drivers/libtmp_rte_net_i40e.a.p/net_i40e_i40e_vf_representor.c.o 00:04:01.833 [572/769] Compiling C object drivers/libtmp_rte_net_i40e.a.p/net_i40e_i40e_hash.c.o 00:04:02.400 [573/769] Compiling C object drivers/libtmp_rte_net_i40e.a.p/net_i40e_i40e_recycle_mbufs_vec_common.c.o 00:04:02.400 [574/769] Compiling C object drivers/libtmp_rte_power_acpi.a.p/power_acpi_acpi_cpufreq.c.o 00:04:02.400 [575/769] Linking static target drivers/libtmp_rte_power_acpi.a 00:04:02.400 [576/769] Generating drivers/rte_power_acpi.pmd.c with a custom command 00:04:02.400 [577/769] Compiling C object drivers/librte_power_acpi.a.p/meson-generated_.._rte_power_acpi.pmd.c.o 00:04:02.400 [578/769] Compiling C object drivers/libtmp_rte_power_amd_pstate.a.p/power_amd_pstate_amd_pstate_cpufreq.c.o 00:04:02.400 [579/769] Linking static target drivers/librte_power_acpi.a 00:04:02.400 [580/769] Linking static target drivers/libtmp_rte_power_amd_pstate.a 00:04:02.400 [581/769] Compiling C object drivers/librte_power_acpi.so.25.0.p/meson-generated_.._rte_power_acpi.pmd.c.o 00:04:02.400 [582/769] Linking target drivers/librte_power_acpi.so.25.0 00:04:02.658 [583/769] Generating drivers/rte_power_amd_pstate.pmd.c with a custom command 00:04:02.658 [584/769] Compiling C object drivers/libtmp_rte_power_cppc.a.p/power_cppc_cppc_cpufreq.c.o 00:04:02.658 [585/769] Compiling C object drivers/librte_power_amd_pstate.a.p/meson-generated_.._rte_power_amd_pstate.pmd.c.o 00:04:02.658 [586/769] Compiling C object drivers/librte_power_amd_pstate.so.25.0.p/meson-generated_.._rte_power_amd_pstate.pmd.c.o 00:04:02.658 [587/769] Linking static target drivers/librte_power_amd_pstate.a 00:04:02.659 [588/769] Linking static target drivers/libtmp_rte_power_cppc.a 00:04:02.659 [589/769] Linking target drivers/librte_power_amd_pstate.so.25.0 00:04:02.918 [590/769] Compiling C object drivers/libtmp_rte_power_kvm_vm.a.p/power_kvm_vm_guest_channel.c.o 00:04:02.918 [591/769] Generating drivers/rte_power_cppc.pmd.c with a custom command 00:04:02.918 [592/769] Compiling C object drivers/librte_power_cppc.a.p/meson-generated_.._rte_power_cppc.pmd.c.o 00:04:02.918 [593/769] Linking static target drivers/librte_power_cppc.a 00:04:02.918 [594/769] Compiling C object drivers/librte_power_cppc.so.25.0.p/meson-generated_.._rte_power_cppc.pmd.c.o 00:04:02.918 [595/769] Compiling C object drivers/libtmp_rte_power_kvm_vm.a.p/power_kvm_vm_kvm_vm.c.o 00:04:02.918 [596/769] Linking static target drivers/libtmp_rte_power_kvm_vm.a 00:04:02.918 [597/769] Linking target drivers/librte_power_cppc.so.25.0 00:04:03.177 [598/769] Generating drivers/rte_power_kvm_vm.pmd.c with a custom command 00:04:03.177 [599/769] Compiling C object drivers/librte_power_kvm_vm.a.p/meson-generated_.._rte_power_kvm_vm.pmd.c.o 00:04:03.177 [600/769] Linking static target drivers/librte_power_kvm_vm.a 00:04:03.177 [601/769] Compiling C object drivers/libtmp_rte_net_i40e.a.p/net_i40e_rte_pmd_i40e.c.o 00:04:03.177 [602/769] Compiling C object drivers/libtmp_rte_net_i40e.a.p/net_i40e_i40e_rxtx.c.o 00:04:03.177 [603/769] Compiling C object drivers/librte_power_kvm_vm.so.25.0.p/meson-generated_.._rte_power_kvm_vm.pmd.c.o 00:04:03.177 [604/769] Compiling C object drivers/libtmp_rte_power_intel_pstate.a.p/power_intel_pstate_intel_pstate_cpufreq.c.o 00:04:03.177 [605/769] Linking static target drivers/libtmp_rte_power_intel_pstate.a 00:04:03.177 [606/769] Compiling C object drivers/libtmp_rte_power_intel_uncore.a.p/power_intel_uncore_intel_uncore.c.o 00:04:03.177 [607/769] Compiling C object 
drivers/libtmp_rte_net_i40e.a.p/net_i40e_i40e_rxtx_vec_sse.c.o 00:04:03.177 [608/769] Linking static target drivers/libtmp_rte_power_intel_uncore.a 00:04:03.460 [609/769] Generating drivers/rte_power_kvm_vm.sym_chk with a custom command (wrapped by meson to capture output) 00:04:03.460 [610/769] Generating app/graph/commands_hdr with a custom command (wrapped by meson to capture output) 00:04:03.460 [611/769] Linking target drivers/librte_power_kvm_vm.so.25.0 00:04:03.460 [612/769] Generating drivers/rte_power_intel_pstate.pmd.c with a custom command 00:04:03.460 [613/769] Compiling C object drivers/librte_power_intel_pstate.a.p/meson-generated_.._rte_power_intel_pstate.pmd.c.o 00:04:03.460 [614/769] Generating drivers/rte_power_intel_uncore.pmd.c with a custom command 00:04:03.460 [615/769] Linking static target drivers/librte_power_intel_pstate.a 00:04:03.460 [616/769] Compiling C object lib/librte_vhost.a.p/vhost_virtio_net.c.o 00:04:03.460 [617/769] Compiling C object drivers/librte_power_intel_pstate.so.25.0.p/meson-generated_.._rte_power_intel_pstate.pmd.c.o 00:04:03.460 [618/769] Compiling C object drivers/librte_power_intel_uncore.a.p/meson-generated_.._rte_power_intel_uncore.pmd.c.o 00:04:03.460 [619/769] Linking static target drivers/librte_power_intel_uncore.a 00:04:03.460 [620/769] Compiling C object drivers/librte_power_intel_uncore.so.25.0.p/meson-generated_.._rte_power_intel_uncore.pmd.c.o 00:04:03.460 [621/769] Linking static target lib/librte_vhost.a 00:04:03.460 [622/769] Linking target drivers/librte_power_intel_pstate.so.25.0 00:04:03.718 [623/769] Linking target drivers/librte_power_intel_uncore.so.25.0 00:04:03.976 [624/769] Compiling C object app/dpdk-dumpcap.p/dumpcap_main.c.o 00:04:03.976 [625/769] Compiling C object app/dpdk-graph.p/graph_cli.c.o 00:04:04.234 [626/769] Compiling C object app/dpdk-graph.p/graph_conn.c.o 00:04:04.234 [627/769] Compiling C object drivers/libtmp_rte_net_i40e.a.p/net_i40e_i40e_ethdev.c.o 00:04:04.234 [628/769] Compiling C object app/dpdk-graph.p/graph_ethdev_rx.c.o 00:04:04.234 [629/769] Linking static target drivers/libtmp_rte_net_i40e.a 00:04:04.494 [630/769] Compiling C object app/dpdk-graph.p/graph_ethdev.c.o 00:04:04.753 [631/769] Generating drivers/rte_net_i40e.pmd.c with a custom command 00:04:04.753 [632/769] Compiling C object drivers/librte_net_i40e.a.p/meson-generated_.._rte_net_i40e.pmd.c.o 00:04:04.753 [633/769] Generating lib/vhost.sym_chk with a custom command (wrapped by meson to capture output) 00:04:04.753 [634/769] Linking static target drivers/librte_net_i40e.a 00:04:04.753 [635/769] Linking target lib/librte_vhost.so.25.0 00:04:04.753 [636/769] Compiling C object app/dpdk-proc-info.p/proc-info_main.c.o 00:04:04.753 [637/769] Compiling C object app/dpdk-pdump.p/pdump_main.c.o 00:04:04.753 [638/769] Compiling C object drivers/librte_net_i40e.so.25.0.p/meson-generated_.._rte_net_i40e.pmd.c.o 00:04:04.753 [639/769] Compiling C object app/dpdk-graph.p/graph_ip4_route.c.o 00:04:04.753 [640/769] Compiling C object app/dpdk-graph.p/graph_graph.c.o 00:04:04.753 [641/769] Compiling C object app/dpdk-graph.p/graph_ip6_route.c.o 00:04:05.011 [642/769] Compiling C object app/dpdk-graph.p/graph_l2fwd.c.o 00:04:05.281 [643/769] Compiling C object app/dpdk-graph.p/graph_l3fwd.c.o 00:04:05.281 [644/769] Generating drivers/rte_net_i40e.sym_chk with a custom command (wrapped by meson to capture output) 00:04:05.544 [645/769] Compiling C object app/dpdk-graph.p/graph_main.c.o 00:04:05.544 [646/769] Linking target 
drivers/librte_net_i40e.so.25.0 00:04:05.544 [647/769] Compiling C object app/dpdk-graph.p/graph_utils.c.o 00:04:05.544 [648/769] Compiling C object app/dpdk-test-bbdev.p/test-bbdev_main.c.o 00:04:05.544 [649/769] Compiling C object app/dpdk-test-cmdline.p/test-cmdline_commands.c.o 00:04:05.544 [650/769] Compiling C object app/dpdk-graph.p/graph_mempool.c.o 00:04:05.544 [651/769] Compiling C object app/dpdk-test-cmdline.p/test-cmdline_cmdline_test.c.o 00:04:05.544 [652/769] Compiling C object app/dpdk-graph.p/graph_neigh.c.o 00:04:05.803 [653/769] Compiling C object app/dpdk-test-acl.p/test-acl_main.c.o 00:04:06.369 [654/769] Compiling C object app/dpdk-test-compress-perf.p/test-compress-perf_comp_perf_options_parse.c.o 00:04:06.369 [655/769] Compiling C object app/dpdk-test-bbdev.p/test-bbdev_test_bbdev.c.o 00:04:06.628 [656/769] Compiling C object app/dpdk-test-compress-perf.p/test-compress-perf_comp_perf_test_throughput.c.o 00:04:06.628 [657/769] Compiling C object app/dpdk-test-compress-perf.p/test-compress-perf_main.c.o 00:04:06.628 [658/769] Compiling C object app/dpdk-test-bbdev.p/test-bbdev_test_bbdev_vector.c.o 00:04:06.887 [659/769] Compiling C object app/dpdk-test-compress-perf.p/test-compress-perf_comp_perf_test_verify.c.o 00:04:06.887 [660/769] Compiling C object app/dpdk-test-compress-perf.p/test-compress-perf_comp_perf_test_cyclecount.c.o 00:04:06.887 [661/769] Compiling C object app/dpdk-test-compress-perf.p/test-compress-perf_comp_perf_test_common.c.o 00:04:07.454 [662/769] Compiling C object app/dpdk-test-crypto-perf.p/test-crypto-perf_cperf_test_common.c.o 00:04:07.454 [663/769] Compiling C object app/dpdk-test-crypto-perf.p/test-crypto-perf_cperf_ops.c.o 00:04:07.454 [664/769] Compiling C object app/dpdk-test-crypto-perf.p/test-crypto-perf_cperf_options_parsing.c.o 00:04:07.714 [665/769] Compiling C object app/dpdk-test-crypto-perf.p/test-crypto-perf_cperf_test_vector_parsing.c.o 00:04:07.714 [666/769] Compiling C object lib/librte_pipeline.a.p/pipeline_rte_table_action.c.o 00:04:07.714 [667/769] Linking static target lib/librte_pipeline.a 00:04:07.714 [668/769] Compiling C object app/dpdk-test-crypto-perf.p/test-crypto-perf_cperf_test_vectors.c.o 00:04:07.973 [669/769] Compiling C object app/dpdk-test-crypto-perf.p/test-crypto-perf_cperf_test_pmd_cyclecount.c.o 00:04:07.973 [670/769] Compiling C object app/dpdk-test-crypto-perf.p/test-crypto-perf_cperf_test_latency.c.o 00:04:07.973 [671/769] Compiling C object app/dpdk-test-crypto-perf.p/test-crypto-perf_cperf_test_throughput.c.o 00:04:08.232 [672/769] Compiling C object app/dpdk-test-dma-perf.p/test-dma-perf_main.c.o 00:04:08.232 [673/769] Linking target app/dpdk-dumpcap 00:04:08.232 [674/769] Compiling C object app/dpdk-test-crypto-perf.p/test-crypto-perf_main.c.o 00:04:08.232 [675/769] Compiling C object app/dpdk-test-crypto-perf.p/test-crypto-perf_cperf_test_verify.c.o 00:04:08.232 [676/769] Linking target app/dpdk-graph 00:04:08.491 [677/769] Linking target app/dpdk-pdump 00:04:08.491 [678/769] Linking target app/dpdk-proc-info 00:04:08.491 [679/769] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_evt_test.c.o 00:04:08.749 [680/769] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_parser.c.o 00:04:08.749 [681/769] Linking target app/dpdk-test-acl 00:04:08.749 [682/769] Linking target app/dpdk-test-cmdline 00:04:08.749 [683/769] Linking target app/dpdk-test-compress-perf 00:04:08.749 [684/769] Linking target app/dpdk-test-crypto-perf 00:04:09.008 [685/769] Compiling C object 
app/dpdk-test-eventdev.p/test-eventdev_evt_main.c.o 00:04:09.266 [686/769] Compiling C object app/dpdk-test-dma-perf.p/test-dma-perf_benchmark.c.o 00:04:09.266 [687/769] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_evt_options.c.o 00:04:09.524 [688/769] Linking target app/dpdk-test-dma-perf 00:04:09.783 [689/769] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_test_order_atq.c.o 00:04:09.783 [690/769] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_test_order_common.c.o 00:04:09.783 [691/769] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_test_order_queue.c.o 00:04:10.729 [692/769] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_test_pipeline_atq.c.o 00:04:10.729 [693/769] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_test_perf_atq.c.o 00:04:10.729 [694/769] Generating lib/pipeline.sym_chk with a custom command (wrapped by meson to capture output) 00:04:10.729 [695/769] Linking target lib/librte_pipeline.so.25.0 00:04:10.729 [696/769] Compiling C object app/dpdk-test-flow-perf.p/test-flow-perf_actions_gen.c.o 00:04:10.988 [697/769] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_test_pipeline_common.c.o 00:04:10.988 [698/769] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_test_perf_queue.c.o 00:04:10.988 [699/769] Compiling C object app/dpdk-test-fib.p/test-fib_main.c.o 00:04:10.988 [700/769] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_test_pipeline_queue.c.o 00:04:10.988 [701/769] Compiling C object app/dpdk-test-mldev.p/test-mldev_ml_test.c.o 00:04:11.247 [702/769] Compiling C object app/dpdk-test-mldev.p/test-mldev_parser.c.o 00:04:11.247 [703/769] Compiling C object app/dpdk-test-flow-perf.p/test-flow-perf_flow_gen.c.o 00:04:11.247 [704/769] Compiling C object app/dpdk-test-flow-perf.p/test-flow-perf_items_gen.c.o 00:04:11.507 [705/769] Linking target app/dpdk-test-fib 00:04:11.507 [706/769] Compiling C object app/dpdk-test-gpudev.p/test-gpudev_main.c.o 00:04:11.507 [707/769] Compiling C object app/dpdk-test-mldev.p/test-mldev_ml_main.c.o 00:04:11.766 [708/769] Compiling C object app/dpdk-test-mldev.p/test-mldev_ml_options.c.o 00:04:12.025 [709/769] Linking target app/dpdk-test-gpudev 00:04:12.025 [710/769] Compiling C object app/dpdk-test-mldev.p/test-mldev_test_model_common.c.o 00:04:12.025 [711/769] Compiling C object app/dpdk-test-mldev.p/test-mldev_test_device_ops.c.o 00:04:12.025 [712/769] Compiling C object app/dpdk-test-mldev.p/test-mldev_test_common.c.o 00:04:12.025 [713/769] Compiling C object app/dpdk-test-mldev.p/test-mldev_test_model_ops.c.o 00:04:12.284 [714/769] Compiling C object app/dpdk-test-flow-perf.p/test-flow-perf_main.c.o 00:04:12.284 [715/769] Compiling C object app/dpdk-test-mldev.p/test-mldev_test_inference_ordered.c.o 00:04:12.543 [716/769] Compiling C object app/dpdk-test-mldev.p/test-mldev_test_stats.c.o 00:04:12.543 [717/769] Compiling C object app/dpdk-test-mldev.p/test-mldev_test_inference_interleave.c.o 00:04:12.543 [718/769] Compiling C object app/dpdk-test-pipeline.p/test-pipeline_config.c.o 00:04:12.802 [719/769] Compiling C object app/dpdk-test-bbdev.p/test-bbdev_test_bbdev_perf.c.o 00:04:12.802 [720/769] Compiling C object app/dpdk-test-pipeline.p/test-pipeline_init.c.o 00:04:12.802 [721/769] Linking target app/dpdk-test-flow-perf 00:04:12.802 [722/769] Compiling C object app/dpdk-test-pipeline.p/test-pipeline_main.c.o 00:04:13.060 [723/769] Compiling C object app/dpdk-test-pipeline.p/test-pipeline_pipeline_acl.c.o 00:04:13.061 [724/769] Compiling C 
object app/dpdk-test-eventdev.p/test-eventdev_test_perf_common.c.o 00:04:13.319 [725/769] Linking target app/dpdk-test-bbdev 00:04:13.319 [726/769] Compiling C object app/dpdk-test-pipeline.p/test-pipeline_pipeline_lpm.c.o 00:04:13.319 [727/769] Compiling C object app/dpdk-test-pipeline.p/test-pipeline_pipeline_lpm_ipv6.c.o 00:04:13.319 [728/769] Compiling C object app/dpdk-test-pipeline.p/test-pipeline_pipeline_hash.c.o 00:04:13.319 [729/769] Compiling C object app/dpdk-test-pipeline.p/test-pipeline_pipeline_stub.c.o 00:04:13.577 [730/769] Linking target app/dpdk-test-eventdev 00:04:13.577 [731/769] Compiling C object app/dpdk-testpmd.p/test-pmd_5tswap.c.o 00:04:13.577 [732/769] Compiling C object app/dpdk-test-mldev.p/test-mldev_test_inference_common.c.o 00:04:13.835 [733/769] Compiling C object app/dpdk-test-pipeline.p/test-pipeline_runtime.c.o 00:04:14.093 [734/769] Compiling C object app/dpdk-testpmd.p/test-pmd_cmdline_cman.c.o 00:04:14.093 [735/769] Compiling C object app/dpdk-testpmd.p/test-pmd_cmd_flex_item.c.o 00:04:14.093 [736/769] Linking target app/dpdk-test-mldev 00:04:14.352 [737/769] Compiling C object app/dpdk-testpmd.p/test-pmd_cmdline_mtr.c.o 00:04:14.352 [738/769] Linking target app/dpdk-test-pipeline 00:04:14.610 [739/769] Compiling C object app/dpdk-testpmd.p/test-pmd_cmdline_tm.c.o 00:04:14.869 [740/769] Compiling C object app/dpdk-testpmd.p/test-pmd_flowgen.c.o 00:04:14.869 [741/769] Compiling C object app/dpdk-testpmd.p/test-pmd_hairpin.c.o 00:04:15.128 [742/769] Compiling C object app/dpdk-testpmd.p/test-pmd_icmpecho.c.o 00:04:15.128 [743/769] Compiling C object app/dpdk-testpmd.p/test-pmd_iofwd.c.o 00:04:15.128 [744/769] Compiling C object app/dpdk-testpmd.p/test-pmd_ieee1588fwd.c.o 00:04:15.128 [745/769] Compiling C object app/dpdk-testpmd.p/test-pmd_macfwd.c.o 00:04:15.389 [746/769] Compiling C object app/dpdk-testpmd.p/test-pmd_csumonly.c.o 00:04:15.648 [747/769] Compiling C object app/dpdk-testpmd.p/test-pmd_macswap.c.o 00:04:15.648 [748/769] Compiling C object app/dpdk-testpmd.p/test-pmd_cmdline.c.o 00:04:15.906 [749/769] Compiling C object app/dpdk-testpmd.p/test-pmd_recycle_mbufs.c.o 00:04:15.906 [750/769] Compiling C object app/dpdk-testpmd.p/test-pmd_shared_rxq_fwd.c.o 00:04:15.906 [751/769] Compiling C object app/dpdk-testpmd.p/test-pmd_rxonly.c.o 00:04:16.166 [752/769] Compiling C object app/dpdk-testpmd.p/test-pmd_parameters.c.o 00:04:16.736 [753/769] Compiling C object app/dpdk-testpmd.p/test-pmd_bpf_cmd.c.o 00:04:16.736 [754/769] Compiling C object app/dpdk-testpmd.p/test-pmd_util.c.o 00:04:16.995 [755/769] Compiling C object app/dpdk-testpmd.p/.._drivers_net_i40e_i40e_testpmd.c.o 00:04:17.254 [756/769] Compiling C object app/dpdk-test-regex.p/test-regex_main.c.o 00:04:17.254 [757/769] Compiling C object app/dpdk-test-sad.p/test-sad_main.c.o 00:04:17.254 [758/769] Compiling C object app/dpdk-testpmd.p/test-pmd_txonly.c.o 00:04:17.254 [759/769] Compiling C object app/dpdk-testpmd.p/test-pmd_config.c.o 00:04:17.254 [760/769] Compiling C object app/dpdk-test-security-perf.p/test-security-perf_test_security_perf.c.o 00:04:17.513 [761/769] Compiling C object app/dpdk-testpmd.p/test-pmd_noisy_vnf.c.o 00:04:17.513 [762/769] Compiling C object app/dpdk-testpmd.p/test-pmd_testpmd.c.o 00:04:17.513 [763/769] Compiling C object app/dpdk-testpmd.p/test-pmd_cmdline_flow.c.o 00:04:17.513 [764/769] Linking target app/dpdk-test-regex 00:04:17.513 [765/769] Linking target app/dpdk-test-sad 00:04:17.798 [766/769] Compiling C object 
app/dpdk-test-security-perf.p/test_test_security_proto.c.o 00:04:18.078 [767/769] Compiling C object app/dpdk-test-security-perf.p/test_test_cryptodev_security_ipsec.c.o 00:04:18.338 [768/769] Linking target app/dpdk-testpmd 00:04:18.338 [769/769] Linking target app/dpdk-test-security-perf 00:04:18.597 11:34:48 build_native_dpdk -- common/autobuild_common.sh@201 -- $ uname -s 00:04:18.597 11:34:48 build_native_dpdk -- common/autobuild_common.sh@201 -- $ [[ Linux == \F\r\e\e\B\S\D ]] 00:04:18.597 11:34:48 build_native_dpdk -- common/autobuild_common.sh@214 -- $ ninja -C /home/vagrant/spdk_repo/dpdk/build-tmp -j10 install 00:04:18.597 ninja: Entering directory `/home/vagrant/spdk_repo/dpdk/build-tmp' 00:04:18.597 [0/1] Installing files. 00:04:18.857 Installing subdir /home/vagrant/spdk_repo/dpdk/usertools/telemetry-endpoints to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/telemetry-endpoints 00:04:18.857 Installing /home/vagrant/spdk_repo/dpdk/usertools/telemetry-endpoints/counters.py to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/telemetry-endpoints 00:04:18.857 Installing /home/vagrant/spdk_repo/dpdk/usertools/telemetry-endpoints/cpu.py to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/telemetry-endpoints 00:04:18.857 Installing /home/vagrant/spdk_repo/dpdk/usertools/telemetry-endpoints/memory.py to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/telemetry-endpoints 00:04:18.857 Installing subdir /home/vagrant/spdk_repo/dpdk/examples to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples 00:04:18.857 Installing /home/vagrant/spdk_repo/dpdk/examples/bbdev_app/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/bbdev_app 00:04:18.857 Installing /home/vagrant/spdk_repo/dpdk/examples/bbdev_app/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/bbdev_app 00:04:18.857 Installing /home/vagrant/spdk_repo/dpdk/examples/bond/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/bond 00:04:18.857 Installing /home/vagrant/spdk_repo/dpdk/examples/bond/commands.list to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/bond 00:04:18.857 Installing /home/vagrant/spdk_repo/dpdk/examples/bond/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/bond 00:04:18.857 Installing /home/vagrant/spdk_repo/dpdk/examples/bpf/README to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/bpf 00:04:18.857 Installing /home/vagrant/spdk_repo/dpdk/examples/bpf/dummy.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/bpf 00:04:18.857 Installing /home/vagrant/spdk_repo/dpdk/examples/bpf/t1.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/bpf 00:04:18.857 Installing /home/vagrant/spdk_repo/dpdk/examples/bpf/t2.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/bpf 00:04:18.857 Installing /home/vagrant/spdk_repo/dpdk/examples/bpf/t3.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/bpf 00:04:18.857 Installing /home/vagrant/spdk_repo/dpdk/examples/cmdline/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/cmdline 00:04:18.857 Installing /home/vagrant/spdk_repo/dpdk/examples/cmdline/commands.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/cmdline 00:04:18.858 Installing /home/vagrant/spdk_repo/dpdk/examples/cmdline/commands.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/cmdline 00:04:18.858 Installing /home/vagrant/spdk_repo/dpdk/examples/cmdline/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/cmdline 00:04:18.858 Installing 
/home/vagrant/spdk_repo/dpdk/examples/cmdline/parse_obj_list.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/cmdline 00:04:18.858 Installing /home/vagrant/spdk_repo/dpdk/examples/cmdline/parse_obj_list.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/cmdline 00:04:18.858 Installing /home/vagrant/spdk_repo/dpdk/examples/common/pkt_group.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/common 00:04:18.858 Installing /home/vagrant/spdk_repo/dpdk/examples/common/altivec/port_group.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/common/altivec 00:04:18.858 Installing /home/vagrant/spdk_repo/dpdk/examples/common/neon/port_group.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/common/neon 00:04:18.858 Installing /home/vagrant/spdk_repo/dpdk/examples/common/sse/port_group.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/common/sse 00:04:18.858 Installing /home/vagrant/spdk_repo/dpdk/examples/distributor/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/distributor 00:04:18.858 Installing /home/vagrant/spdk_repo/dpdk/examples/distributor/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/distributor 00:04:18.858 Installing /home/vagrant/spdk_repo/dpdk/examples/dma/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/dma 00:04:18.858 Installing /home/vagrant/spdk_repo/dpdk/examples/dma/dmafwd.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/dma 00:04:18.858 Installing /home/vagrant/spdk_repo/dpdk/examples/ethtool/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ethtool 00:04:18.858 Installing /home/vagrant/spdk_repo/dpdk/examples/ethtool/ethtool-app/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ethtool/ethtool-app 00:04:18.858 Installing /home/vagrant/spdk_repo/dpdk/examples/ethtool/ethtool-app/ethapp.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ethtool/ethtool-app 00:04:18.858 Installing /home/vagrant/spdk_repo/dpdk/examples/ethtool/ethtool-app/ethapp.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ethtool/ethtool-app 00:04:18.858 Installing /home/vagrant/spdk_repo/dpdk/examples/ethtool/ethtool-app/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ethtool/ethtool-app 00:04:18.858 Installing /home/vagrant/spdk_repo/dpdk/examples/ethtool/lib/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ethtool/lib 00:04:18.858 Installing /home/vagrant/spdk_repo/dpdk/examples/ethtool/lib/rte_ethtool.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ethtool/lib 00:04:18.858 Installing /home/vagrant/spdk_repo/dpdk/examples/ethtool/lib/rte_ethtool.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ethtool/lib 00:04:18.858 Installing /home/vagrant/spdk_repo/dpdk/examples/eventdev_pipeline/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/eventdev_pipeline 00:04:18.858 Installing /home/vagrant/spdk_repo/dpdk/examples/eventdev_pipeline/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/eventdev_pipeline 00:04:18.858 Installing /home/vagrant/spdk_repo/dpdk/examples/eventdev_pipeline/pipeline_common.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/eventdev_pipeline 00:04:18.858 Installing /home/vagrant/spdk_repo/dpdk/examples/eventdev_pipeline/pipeline_worker_generic.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/eventdev_pipeline 00:04:18.858 Installing /home/vagrant/spdk_repo/dpdk/examples/eventdev_pipeline/pipeline_worker_tx.c 
to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/eventdev_pipeline 00:04:18.858 Installing /home/vagrant/spdk_repo/dpdk/examples/fips_validation/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/fips_validation 00:04:18.858 Installing /home/vagrant/spdk_repo/dpdk/examples/fips_validation/fips_dev_self_test.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/fips_validation 00:04:18.858 Installing /home/vagrant/spdk_repo/dpdk/examples/fips_validation/fips_dev_self_test.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/fips_validation 00:04:18.858 Installing /home/vagrant/spdk_repo/dpdk/examples/fips_validation/fips_validation.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/fips_validation 00:04:18.858 Installing /home/vagrant/spdk_repo/dpdk/examples/fips_validation/fips_validation.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/fips_validation 00:04:18.858 Installing /home/vagrant/spdk_repo/dpdk/examples/fips_validation/fips_validation_aes.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/fips_validation 00:04:18.858 Installing /home/vagrant/spdk_repo/dpdk/examples/fips_validation/fips_validation_ccm.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/fips_validation 00:04:18.858 Installing /home/vagrant/spdk_repo/dpdk/examples/fips_validation/fips_validation_cmac.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/fips_validation 00:04:18.858 Installing /home/vagrant/spdk_repo/dpdk/examples/fips_validation/fips_validation_ecdsa.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/fips_validation 00:04:18.858 Installing /home/vagrant/spdk_repo/dpdk/examples/fips_validation/fips_validation_eddsa.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/fips_validation 00:04:18.858 Installing /home/vagrant/spdk_repo/dpdk/examples/fips_validation/fips_validation_gcm.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/fips_validation 00:04:18.858 Installing /home/vagrant/spdk_repo/dpdk/examples/fips_validation/fips_validation_hmac.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/fips_validation 00:04:18.858 Installing /home/vagrant/spdk_repo/dpdk/examples/fips_validation/fips_validation_rsa.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/fips_validation 00:04:18.858 Installing /home/vagrant/spdk_repo/dpdk/examples/fips_validation/fips_validation_sha.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/fips_validation 00:04:18.858 Installing /home/vagrant/spdk_repo/dpdk/examples/fips_validation/fips_validation_tdes.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/fips_validation 00:04:18.858 Installing /home/vagrant/spdk_repo/dpdk/examples/fips_validation/fips_validation_xts.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/fips_validation 00:04:18.858 Installing /home/vagrant/spdk_repo/dpdk/examples/fips_validation/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/fips_validation 00:04:18.858 Installing /home/vagrant/spdk_repo/dpdk/examples/flow_filtering/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/flow_filtering 00:04:18.858 Installing /home/vagrant/spdk_repo/dpdk/examples/flow_filtering/common.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/flow_filtering 00:04:18.858 Installing /home/vagrant/spdk_repo/dpdk/examples/flow_filtering/flow_skeleton.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/flow_filtering 00:04:18.858 Installing 
/home/vagrant/spdk_repo/dpdk/examples/flow_filtering/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/flow_filtering 00:04:18.858 Installing /home/vagrant/spdk_repo/dpdk/examples/flow_filtering/snippets/snippet_match_gre.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/flow_filtering/snippets 00:04:18.858 Installing /home/vagrant/spdk_repo/dpdk/examples/flow_filtering/snippets/snippet_match_gre.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/flow_filtering/snippets 00:04:18.858 Installing /home/vagrant/spdk_repo/dpdk/examples/flow_filtering/snippets/snippet_match_ipv4.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/flow_filtering/snippets 00:04:18.858 Installing /home/vagrant/spdk_repo/dpdk/examples/flow_filtering/snippets/snippet_match_ipv4.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/flow_filtering/snippets 00:04:18.858 Installing /home/vagrant/spdk_repo/dpdk/examples/flow_filtering/snippets/snippet_match_mpls.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/flow_filtering/snippets 00:04:18.858 Installing /home/vagrant/spdk_repo/dpdk/examples/flow_filtering/snippets/snippet_match_mpls.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/flow_filtering/snippets 00:04:18.858 Installing /home/vagrant/spdk_repo/dpdk/examples/helloworld/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/helloworld 00:04:18.858 Installing /home/vagrant/spdk_repo/dpdk/examples/helloworld/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/helloworld 00:04:18.858 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_fragmentation/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_fragmentation 00:04:18.858 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_fragmentation/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_fragmentation 00:04:18.858 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:04:18.858 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/action.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:04:18.858 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/action.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:04:18.858 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/cli.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:04:18.858 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/cli.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:04:18.858 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/common.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:04:18.858 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/conn.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:04:18.858 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/conn.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:04:18.858 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/cryptodev.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:04:18.858 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/cryptodev.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:04:18.858 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/link.c to 
/home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:04:18.858 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/link.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:04:18.858 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:04:18.858 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/mempool.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:04:18.858 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/mempool.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:04:18.858 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/parser.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:04:18.858 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/parser.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:04:18.858 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/pipeline.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:04:18.858 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/pipeline.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:04:18.858 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/swq.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:04:18.858 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/swq.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:04:18.858 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/tap.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:04:18.858 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/tap.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:04:18.858 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/thread.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:04:18.858 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/thread.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:04:18.858 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/tmgr.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:04:18.858 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/tmgr.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:04:18.858 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/examples/firewall.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline/examples 00:04:18.858 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/examples/flow.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline/examples 00:04:18.859 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/examples/flow_crypto.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline/examples 00:04:18.859 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/examples/l2fwd.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline/examples 00:04:18.859 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/examples/route.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline/examples 00:04:18.859 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/examples/route_ecmp.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline/examples 
00:04:18.859 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/examples/rss.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline/examples 00:04:18.859 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/examples/tap.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline/examples 00:04:18.859 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_reassembly/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_reassembly 00:04:18.859 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_reassembly/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_reassembly 00:04:18.859 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:04:18.859 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/ep0.cfg to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:04:18.859 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/ep1.cfg to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:04:18.859 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/esp.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:04:18.859 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/esp.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:04:18.859 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/event_helper.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:04:18.859 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/event_helper.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:04:18.859 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/flow.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:04:18.859 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/flow.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:04:18.859 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/ipip.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:04:18.859 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/ipsec-secgw.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:04:18.859 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/ipsec-secgw.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:04:18.859 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/ipsec.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:04:18.859 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/ipsec.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:04:18.859 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/ipsec_lpm_neon.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:04:18.859 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/ipsec_neon.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:04:18.859 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/ipsec_process.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:04:18.859 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/ipsec_worker.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:04:18.859 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/ipsec_worker.h to 
/home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:04:18.859 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/parser.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:04:18.859 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/parser.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:04:18.859 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/rt.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:04:18.859 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/sa.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:04:18.859 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/sad.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:04:18.859 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/sad.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:04:18.859 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/sp4.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:04:18.859 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/sp6.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:04:18.859 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/bypass_defs.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:04:18.859 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/common_defs.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:04:18.859 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/common_defs_secgw.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:04:18.859 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/data_rxtx.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:04:18.859 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/linux_test.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:04:18.859 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/load_env.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:04:18.859 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/pkttest.py to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:04:18.859 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/pkttest.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:04:18.859 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/run_test.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:04:18.859 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/trs_3descbc_sha1_common_defs.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:04:18.859 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/trs_3descbc_sha1_defs.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:04:18.859 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/trs_aescbc_sha1_common_defs.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:04:18.859 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/trs_aescbc_sha1_defs.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:04:18.859 Installing 
/home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/trs_aesctr_sha1_common_defs.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:04:18.859 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/trs_aesctr_sha1_defs.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:04:18.859 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/trs_aesgcm_common_defs.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:04:18.859 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/trs_aesgcm_defs.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:04:18.859 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/trs_ipv6opts.py to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:04:18.859 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/tun_3descbc_sha1_common_defs.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:04:18.859 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/tun_3descbc_sha1_defs.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:04:18.859 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/tun_aescbc_sha1_common_defs.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:04:18.859 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/tun_aescbc_sha1_defs.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:04:18.859 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/tun_aesctr_sha1_common_defs.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:04:18.859 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/tun_aesctr_sha1_defs.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:04:18.859 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/tun_aesgcm_common_defs.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:04:18.859 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/tun_aesgcm_defs.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:04:18.859 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/tun_null_header_reconstruct.py to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:04:18.859 Installing /home/vagrant/spdk_repo/dpdk/examples/ipv4_multicast/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipv4_multicast 00:04:18.859 Installing /home/vagrant/spdk_repo/dpdk/examples/ipv4_multicast/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipv4_multicast 00:04:18.859 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-cat/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-cat 00:04:18.859 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-cat/cat.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-cat 00:04:18.859 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-cat/cat.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-cat 00:04:18.859 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-cat/l2fwd-cat.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-cat 00:04:18.859 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-crypto/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-crypto 
00:04:18.859 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-crypto/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-crypto 00:04:18.859 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-event/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-event 00:04:18.859 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-event/l2fwd_common.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-event 00:04:18.859 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-event/l2fwd_common.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-event 00:04:18.859 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-event/l2fwd_event.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-event 00:04:18.859 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-event/l2fwd_event.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-event 00:04:18.859 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-event/l2fwd_event_generic.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-event 00:04:18.859 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-event/l2fwd_event_internal_port.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-event 00:04:18.859 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-event/l2fwd_poll.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-event 00:04:18.859 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-event/l2fwd_poll.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-event 00:04:18.859 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-event/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-event 00:04:18.859 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-jobstats/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-jobstats 00:04:18.859 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-jobstats/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-jobstats 00:04:18.859 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-keepalive/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-keepalive 00:04:18.859 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-keepalive/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-keepalive 00:04:18.860 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-keepalive/shm.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-keepalive 00:04:18.860 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-keepalive/shm.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-keepalive 00:04:18.860 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-keepalive/ka-agent/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-keepalive/ka-agent 00:04:18.860 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-keepalive/ka-agent/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-keepalive/ka-agent 00:04:18.860 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-macsec/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-macsec 00:04:18.860 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-macsec/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-macsec 00:04:18.860 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd 00:04:18.860 Installing 
/home/vagrant/spdk_repo/dpdk/examples/l2fwd/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd 00:04:18.860 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd-graph/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd-graph 00:04:18.860 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd-graph/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd-graph 00:04:18.860 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd-power/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd-power 00:04:18.860 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd-power/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd-power 00:04:18.860 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd-power/main.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd-power 00:04:18.860 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd-power/perf_core.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd-power 00:04:18.860 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd-power/perf_core.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd-power 00:04:18.860 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:04:18.860 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/em_default_v4.cfg to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:04:18.860 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/em_default_v6.cfg to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:04:18.860 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/em_route_parse.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:04:18.860 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:04:18.860 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_acl.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:04:18.860 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_acl.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:04:18.860 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_acl_scalar.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:04:18.860 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_altivec.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:04:18.860 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_common.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:04:18.860 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_em.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:04:18.860 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_em.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:04:18.860 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_em_hlm.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:04:18.860 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_em_hlm_neon.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:04:18.860 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_em_hlm_sse.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:04:18.860 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_em_sequential.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:04:18.860 
Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_event.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:04:18.860 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_event.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:04:18.860 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_event_generic.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:04:18.860 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_event_internal_port.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:04:18.860 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_fib.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:04:19.120 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_lpm.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:04:19.120 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_lpm.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:04:19.120 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_lpm_altivec.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:04:19.120 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_lpm_neon.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:04:19.120 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_lpm_sse.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:04:19.120 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_neon.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:04:19.120 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_route.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:04:19.120 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_sse.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:04:19.120 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/lpm_default_v4.cfg to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:04:19.120 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/lpm_default_v6.cfg to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:04:19.120 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/lpm_route_parse.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:04:19.120 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:04:19.120 Installing /home/vagrant/spdk_repo/dpdk/examples/link_status_interrupt/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/link_status_interrupt 00:04:19.120 Installing /home/vagrant/spdk_repo/dpdk/examples/link_status_interrupt/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/link_status_interrupt 00:04:19.120 Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process 00:04:19.120 Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/client_server_mp/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process/client_server_mp 00:04:19.120 Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/client_server_mp/mp_client/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process/client_server_mp/mp_client 00:04:19.120 Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/client_server_mp/mp_client/client.c to 
/home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process/client_server_mp/mp_client 00:04:19.120 Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/client_server_mp/mp_server/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process/client_server_mp/mp_server 00:04:19.120 Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/client_server_mp/mp_server/args.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process/client_server_mp/mp_server 00:04:19.120 Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/client_server_mp/mp_server/args.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process/client_server_mp/mp_server 00:04:19.120 Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/client_server_mp/mp_server/init.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process/client_server_mp/mp_server 00:04:19.120 Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/client_server_mp/mp_server/init.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process/client_server_mp/mp_server 00:04:19.120 Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/client_server_mp/mp_server/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process/client_server_mp/mp_server 00:04:19.121 Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/client_server_mp/shared/common.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process/client_server_mp/shared 00:04:19.121 Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/hotplug_mp/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process/hotplug_mp 00:04:19.121 Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/hotplug_mp/commands.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process/hotplug_mp 00:04:19.121 Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/hotplug_mp/commands.list to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process/hotplug_mp 00:04:19.121 Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/hotplug_mp/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process/hotplug_mp 00:04:19.121 Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/simple_mp/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process/simple_mp 00:04:19.121 Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/simple_mp/commands.list to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process/simple_mp 00:04:19.121 Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/simple_mp/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process/simple_mp 00:04:19.121 Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/simple_mp/mp_commands.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process/simple_mp 00:04:19.121 Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/simple_mp/mp_commands.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process/simple_mp 00:04:19.121 Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/symmetric_mp/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process/symmetric_mp 00:04:19.121 Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/symmetric_mp/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process/symmetric_mp 00:04:19.121 
Installing /home/vagrant/spdk_repo/dpdk/examples/ntb/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ntb 00:04:19.121 Installing /home/vagrant/spdk_repo/dpdk/examples/ntb/commands.list to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ntb 00:04:19.121 Installing /home/vagrant/spdk_repo/dpdk/examples/ntb/ntb_fwd.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ntb 00:04:19.121 Installing /home/vagrant/spdk_repo/dpdk/examples/packet_ordering/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/packet_ordering 00:04:19.121 Installing /home/vagrant/spdk_repo/dpdk/examples/packet_ordering/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/packet_ordering 00:04:19.121 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline 00:04:19.121 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/cli.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline 00:04:19.121 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/cli.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline 00:04:19.121 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/conn.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline 00:04:19.121 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/conn.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline 00:04:19.121 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline 00:04:19.121 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/obj.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline 00:04:19.121 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/obj.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline 00:04:19.121 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/thread.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline 00:04:19.121 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/thread.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline 00:04:19.121 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/ethdev.io to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:04:19.121 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/fib.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:04:19.121 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/fib.spec to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:04:19.121 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/fib_nexthop_group_table.txt to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:04:19.121 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/fib_nexthop_table.txt to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:04:19.121 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/fib_routing_table.txt to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:04:19.121 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/hash_func.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:04:19.121 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/hash_func.spec to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 
00:04:19.121 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/ipsec.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:04:19.121 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/ipsec.io to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:04:19.121 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/ipsec.spec to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:04:19.121 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/ipsec_sa.txt to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:04:19.121 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/ipv6_addr_swap.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:04:19.121 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/ipv6_addr_swap.spec to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:04:19.121 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/l2fwd.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:04:19.121 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/l2fwd.spec to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:04:19.121 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/l2fwd_macswp.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:04:19.121 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/l2fwd_macswp.spec to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:04:19.121 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/l2fwd_macswp_pcap.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:04:19.121 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/l2fwd_pcap.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:04:19.121 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/learner.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:04:19.121 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/learner.spec to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:04:19.121 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/meter.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:04:19.121 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/meter.spec to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:04:19.121 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/mirroring.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:04:19.121 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/mirroring.spec to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:04:19.121 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/packet.txt to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:04:19.121 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/pcap.io to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:04:19.121 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/recirculation.cli to 
/home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:04:19.121 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/recirculation.spec to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:04:19.121 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/registers.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:04:19.121 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/registers.spec to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:04:19.121 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/rss.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:04:19.121 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/rss.spec to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:04:19.121 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/selector.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:04:19.121 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/selector.spec to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:04:19.121 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/selector.txt to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:04:19.121 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/varbit.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:04:19.121 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/varbit.spec to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:04:19.121 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/vxlan.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:04:19.121 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/vxlan.spec to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:04:19.121 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/vxlan_pcap.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:04:19.121 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/vxlan_table.py to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:04:19.121 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/vxlan_table.txt to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:04:19.121 Installing /home/vagrant/spdk_repo/dpdk/examples/ptpclient/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ptpclient 00:04:19.121 Installing /home/vagrant/spdk_repo/dpdk/examples/ptpclient/ptpclient.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ptpclient 00:04:19.121 Installing /home/vagrant/spdk_repo/dpdk/examples/qos_meter/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/qos_meter 00:04:19.121 Installing /home/vagrant/spdk_repo/dpdk/examples/qos_meter/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/qos_meter 00:04:19.121 Installing /home/vagrant/spdk_repo/dpdk/examples/qos_meter/main.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/qos_meter 00:04:19.121 Installing /home/vagrant/spdk_repo/dpdk/examples/qos_meter/rte_policer.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/qos_meter 00:04:19.121 Installing 
/home/vagrant/spdk_repo/dpdk/examples/qos_meter/rte_policer.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/qos_meter 00:04:19.121 Installing /home/vagrant/spdk_repo/dpdk/examples/qos_sched/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/qos_sched 00:04:19.121 Installing /home/vagrant/spdk_repo/dpdk/examples/qos_sched/app_thread.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/qos_sched 00:04:19.121 Installing /home/vagrant/spdk_repo/dpdk/examples/qos_sched/args.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/qos_sched 00:04:19.121 Installing /home/vagrant/spdk_repo/dpdk/examples/qos_sched/cfg_file.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/qos_sched 00:04:19.121 Installing /home/vagrant/spdk_repo/dpdk/examples/qos_sched/cfg_file.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/qos_sched 00:04:19.121 Installing /home/vagrant/spdk_repo/dpdk/examples/qos_sched/cmdline.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/qos_sched 00:04:19.121 Installing /home/vagrant/spdk_repo/dpdk/examples/qos_sched/init.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/qos_sched 00:04:19.121 Installing /home/vagrant/spdk_repo/dpdk/examples/qos_sched/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/qos_sched 00:04:19.121 Installing /home/vagrant/spdk_repo/dpdk/examples/qos_sched/main.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/qos_sched 00:04:19.121 Installing /home/vagrant/spdk_repo/dpdk/examples/qos_sched/profile.cfg to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/qos_sched 00:04:19.121 Installing /home/vagrant/spdk_repo/dpdk/examples/qos_sched/profile_ov.cfg to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/qos_sched 00:04:19.121 Installing /home/vagrant/spdk_repo/dpdk/examples/qos_sched/profile_pie.cfg to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/qos_sched 00:04:19.121 Installing /home/vagrant/spdk_repo/dpdk/examples/qos_sched/profile_red.cfg to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/qos_sched 00:04:19.121 Installing /home/vagrant/spdk_repo/dpdk/examples/qos_sched/stats.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/qos_sched 00:04:19.121 Installing /home/vagrant/spdk_repo/dpdk/examples/rxtx_callbacks/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/rxtx_callbacks 00:04:19.121 Installing /home/vagrant/spdk_repo/dpdk/examples/rxtx_callbacks/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/rxtx_callbacks 00:04:19.121 Installing /home/vagrant/spdk_repo/dpdk/examples/server_node_efd/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/server_node_efd 00:04:19.121 Installing /home/vagrant/spdk_repo/dpdk/examples/server_node_efd/efd_node/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/server_node_efd/efd_node 00:04:19.121 Installing /home/vagrant/spdk_repo/dpdk/examples/server_node_efd/efd_node/node.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/server_node_efd/efd_node 00:04:19.121 Installing /home/vagrant/spdk_repo/dpdk/examples/server_node_efd/efd_server/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/server_node_efd/efd_server 00:04:19.121 Installing /home/vagrant/spdk_repo/dpdk/examples/server_node_efd/efd_server/args.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/server_node_efd/efd_server 00:04:19.121 Installing /home/vagrant/spdk_repo/dpdk/examples/server_node_efd/efd_server/args.h to 
/home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/server_node_efd/efd_server 00:04:19.121 Installing /home/vagrant/spdk_repo/dpdk/examples/server_node_efd/efd_server/init.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/server_node_efd/efd_server 00:04:19.121 Installing /home/vagrant/spdk_repo/dpdk/examples/server_node_efd/efd_server/init.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/server_node_efd/efd_server 00:04:19.121 Installing /home/vagrant/spdk_repo/dpdk/examples/server_node_efd/efd_server/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/server_node_efd/efd_server 00:04:19.121 Installing /home/vagrant/spdk_repo/dpdk/examples/server_node_efd/shared/common.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/server_node_efd/shared 00:04:19.121 Installing /home/vagrant/spdk_repo/dpdk/examples/service_cores/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/service_cores 00:04:19.121 Installing /home/vagrant/spdk_repo/dpdk/examples/service_cores/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/service_cores 00:04:19.121 Installing /home/vagrant/spdk_repo/dpdk/examples/skeleton/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/skeleton 00:04:19.121 Installing /home/vagrant/spdk_repo/dpdk/examples/skeleton/basicfwd.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/skeleton 00:04:19.121 Installing /home/vagrant/spdk_repo/dpdk/examples/timer/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/timer 00:04:19.121 Installing /home/vagrant/spdk_repo/dpdk/examples/timer/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/timer 00:04:19.121 Installing /home/vagrant/spdk_repo/dpdk/examples/vdpa/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vdpa 00:04:19.121 Installing /home/vagrant/spdk_repo/dpdk/examples/vdpa/commands.list to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vdpa 00:04:19.122 Installing /home/vagrant/spdk_repo/dpdk/examples/vdpa/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vdpa 00:04:19.122 Installing /home/vagrant/spdk_repo/dpdk/examples/vdpa/vdpa_blk_compact.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vdpa 00:04:19.122 Installing /home/vagrant/spdk_repo/dpdk/examples/vhost/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vhost 00:04:19.122 Installing /home/vagrant/spdk_repo/dpdk/examples/vhost/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vhost 00:04:19.122 Installing /home/vagrant/spdk_repo/dpdk/examples/vhost/main.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vhost 00:04:19.122 Installing /home/vagrant/spdk_repo/dpdk/examples/vhost/virtio_net.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vhost 00:04:19.122 Installing /home/vagrant/spdk_repo/dpdk/examples/vhost_blk/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vhost_blk 00:04:19.122 Installing /home/vagrant/spdk_repo/dpdk/examples/vhost_blk/blk.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vhost_blk 00:04:19.122 Installing /home/vagrant/spdk_repo/dpdk/examples/vhost_blk/blk_spec.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vhost_blk 00:04:19.122 Installing /home/vagrant/spdk_repo/dpdk/examples/vhost_blk/vhost_blk.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vhost_blk 00:04:19.122 Installing /home/vagrant/spdk_repo/dpdk/examples/vhost_blk/vhost_blk.h to 
/home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vhost_blk 00:04:19.122 Installing /home/vagrant/spdk_repo/dpdk/examples/vhost_blk/vhost_blk_compat.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vhost_blk 00:04:19.122 Installing /home/vagrant/spdk_repo/dpdk/examples/vhost_crypto/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vhost_crypto 00:04:19.122 Installing /home/vagrant/spdk_repo/dpdk/examples/vhost_crypto/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vhost_crypto 00:04:19.122 Installing /home/vagrant/spdk_repo/dpdk/examples/vm_power_manager/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vm_power_manager 00:04:19.122 Installing /home/vagrant/spdk_repo/dpdk/examples/vm_power_manager/channel_manager.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vm_power_manager 00:04:19.122 Installing /home/vagrant/spdk_repo/dpdk/examples/vm_power_manager/channel_manager.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vm_power_manager 00:04:19.122 Installing /home/vagrant/spdk_repo/dpdk/examples/vm_power_manager/channel_monitor.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vm_power_manager 00:04:19.122 Installing /home/vagrant/spdk_repo/dpdk/examples/vm_power_manager/channel_monitor.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vm_power_manager 00:04:19.122 Installing /home/vagrant/spdk_repo/dpdk/examples/vm_power_manager/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vm_power_manager 00:04:19.122 Installing /home/vagrant/spdk_repo/dpdk/examples/vm_power_manager/oob_monitor.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vm_power_manager 00:04:19.122 Installing /home/vagrant/spdk_repo/dpdk/examples/vm_power_manager/oob_monitor_nop.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vm_power_manager 00:04:19.122 Installing /home/vagrant/spdk_repo/dpdk/examples/vm_power_manager/oob_monitor_x86.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vm_power_manager 00:04:19.122 Installing /home/vagrant/spdk_repo/dpdk/examples/vm_power_manager/parse.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vm_power_manager 00:04:19.122 Installing /home/vagrant/spdk_repo/dpdk/examples/vm_power_manager/parse.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vm_power_manager 00:04:19.122 Installing /home/vagrant/spdk_repo/dpdk/examples/vm_power_manager/power_manager.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vm_power_manager 00:04:19.122 Installing /home/vagrant/spdk_repo/dpdk/examples/vm_power_manager/power_manager.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vm_power_manager 00:04:19.122 Installing /home/vagrant/spdk_repo/dpdk/examples/vm_power_manager/vm_power_cli.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vm_power_manager 00:04:19.122 Installing /home/vagrant/spdk_repo/dpdk/examples/vm_power_manager/vm_power_cli.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vm_power_manager 00:04:19.122 Installing /home/vagrant/spdk_repo/dpdk/examples/vm_power_manager/guest_cli/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vm_power_manager/guest_cli 00:04:19.122 Installing /home/vagrant/spdk_repo/dpdk/examples/vm_power_manager/guest_cli/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vm_power_manager/guest_cli 00:04:19.122 Installing /home/vagrant/spdk_repo/dpdk/examples/vm_power_manager/guest_cli/parse.c to 
/home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vm_power_manager/guest_cli 00:04:19.122 Installing /home/vagrant/spdk_repo/dpdk/examples/vm_power_manager/guest_cli/parse.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vm_power_manager/guest_cli 00:04:19.122 Installing /home/vagrant/spdk_repo/dpdk/examples/vm_power_manager/guest_cli/vm_power_cli_guest.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vm_power_manager/guest_cli 00:04:19.122 Installing /home/vagrant/spdk_repo/dpdk/examples/vm_power_manager/guest_cli/vm_power_cli_guest.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vm_power_manager/guest_cli 00:04:19.122 Installing /home/vagrant/spdk_repo/dpdk/examples/vmdq/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vmdq 00:04:19.122 Installing /home/vagrant/spdk_repo/dpdk/examples/vmdq/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vmdq 00:04:19.122 Installing /home/vagrant/spdk_repo/dpdk/examples/vmdq_dcb/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vmdq_dcb 00:04:19.122 Installing /home/vagrant/spdk_repo/dpdk/examples/vmdq_dcb/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vmdq_dcb 00:04:19.122 Installing lib/librte_log.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:04:19.122 Installing lib/librte_log.so.25.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:04:19.122 Installing lib/librte_kvargs.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:04:19.122 Installing lib/librte_kvargs.so.25.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:04:19.122 Installing lib/librte_argparse.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:04:19.122 Installing lib/librte_argparse.so.25.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:04:19.122 Installing lib/librte_telemetry.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:04:19.122 Installing lib/librte_telemetry.so.25.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:04:19.122 Installing lib/librte_eal.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:04:19.122 Installing lib/librte_eal.so.25.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:04:19.122 Installing lib/librte_ring.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:04:19.122 Installing lib/librte_ring.so.25.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:04:19.122 Installing lib/librte_rcu.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:04:19.122 Installing lib/librte_rcu.so.25.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:04:19.122 Installing lib/librte_mempool.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:04:19.122 Installing lib/librte_mempool.so.25.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:04:19.122 Installing lib/librte_mbuf.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:04:19.122 Installing lib/librte_mbuf.so.25.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:04:19.122 Installing lib/librte_net.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:04:19.122 Installing lib/librte_net.so.25.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:04:19.122 Installing lib/librte_meter.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:04:19.122 Installing lib/librte_meter.so.25.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:04:19.122 Installing lib/librte_ethdev.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:04:19.122 Installing lib/librte_ethdev.so.25.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:04:19.122 Installing lib/librte_pci.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:04:19.122 Installing lib/librte_pci.so.25.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:04:19.122 Installing lib/librte_cmdline.a 
to /home/vagrant/spdk_repo/dpdk/build/lib 00:04:19.122 Installing lib/librte_cmdline.so.25.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:04:19.122 Installing lib/librte_metrics.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:04:19.122 Installing lib/librte_metrics.so.25.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:04:19.122 Installing lib/librte_hash.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:04:19.122 Installing lib/librte_hash.so.25.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:04:19.122 Installing lib/librte_timer.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:04:19.122 Installing lib/librte_timer.so.25.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:04:19.122 Installing lib/librte_acl.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:04:19.122 Installing lib/librte_acl.so.25.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:04:19.122 Installing lib/librte_bbdev.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:04:19.122 Installing lib/librte_bbdev.so.25.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:04:19.122 Installing lib/librte_bitratestats.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:04:19.122 Installing lib/librte_bitratestats.so.25.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:04:19.122 Installing lib/librte_bpf.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:04:19.122 Installing lib/librte_bpf.so.25.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:04:19.122 Installing lib/librte_cfgfile.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:04:19.122 Installing lib/librte_cfgfile.so.25.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:04:19.122 Installing lib/librte_compressdev.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:04:19.122 Installing lib/librte_compressdev.so.25.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:04:19.122 Installing lib/librte_cryptodev.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:04:19.122 Installing lib/librte_cryptodev.so.25.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:04:19.122 Installing lib/librte_distributor.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:04:19.122 Installing lib/librte_distributor.so.25.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:04:19.122 Installing lib/librte_dmadev.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:04:19.122 Installing lib/librte_dmadev.so.25.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:04:19.122 Installing lib/librte_efd.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:04:19.122 Installing lib/librte_efd.so.25.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:04:19.122 Installing lib/librte_eventdev.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:04:19.122 Installing lib/librte_eventdev.so.25.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:04:19.122 Installing lib/librte_dispatcher.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:04:19.122 Installing lib/librte_dispatcher.so.25.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:04:19.122 Installing lib/librte_gpudev.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:04:19.122 Installing lib/librte_gpudev.so.25.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:04:19.122 Installing lib/librte_gro.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:04:19.122 Installing lib/librte_gro.so.25.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:04:19.122 Installing lib/librte_gso.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:04:19.122 Installing lib/librte_gso.so.25.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:04:19.122 Installing lib/librte_ip_frag.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:04:19.122 Installing lib/librte_ip_frag.so.25.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:04:19.122 
Installing lib/librte_jobstats.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:04:19.122 Installing lib/librte_jobstats.so.25.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:04:19.122 Installing lib/librte_latencystats.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:04:19.122 Installing lib/librte_latencystats.so.25.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:04:19.122 Installing lib/librte_lpm.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:04:19.122 Installing lib/librte_lpm.so.25.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:04:19.122 Installing lib/librte_member.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:04:19.122 Installing lib/librte_member.so.25.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:04:19.122 Installing lib/librte_pcapng.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:04:19.122 Installing lib/librte_pcapng.so.25.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:04:19.122 Installing lib/librte_power.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:04:19.122 Installing lib/librte_power.so.25.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:04:19.122 Installing lib/librte_rawdev.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:04:19.122 Installing lib/librte_rawdev.so.25.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:04:19.122 Installing lib/librte_regexdev.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:04:19.695 Installing lib/librte_regexdev.so.25.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:04:19.695 Installing lib/librte_mldev.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:04:19.695 Installing lib/librte_mldev.so.25.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:04:19.695 Installing lib/librte_rib.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:04:19.695 Installing lib/librte_rib.so.25.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:04:19.695 Installing lib/librte_reorder.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:04:19.695 Installing lib/librte_reorder.so.25.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:04:19.695 Installing lib/librte_sched.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:04:19.695 Installing lib/librte_sched.so.25.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:04:19.695 Installing lib/librte_security.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:04:19.695 Installing lib/librte_security.so.25.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:04:19.695 Installing lib/librte_stack.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:04:19.695 Installing lib/librte_stack.so.25.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:04:19.695 Installing lib/librte_vhost.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:04:19.695 Installing lib/librte_vhost.so.25.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:04:19.695 Installing lib/librte_ipsec.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:04:19.695 Installing lib/librte_ipsec.so.25.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:04:19.695 Installing lib/librte_pdcp.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:04:19.695 Installing lib/librte_pdcp.so.25.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:04:19.695 Installing lib/librte_fib.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:04:19.695 Installing lib/librte_fib.so.25.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:04:19.695 Installing lib/librte_port.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:04:19.695 Installing lib/librte_port.so.25.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:04:19.695 Installing lib/librte_pdump.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:04:19.695 Installing lib/librte_pdump.so.25.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:04:19.695 
Installing lib/librte_table.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:04:19.695 Installing lib/librte_table.so.25.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:04:19.695 Installing lib/librte_pipeline.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:04:19.695 Installing lib/librte_pipeline.so.25.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:04:19.695 Installing lib/librte_graph.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:04:19.695 Installing lib/librte_graph.so.25.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:04:19.695 Installing lib/librte_node.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:04:19.695 Installing lib/librte_node.so.25.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:04:19.695 Installing drivers/librte_bus_pci.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:04:19.695 Installing drivers/librte_bus_pci.so.25.0 to /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-25.0 00:04:19.695 Installing drivers/librte_bus_vdev.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:04:19.695 Installing drivers/librte_bus_vdev.so.25.0 to /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-25.0 00:04:19.695 Installing drivers/librte_mempool_ring.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:04:19.695 Installing drivers/librte_mempool_ring.so.25.0 to /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-25.0 00:04:19.695 Installing drivers/librte_net_i40e.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:04:19.695 Installing drivers/librte_net_i40e.so.25.0 to /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-25.0 00:04:19.695 Installing drivers/librte_power_acpi.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:04:19.695 Installing drivers/librte_power_acpi.so.25.0 to /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-25.0 00:04:19.695 Installing drivers/librte_power_amd_pstate.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:04:19.695 Installing drivers/librte_power_amd_pstate.so.25.0 to /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-25.0 00:04:19.695 Installing drivers/librte_power_cppc.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:04:19.695 Installing drivers/librte_power_cppc.so.25.0 to /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-25.0 00:04:19.695 Installing drivers/librte_power_intel_pstate.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:04:19.695 Installing drivers/librte_power_intel_pstate.so.25.0 to /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-25.0 00:04:19.695 Installing drivers/librte_power_intel_uncore.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:04:19.695 Installing drivers/librte_power_intel_uncore.so.25.0 to /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-25.0 00:04:19.695 Installing drivers/librte_power_kvm_vm.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:04:19.695 Installing drivers/librte_power_kvm_vm.so.25.0 to /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-25.0 00:04:19.695 Installing app/dpdk-dumpcap to /home/vagrant/spdk_repo/dpdk/build/bin 00:04:19.695 Installing app/dpdk-graph to /home/vagrant/spdk_repo/dpdk/build/bin 00:04:19.695 Installing app/dpdk-pdump to /home/vagrant/spdk_repo/dpdk/build/bin 00:04:19.695 Installing app/dpdk-proc-info to /home/vagrant/spdk_repo/dpdk/build/bin 00:04:19.695 Installing app/dpdk-test-acl to /home/vagrant/spdk_repo/dpdk/build/bin 00:04:19.695 Installing app/dpdk-test-bbdev to /home/vagrant/spdk_repo/dpdk/build/bin 00:04:19.695 Installing app/dpdk-test-cmdline to /home/vagrant/spdk_repo/dpdk/build/bin 00:04:19.695 Installing app/dpdk-test-compress-perf to /home/vagrant/spdk_repo/dpdk/build/bin 00:04:19.695 Installing app/dpdk-test-crypto-perf 
to /home/vagrant/spdk_repo/dpdk/build/bin 00:04:19.695 Installing app/dpdk-test-dma-perf to /home/vagrant/spdk_repo/dpdk/build/bin 00:04:19.695 Installing app/dpdk-test-eventdev to /home/vagrant/spdk_repo/dpdk/build/bin 00:04:19.695 Installing app/dpdk-test-fib to /home/vagrant/spdk_repo/dpdk/build/bin 00:04:19.695 Installing app/dpdk-test-flow-perf to /home/vagrant/spdk_repo/dpdk/build/bin 00:04:19.695 Installing app/dpdk-test-gpudev to /home/vagrant/spdk_repo/dpdk/build/bin 00:04:19.695 Installing app/dpdk-test-mldev to /home/vagrant/spdk_repo/dpdk/build/bin 00:04:19.695 Installing app/dpdk-test-pipeline to /home/vagrant/spdk_repo/dpdk/build/bin 00:04:19.695 Installing app/dpdk-testpmd to /home/vagrant/spdk_repo/dpdk/build/bin 00:04:19.695 Installing app/dpdk-test-regex to /home/vagrant/spdk_repo/dpdk/build/bin 00:04:19.695 Installing app/dpdk-test-sad to /home/vagrant/spdk_repo/dpdk/build/bin 00:04:19.695 Installing app/dpdk-test-security-perf to /home/vagrant/spdk_repo/dpdk/build/bin 00:04:19.695 Installing /home/vagrant/spdk_repo/dpdk/config/rte_config.h to /home/vagrant/spdk_repo/dpdk/build/include 00:04:19.695 Installing /home/vagrant/spdk_repo/dpdk/lib/log/rte_log.h to /home/vagrant/spdk_repo/dpdk/build/include 00:04:19.695 Installing /home/vagrant/spdk_repo/dpdk/lib/kvargs/rte_kvargs.h to /home/vagrant/spdk_repo/dpdk/build/include 00:04:19.695 Installing /home/vagrant/spdk_repo/dpdk/lib/argparse/rte_argparse.h to /home/vagrant/spdk_repo/dpdk/build/include 00:04:19.695 Installing /home/vagrant/spdk_repo/dpdk/lib/telemetry/rte_telemetry.h to /home/vagrant/spdk_repo/dpdk/build/include 00:04:19.695 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/generic/rte_atomic.h to /home/vagrant/spdk_repo/dpdk/build/include/generic 00:04:19.695 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/generic/rte_byteorder.h to /home/vagrant/spdk_repo/dpdk/build/include/generic 00:04:19.695 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/generic/rte_cpuflags.h to /home/vagrant/spdk_repo/dpdk/build/include/generic 00:04:19.695 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/generic/rte_cycles.h to /home/vagrant/spdk_repo/dpdk/build/include/generic 00:04:19.695 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/generic/rte_io.h to /home/vagrant/spdk_repo/dpdk/build/include/generic 00:04:19.695 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/generic/rte_memcpy.h to /home/vagrant/spdk_repo/dpdk/build/include/generic 00:04:19.695 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/generic/rte_pause.h to /home/vagrant/spdk_repo/dpdk/build/include/generic 00:04:19.695 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/generic/rte_power_intrinsics.h to /home/vagrant/spdk_repo/dpdk/build/include/generic 00:04:19.695 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/generic/rte_prefetch.h to /home/vagrant/spdk_repo/dpdk/build/include/generic 00:04:19.696 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/generic/rte_rwlock.h to /home/vagrant/spdk_repo/dpdk/build/include/generic 00:04:19.696 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/generic/rte_spinlock.h to /home/vagrant/spdk_repo/dpdk/build/include/generic 00:04:19.696 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/generic/rte_vect.h to /home/vagrant/spdk_repo/dpdk/build/include/generic 00:04:19.696 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/x86/include/rte_atomic.h to /home/vagrant/spdk_repo/dpdk/build/include 00:04:19.696 Installing 
/home/vagrant/spdk_repo/dpdk/lib/eal/x86/include/rte_byteorder.h to /home/vagrant/spdk_repo/dpdk/build/include 00:04:19.696 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/x86/include/rte_cpuflags.h to /home/vagrant/spdk_repo/dpdk/build/include 00:04:19.696 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/x86/include/rte_cycles.h to /home/vagrant/spdk_repo/dpdk/build/include 00:04:19.696 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/x86/include/rte_io.h to /home/vagrant/spdk_repo/dpdk/build/include 00:04:19.696 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/x86/include/rte_memcpy.h to /home/vagrant/spdk_repo/dpdk/build/include 00:04:19.696 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/x86/include/rte_pause.h to /home/vagrant/spdk_repo/dpdk/build/include 00:04:19.696 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/x86/include/rte_power_intrinsics.h to /home/vagrant/spdk_repo/dpdk/build/include 00:04:19.696 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/x86/include/rte_prefetch.h to /home/vagrant/spdk_repo/dpdk/build/include 00:04:19.696 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/x86/include/rte_rtm.h to /home/vagrant/spdk_repo/dpdk/build/include 00:04:19.696 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/x86/include/rte_rwlock.h to /home/vagrant/spdk_repo/dpdk/build/include 00:04:19.696 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/x86/include/rte_spinlock.h to /home/vagrant/spdk_repo/dpdk/build/include 00:04:19.696 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/x86/include/rte_vect.h to /home/vagrant/spdk_repo/dpdk/build/include 00:04:19.696 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/x86/include/rte_atomic_32.h to /home/vagrant/spdk_repo/dpdk/build/include 00:04:19.696 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/x86/include/rte_atomic_64.h to /home/vagrant/spdk_repo/dpdk/build/include 00:04:19.696 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/x86/include/rte_byteorder_32.h to /home/vagrant/spdk_repo/dpdk/build/include 00:04:19.696 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/x86/include/rte_byteorder_64.h to /home/vagrant/spdk_repo/dpdk/build/include 00:04:19.696 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_alarm.h to /home/vagrant/spdk_repo/dpdk/build/include 00:04:19.696 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_bitmap.h to /home/vagrant/spdk_repo/dpdk/build/include 00:04:19.696 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_bitops.h to /home/vagrant/spdk_repo/dpdk/build/include 00:04:19.696 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_bitset.h to /home/vagrant/spdk_repo/dpdk/build/include 00:04:19.696 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_branch_prediction.h to /home/vagrant/spdk_repo/dpdk/build/include 00:04:19.696 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_bus.h to /home/vagrant/spdk_repo/dpdk/build/include 00:04:19.696 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_class.h to /home/vagrant/spdk_repo/dpdk/build/include 00:04:19.696 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_common.h to /home/vagrant/spdk_repo/dpdk/build/include 00:04:19.696 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_compat.h to /home/vagrant/spdk_repo/dpdk/build/include 00:04:19.696 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_debug.h to /home/vagrant/spdk_repo/dpdk/build/include 00:04:19.696 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_dev.h to /home/vagrant/spdk_repo/dpdk/build/include 00:04:19.696 Installing 
/home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_devargs.h to /home/vagrant/spdk_repo/dpdk/build/include 00:04:19.696 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_eal.h to /home/vagrant/spdk_repo/dpdk/build/include 00:04:19.696 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_eal_memconfig.h to /home/vagrant/spdk_repo/dpdk/build/include 00:04:19.696 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_eal_trace.h to /home/vagrant/spdk_repo/dpdk/build/include 00:04:19.696 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_errno.h to /home/vagrant/spdk_repo/dpdk/build/include 00:04:19.696 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_epoll.h to /home/vagrant/spdk_repo/dpdk/build/include 00:04:19.696 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_fbarray.h to /home/vagrant/spdk_repo/dpdk/build/include 00:04:19.696 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_hexdump.h to /home/vagrant/spdk_repo/dpdk/build/include 00:04:19.696 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_hypervisor.h to /home/vagrant/spdk_repo/dpdk/build/include 00:04:19.696 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_interrupts.h to /home/vagrant/spdk_repo/dpdk/build/include 00:04:19.696 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_keepalive.h to /home/vagrant/spdk_repo/dpdk/build/include 00:04:19.696 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_launch.h to /home/vagrant/spdk_repo/dpdk/build/include 00:04:19.696 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_lcore.h to /home/vagrant/spdk_repo/dpdk/build/include 00:04:19.696 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_lcore_var.h to /home/vagrant/spdk_repo/dpdk/build/include 00:04:19.696 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_lock_annotations.h to /home/vagrant/spdk_repo/dpdk/build/include 00:04:19.696 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_malloc.h to /home/vagrant/spdk_repo/dpdk/build/include 00:04:19.696 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_mcslock.h to /home/vagrant/spdk_repo/dpdk/build/include 00:04:19.696 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_memory.h to /home/vagrant/spdk_repo/dpdk/build/include 00:04:19.696 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_memzone.h to /home/vagrant/spdk_repo/dpdk/build/include 00:04:19.696 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_pci_dev_feature_defs.h to /home/vagrant/spdk_repo/dpdk/build/include 00:04:19.696 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_pci_dev_features.h to /home/vagrant/spdk_repo/dpdk/build/include 00:04:19.696 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_per_lcore.h to /home/vagrant/spdk_repo/dpdk/build/include 00:04:19.696 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_pflock.h to /home/vagrant/spdk_repo/dpdk/build/include 00:04:19.696 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_random.h to /home/vagrant/spdk_repo/dpdk/build/include 00:04:19.696 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_reciprocal.h to /home/vagrant/spdk_repo/dpdk/build/include 00:04:19.696 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_seqcount.h to /home/vagrant/spdk_repo/dpdk/build/include 00:04:19.696 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_seqlock.h to /home/vagrant/spdk_repo/dpdk/build/include 00:04:19.696 Installing 
/home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_service.h to /home/vagrant/spdk_repo/dpdk/build/include 00:04:19.696 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_service_component.h to /home/vagrant/spdk_repo/dpdk/build/include 00:04:19.696 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_stdatomic.h to /home/vagrant/spdk_repo/dpdk/build/include 00:04:19.696 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_string_fns.h to /home/vagrant/spdk_repo/dpdk/build/include 00:04:19.696 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_tailq.h to /home/vagrant/spdk_repo/dpdk/build/include 00:04:19.696 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_thread.h to /home/vagrant/spdk_repo/dpdk/build/include 00:04:19.696 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_ticketlock.h to /home/vagrant/spdk_repo/dpdk/build/include 00:04:19.696 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_time.h to /home/vagrant/spdk_repo/dpdk/build/include 00:04:19.696 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_trace.h to /home/vagrant/spdk_repo/dpdk/build/include 00:04:19.696 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_trace_point.h to /home/vagrant/spdk_repo/dpdk/build/include 00:04:19.696 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_trace_point_register.h to /home/vagrant/spdk_repo/dpdk/build/include 00:04:19.696 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_uuid.h to /home/vagrant/spdk_repo/dpdk/build/include 00:04:19.696 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_version.h to /home/vagrant/spdk_repo/dpdk/build/include 00:04:19.696 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_vfio.h to /home/vagrant/spdk_repo/dpdk/build/include 00:04:19.696 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/linux/include/rte_os.h to /home/vagrant/spdk_repo/dpdk/build/include 00:04:19.696 Installing /home/vagrant/spdk_repo/dpdk/lib/ptr_compress/rte_ptr_compress.h to /home/vagrant/spdk_repo/dpdk/build/include 00:04:19.696 Installing /home/vagrant/spdk_repo/dpdk/lib/ring/rte_ring.h to /home/vagrant/spdk_repo/dpdk/build/include 00:04:19.696 Installing /home/vagrant/spdk_repo/dpdk/lib/ring/rte_ring_core.h to /home/vagrant/spdk_repo/dpdk/build/include 00:04:19.696 Installing /home/vagrant/spdk_repo/dpdk/lib/ring/rte_ring_elem.h to /home/vagrant/spdk_repo/dpdk/build/include 00:04:19.697 Installing /home/vagrant/spdk_repo/dpdk/lib/ring/rte_ring_elem_pvt.h to /home/vagrant/spdk_repo/dpdk/build/include 00:04:19.697 Installing /home/vagrant/spdk_repo/dpdk/lib/ring/rte_ring_c11_pvt.h to /home/vagrant/spdk_repo/dpdk/build/include 00:04:19.697 Installing /home/vagrant/spdk_repo/dpdk/lib/ring/rte_ring_generic_pvt.h to /home/vagrant/spdk_repo/dpdk/build/include 00:04:19.697 Installing /home/vagrant/spdk_repo/dpdk/lib/ring/rte_ring_hts.h to /home/vagrant/spdk_repo/dpdk/build/include 00:04:19.697 Installing /home/vagrant/spdk_repo/dpdk/lib/ring/rte_ring_hts_elem_pvt.h to /home/vagrant/spdk_repo/dpdk/build/include 00:04:19.697 Installing /home/vagrant/spdk_repo/dpdk/lib/ring/rte_ring_peek.h to /home/vagrant/spdk_repo/dpdk/build/include 00:04:19.697 Installing /home/vagrant/spdk_repo/dpdk/lib/ring/rte_ring_peek_elem_pvt.h to /home/vagrant/spdk_repo/dpdk/build/include 00:04:19.697 Installing /home/vagrant/spdk_repo/dpdk/lib/ring/rte_ring_peek_zc.h to /home/vagrant/spdk_repo/dpdk/build/include 00:04:19.697 Installing /home/vagrant/spdk_repo/dpdk/lib/ring/rte_ring_rts.h to 
/home/vagrant/spdk_repo/dpdk/build/include 00:04:19.697 Installing /home/vagrant/spdk_repo/dpdk/lib/ring/rte_ring_rts_elem_pvt.h to /home/vagrant/spdk_repo/dpdk/build/include 00:04:19.697 Installing /home/vagrant/spdk_repo/dpdk/lib/rcu/rte_rcu_qsbr.h to /home/vagrant/spdk_repo/dpdk/build/include 00:04:19.697 Installing /home/vagrant/spdk_repo/dpdk/lib/mempool/rte_mempool.h to /home/vagrant/spdk_repo/dpdk/build/include 00:04:19.697 Installing /home/vagrant/spdk_repo/dpdk/lib/mempool/rte_mempool_trace_fp.h to /home/vagrant/spdk_repo/dpdk/build/include 00:04:19.697 Installing /home/vagrant/spdk_repo/dpdk/lib/mbuf/rte_mbuf.h to /home/vagrant/spdk_repo/dpdk/build/include 00:04:19.697 Installing /home/vagrant/spdk_repo/dpdk/lib/mbuf/rte_mbuf_core.h to /home/vagrant/spdk_repo/dpdk/build/include 00:04:19.697 Installing /home/vagrant/spdk_repo/dpdk/lib/mbuf/rte_mbuf_ptype.h to /home/vagrant/spdk_repo/dpdk/build/include 00:04:19.697 Installing /home/vagrant/spdk_repo/dpdk/lib/mbuf/rte_mbuf_pool_ops.h to /home/vagrant/spdk_repo/dpdk/build/include 00:04:19.697 Installing /home/vagrant/spdk_repo/dpdk/lib/mbuf/rte_mbuf_dyn.h to /home/vagrant/spdk_repo/dpdk/build/include 00:04:19.697 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_cksum.h to /home/vagrant/spdk_repo/dpdk/build/include 00:04:19.697 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_ip.h to /home/vagrant/spdk_repo/dpdk/build/include 00:04:19.697 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_ip4.h to /home/vagrant/spdk_repo/dpdk/build/include 00:04:19.697 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_ip6.h to /home/vagrant/spdk_repo/dpdk/build/include 00:04:19.697 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_tcp.h to /home/vagrant/spdk_repo/dpdk/build/include 00:04:19.697 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_udp.h to /home/vagrant/spdk_repo/dpdk/build/include 00:04:19.697 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_tls.h to /home/vagrant/spdk_repo/dpdk/build/include 00:04:19.697 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_dtls.h to /home/vagrant/spdk_repo/dpdk/build/include 00:04:19.697 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_esp.h to /home/vagrant/spdk_repo/dpdk/build/include 00:04:19.697 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_sctp.h to /home/vagrant/spdk_repo/dpdk/build/include 00:04:19.697 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_icmp.h to /home/vagrant/spdk_repo/dpdk/build/include 00:04:19.697 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_arp.h to /home/vagrant/spdk_repo/dpdk/build/include 00:04:19.697 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_ether.h to /home/vagrant/spdk_repo/dpdk/build/include 00:04:19.697 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_macsec.h to /home/vagrant/spdk_repo/dpdk/build/include 00:04:19.697 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_vxlan.h to /home/vagrant/spdk_repo/dpdk/build/include 00:04:19.697 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_gre.h to /home/vagrant/spdk_repo/dpdk/build/include 00:04:19.697 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_gtp.h to /home/vagrant/spdk_repo/dpdk/build/include 00:04:19.697 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_net.h to /home/vagrant/spdk_repo/dpdk/build/include 00:04:19.697 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_net_crc.h to /home/vagrant/spdk_repo/dpdk/build/include 00:04:19.697 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_mpls.h to /home/vagrant/spdk_repo/dpdk/build/include 00:04:19.697 Installing 
/home/vagrant/spdk_repo/dpdk/lib/net/rte_higig.h to /home/vagrant/spdk_repo/dpdk/build/include 00:04:19.697 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_ecpri.h to /home/vagrant/spdk_repo/dpdk/build/include 00:04:19.697 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_pdcp_hdr.h to /home/vagrant/spdk_repo/dpdk/build/include 00:04:19.697 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_geneve.h to /home/vagrant/spdk_repo/dpdk/build/include 00:04:19.697 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_l2tpv2.h to /home/vagrant/spdk_repo/dpdk/build/include 00:04:19.697 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_ppp.h to /home/vagrant/spdk_repo/dpdk/build/include 00:04:19.697 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_ib.h to /home/vagrant/spdk_repo/dpdk/build/include 00:04:19.697 Installing /home/vagrant/spdk_repo/dpdk/lib/meter/rte_meter.h to /home/vagrant/spdk_repo/dpdk/build/include 00:04:19.697 Installing /home/vagrant/spdk_repo/dpdk/lib/ethdev/rte_cman.h to /home/vagrant/spdk_repo/dpdk/build/include 00:04:19.697 Installing /home/vagrant/spdk_repo/dpdk/lib/ethdev/rte_ethdev.h to /home/vagrant/spdk_repo/dpdk/build/include 00:04:19.697 Installing /home/vagrant/spdk_repo/dpdk/lib/ethdev/rte_ethdev_trace_fp.h to /home/vagrant/spdk_repo/dpdk/build/include 00:04:19.697 Installing /home/vagrant/spdk_repo/dpdk/lib/ethdev/rte_dev_info.h to /home/vagrant/spdk_repo/dpdk/build/include 00:04:19.697 Installing /home/vagrant/spdk_repo/dpdk/lib/ethdev/rte_flow.h to /home/vagrant/spdk_repo/dpdk/build/include 00:04:19.697 Installing /home/vagrant/spdk_repo/dpdk/lib/ethdev/rte_flow_driver.h to /home/vagrant/spdk_repo/dpdk/build/include 00:04:19.697 Installing /home/vagrant/spdk_repo/dpdk/lib/ethdev/rte_mtr.h to /home/vagrant/spdk_repo/dpdk/build/include 00:04:19.697 Installing /home/vagrant/spdk_repo/dpdk/lib/ethdev/rte_mtr_driver.h to /home/vagrant/spdk_repo/dpdk/build/include 00:04:19.697 Installing /home/vagrant/spdk_repo/dpdk/lib/ethdev/rte_tm.h to /home/vagrant/spdk_repo/dpdk/build/include 00:04:19.697 Installing /home/vagrant/spdk_repo/dpdk/lib/ethdev/rte_tm_driver.h to /home/vagrant/spdk_repo/dpdk/build/include 00:04:19.697 Installing /home/vagrant/spdk_repo/dpdk/lib/ethdev/rte_ethdev_core.h to /home/vagrant/spdk_repo/dpdk/build/include 00:04:19.697 Installing /home/vagrant/spdk_repo/dpdk/lib/ethdev/rte_eth_ctrl.h to /home/vagrant/spdk_repo/dpdk/build/include 00:04:19.697 Installing /home/vagrant/spdk_repo/dpdk/lib/pci/rte_pci.h to /home/vagrant/spdk_repo/dpdk/build/include 00:04:19.697 Installing /home/vagrant/spdk_repo/dpdk/lib/cmdline/cmdline.h to /home/vagrant/spdk_repo/dpdk/build/include 00:04:19.697 Installing /home/vagrant/spdk_repo/dpdk/lib/cmdline/cmdline_parse.h to /home/vagrant/spdk_repo/dpdk/build/include 00:04:19.697 Installing /home/vagrant/spdk_repo/dpdk/lib/cmdline/cmdline_parse_num.h to /home/vagrant/spdk_repo/dpdk/build/include 00:04:19.697 Installing /home/vagrant/spdk_repo/dpdk/lib/cmdline/cmdline_parse_ipaddr.h to /home/vagrant/spdk_repo/dpdk/build/include 00:04:19.697 Installing /home/vagrant/spdk_repo/dpdk/lib/cmdline/cmdline_parse_etheraddr.h to /home/vagrant/spdk_repo/dpdk/build/include 00:04:19.697 Installing /home/vagrant/spdk_repo/dpdk/lib/cmdline/cmdline_parse_string.h to /home/vagrant/spdk_repo/dpdk/build/include 00:04:19.697 Installing /home/vagrant/spdk_repo/dpdk/lib/cmdline/cmdline_rdline.h to /home/vagrant/spdk_repo/dpdk/build/include 00:04:19.697 Installing /home/vagrant/spdk_repo/dpdk/lib/cmdline/cmdline_vt100.h to 
/home/vagrant/spdk_repo/dpdk/build/include 00:04:19.697 Installing /home/vagrant/spdk_repo/dpdk/lib/cmdline/cmdline_socket.h to /home/vagrant/spdk_repo/dpdk/build/include 00:04:19.697 Installing /home/vagrant/spdk_repo/dpdk/lib/cmdline/cmdline_cirbuf.h to /home/vagrant/spdk_repo/dpdk/build/include 00:04:19.697 Installing /home/vagrant/spdk_repo/dpdk/lib/cmdline/cmdline_parse_portlist.h to /home/vagrant/spdk_repo/dpdk/build/include 00:04:19.697 Installing /home/vagrant/spdk_repo/dpdk/lib/metrics/rte_metrics.h to /home/vagrant/spdk_repo/dpdk/build/include 00:04:19.697 Installing /home/vagrant/spdk_repo/dpdk/lib/metrics/rte_metrics_telemetry.h to /home/vagrant/spdk_repo/dpdk/build/include 00:04:19.697 Installing /home/vagrant/spdk_repo/dpdk/lib/hash/rte_fbk_hash.h to /home/vagrant/spdk_repo/dpdk/build/include 00:04:19.697 Installing /home/vagrant/spdk_repo/dpdk/lib/hash/rte_hash_crc.h to /home/vagrant/spdk_repo/dpdk/build/include 00:04:19.697 Installing /home/vagrant/spdk_repo/dpdk/lib/hash/rte_hash.h to /home/vagrant/spdk_repo/dpdk/build/include 00:04:19.697 Installing /home/vagrant/spdk_repo/dpdk/lib/hash/rte_jhash.h to /home/vagrant/spdk_repo/dpdk/build/include 00:04:19.697 Installing /home/vagrant/spdk_repo/dpdk/lib/hash/rte_thash.h to /home/vagrant/spdk_repo/dpdk/build/include 00:04:19.697 Installing /home/vagrant/spdk_repo/dpdk/lib/hash/rte_thash_gfni.h to /home/vagrant/spdk_repo/dpdk/build/include 00:04:19.697 Installing /home/vagrant/spdk_repo/dpdk/lib/hash/rte_crc_arm64.h to /home/vagrant/spdk_repo/dpdk/build/include 00:04:19.697 Installing /home/vagrant/spdk_repo/dpdk/lib/hash/rte_crc_generic.h to /home/vagrant/spdk_repo/dpdk/build/include 00:04:19.697 Installing /home/vagrant/spdk_repo/dpdk/lib/hash/rte_crc_sw.h to /home/vagrant/spdk_repo/dpdk/build/include 00:04:19.697 Installing /home/vagrant/spdk_repo/dpdk/lib/hash/rte_crc_x86.h to /home/vagrant/spdk_repo/dpdk/build/include 00:04:19.698 Installing /home/vagrant/spdk_repo/dpdk/lib/hash/rte_thash_x86_gfni.h to /home/vagrant/spdk_repo/dpdk/build/include 00:04:19.698 Installing /home/vagrant/spdk_repo/dpdk/lib/timer/rte_timer.h to /home/vagrant/spdk_repo/dpdk/build/include 00:04:19.698 Installing /home/vagrant/spdk_repo/dpdk/lib/acl/rte_acl.h to /home/vagrant/spdk_repo/dpdk/build/include 00:04:19.698 Installing /home/vagrant/spdk_repo/dpdk/lib/acl/rte_acl_osdep.h to /home/vagrant/spdk_repo/dpdk/build/include 00:04:19.698 Installing /home/vagrant/spdk_repo/dpdk/lib/bbdev/rte_bbdev.h to /home/vagrant/spdk_repo/dpdk/build/include 00:04:19.698 Installing /home/vagrant/spdk_repo/dpdk/lib/bbdev/rte_bbdev_pmd.h to /home/vagrant/spdk_repo/dpdk/build/include 00:04:19.698 Installing /home/vagrant/spdk_repo/dpdk/lib/bbdev/rte_bbdev_op.h to /home/vagrant/spdk_repo/dpdk/build/include 00:04:19.698 Installing /home/vagrant/spdk_repo/dpdk/lib/bitratestats/rte_bitrate.h to /home/vagrant/spdk_repo/dpdk/build/include 00:04:19.698 Installing /home/vagrant/spdk_repo/dpdk/lib/bpf/bpf_def.h to /home/vagrant/spdk_repo/dpdk/build/include 00:04:19.698 Installing /home/vagrant/spdk_repo/dpdk/lib/bpf/rte_bpf.h to /home/vagrant/spdk_repo/dpdk/build/include 00:04:19.698 Installing /home/vagrant/spdk_repo/dpdk/lib/bpf/rte_bpf_ethdev.h to /home/vagrant/spdk_repo/dpdk/build/include 00:04:19.698 Installing /home/vagrant/spdk_repo/dpdk/lib/cfgfile/rte_cfgfile.h to /home/vagrant/spdk_repo/dpdk/build/include 00:04:19.698 Installing /home/vagrant/spdk_repo/dpdk/lib/compressdev/rte_compressdev.h to /home/vagrant/spdk_repo/dpdk/build/include 00:04:19.698 Installing 
/home/vagrant/spdk_repo/dpdk/lib/compressdev/rte_comp.h to /home/vagrant/spdk_repo/dpdk/build/include 00:04:19.698 Installing /home/vagrant/spdk_repo/dpdk/lib/cryptodev/rte_cryptodev.h to /home/vagrant/spdk_repo/dpdk/build/include 00:04:19.698 Installing /home/vagrant/spdk_repo/dpdk/lib/cryptodev/rte_cryptodev_trace_fp.h to /home/vagrant/spdk_repo/dpdk/build/include 00:04:19.698 Installing /home/vagrant/spdk_repo/dpdk/lib/cryptodev/rte_crypto.h to /home/vagrant/spdk_repo/dpdk/build/include 00:04:19.698 Installing /home/vagrant/spdk_repo/dpdk/lib/cryptodev/rte_crypto_sym.h to /home/vagrant/spdk_repo/dpdk/build/include 00:04:19.698 Installing /home/vagrant/spdk_repo/dpdk/lib/cryptodev/rte_crypto_asym.h to /home/vagrant/spdk_repo/dpdk/build/include 00:04:19.698 Installing /home/vagrant/spdk_repo/dpdk/lib/cryptodev/rte_cryptodev_core.h to /home/vagrant/spdk_repo/dpdk/build/include 00:04:19.698 Installing /home/vagrant/spdk_repo/dpdk/lib/distributor/rte_distributor.h to /home/vagrant/spdk_repo/dpdk/build/include 00:04:19.698 Installing /home/vagrant/spdk_repo/dpdk/lib/dmadev/rte_dmadev.h to /home/vagrant/spdk_repo/dpdk/build/include 00:04:19.698 Installing /home/vagrant/spdk_repo/dpdk/lib/dmadev/rte_dmadev_core.h to /home/vagrant/spdk_repo/dpdk/build/include 00:04:19.698 Installing /home/vagrant/spdk_repo/dpdk/lib/dmadev/rte_dmadev_trace_fp.h to /home/vagrant/spdk_repo/dpdk/build/include 00:04:19.698 Installing /home/vagrant/spdk_repo/dpdk/lib/efd/rte_efd.h to /home/vagrant/spdk_repo/dpdk/build/include 00:04:19.698 Installing /home/vagrant/spdk_repo/dpdk/lib/eventdev/rte_event_crypto_adapter.h to /home/vagrant/spdk_repo/dpdk/build/include 00:04:19.698 Installing /home/vagrant/spdk_repo/dpdk/lib/eventdev/rte_event_dma_adapter.h to /home/vagrant/spdk_repo/dpdk/build/include 00:04:19.698 Installing /home/vagrant/spdk_repo/dpdk/lib/eventdev/rte_event_eth_rx_adapter.h to /home/vagrant/spdk_repo/dpdk/build/include 00:04:19.698 Installing /home/vagrant/spdk_repo/dpdk/lib/eventdev/rte_event_eth_tx_adapter.h to /home/vagrant/spdk_repo/dpdk/build/include 00:04:19.698 Installing /home/vagrant/spdk_repo/dpdk/lib/eventdev/rte_event_ring.h to /home/vagrant/spdk_repo/dpdk/build/include 00:04:19.698 Installing /home/vagrant/spdk_repo/dpdk/lib/eventdev/rte_event_timer_adapter.h to /home/vagrant/spdk_repo/dpdk/build/include 00:04:19.698 Installing /home/vagrant/spdk_repo/dpdk/lib/eventdev/rte_eventdev.h to /home/vagrant/spdk_repo/dpdk/build/include 00:04:19.698 Installing /home/vagrant/spdk_repo/dpdk/lib/eventdev/rte_eventdev_trace_fp.h to /home/vagrant/spdk_repo/dpdk/build/include 00:04:19.698 Installing /home/vagrant/spdk_repo/dpdk/lib/eventdev/rte_eventdev_core.h to /home/vagrant/spdk_repo/dpdk/build/include 00:04:19.698 Installing /home/vagrant/spdk_repo/dpdk/lib/dispatcher/rte_dispatcher.h to /home/vagrant/spdk_repo/dpdk/build/include 00:04:19.698 Installing /home/vagrant/spdk_repo/dpdk/lib/gpudev/rte_gpudev.h to /home/vagrant/spdk_repo/dpdk/build/include 00:04:19.698 Installing /home/vagrant/spdk_repo/dpdk/lib/gro/rte_gro.h to /home/vagrant/spdk_repo/dpdk/build/include 00:04:19.698 Installing /home/vagrant/spdk_repo/dpdk/lib/gso/rte_gso.h to /home/vagrant/spdk_repo/dpdk/build/include 00:04:19.698 Installing /home/vagrant/spdk_repo/dpdk/lib/ip_frag/rte_ip_frag.h to /home/vagrant/spdk_repo/dpdk/build/include 00:04:19.698 Installing /home/vagrant/spdk_repo/dpdk/lib/jobstats/rte_jobstats.h to /home/vagrant/spdk_repo/dpdk/build/include 00:04:19.698 Installing 
/home/vagrant/spdk_repo/dpdk/lib/latencystats/rte_latencystats.h to /home/vagrant/spdk_repo/dpdk/build/include 00:04:19.698 Installing /home/vagrant/spdk_repo/dpdk/lib/lpm/rte_lpm.h to /home/vagrant/spdk_repo/dpdk/build/include 00:04:19.698 Installing /home/vagrant/spdk_repo/dpdk/lib/lpm/rte_lpm6.h to /home/vagrant/spdk_repo/dpdk/build/include 00:04:19.698 Installing /home/vagrant/spdk_repo/dpdk/lib/lpm/rte_lpm_altivec.h to /home/vagrant/spdk_repo/dpdk/build/include 00:04:19.698 Installing /home/vagrant/spdk_repo/dpdk/lib/lpm/rte_lpm_neon.h to /home/vagrant/spdk_repo/dpdk/build/include 00:04:19.698 Installing /home/vagrant/spdk_repo/dpdk/lib/lpm/rte_lpm_scalar.h to /home/vagrant/spdk_repo/dpdk/build/include 00:04:19.698 Installing /home/vagrant/spdk_repo/dpdk/lib/lpm/rte_lpm_sse.h to /home/vagrant/spdk_repo/dpdk/build/include 00:04:19.698 Installing /home/vagrant/spdk_repo/dpdk/lib/lpm/rte_lpm_sve.h to /home/vagrant/spdk_repo/dpdk/build/include 00:04:19.698 Installing /home/vagrant/spdk_repo/dpdk/lib/member/rte_member.h to /home/vagrant/spdk_repo/dpdk/build/include 00:04:19.698 Installing /home/vagrant/spdk_repo/dpdk/lib/pcapng/rte_pcapng.h to /home/vagrant/spdk_repo/dpdk/build/include 00:04:19.698 Installing /home/vagrant/spdk_repo/dpdk/lib/power/power_cpufreq.h to /home/vagrant/spdk_repo/dpdk/build/include 00:04:19.698 Installing /home/vagrant/spdk_repo/dpdk/lib/power/power_uncore_ops.h to /home/vagrant/spdk_repo/dpdk/build/include 00:04:19.698 Installing /home/vagrant/spdk_repo/dpdk/lib/power/rte_power_cpufreq.h to /home/vagrant/spdk_repo/dpdk/build/include 00:04:19.698 Installing /home/vagrant/spdk_repo/dpdk/lib/power/rte_power_pmd_mgmt.h to /home/vagrant/spdk_repo/dpdk/build/include 00:04:19.698 Installing /home/vagrant/spdk_repo/dpdk/lib/power/rte_power_qos.h to /home/vagrant/spdk_repo/dpdk/build/include 00:04:19.698 Installing /home/vagrant/spdk_repo/dpdk/lib/power/rte_power_uncore.h to /home/vagrant/spdk_repo/dpdk/build/include 00:04:19.698 Installing /home/vagrant/spdk_repo/dpdk/lib/rawdev/rte_rawdev.h to /home/vagrant/spdk_repo/dpdk/build/include 00:04:19.698 Installing /home/vagrant/spdk_repo/dpdk/lib/rawdev/rte_rawdev_pmd.h to /home/vagrant/spdk_repo/dpdk/build/include 00:04:19.698 Installing /home/vagrant/spdk_repo/dpdk/lib/regexdev/rte_regexdev.h to /home/vagrant/spdk_repo/dpdk/build/include 00:04:19.698 Installing /home/vagrant/spdk_repo/dpdk/lib/regexdev/rte_regexdev_driver.h to /home/vagrant/spdk_repo/dpdk/build/include 00:04:19.698 Installing /home/vagrant/spdk_repo/dpdk/lib/regexdev/rte_regexdev_core.h to /home/vagrant/spdk_repo/dpdk/build/include 00:04:19.698 Installing /home/vagrant/spdk_repo/dpdk/lib/mldev/rte_mldev.h to /home/vagrant/spdk_repo/dpdk/build/include 00:04:19.698 Installing /home/vagrant/spdk_repo/dpdk/lib/mldev/rte_mldev_core.h to /home/vagrant/spdk_repo/dpdk/build/include 00:04:19.698 Installing /home/vagrant/spdk_repo/dpdk/lib/rib/rte_rib.h to /home/vagrant/spdk_repo/dpdk/build/include 00:04:19.698 Installing /home/vagrant/spdk_repo/dpdk/lib/rib/rte_rib6.h to /home/vagrant/spdk_repo/dpdk/build/include 00:04:19.698 Installing /home/vagrant/spdk_repo/dpdk/lib/reorder/rte_reorder.h to /home/vagrant/spdk_repo/dpdk/build/include 00:04:19.698 Installing /home/vagrant/spdk_repo/dpdk/lib/sched/rte_approx.h to /home/vagrant/spdk_repo/dpdk/build/include 00:04:19.698 Installing /home/vagrant/spdk_repo/dpdk/lib/sched/rte_red.h to /home/vagrant/spdk_repo/dpdk/build/include 00:04:19.698 Installing /home/vagrant/spdk_repo/dpdk/lib/sched/rte_sched.h to 
/home/vagrant/spdk_repo/dpdk/build/include 00:04:19.698 Installing /home/vagrant/spdk_repo/dpdk/lib/sched/rte_sched_common.h to /home/vagrant/spdk_repo/dpdk/build/include 00:04:19.698 Installing /home/vagrant/spdk_repo/dpdk/lib/sched/rte_pie.h to /home/vagrant/spdk_repo/dpdk/build/include 00:04:19.698 Installing /home/vagrant/spdk_repo/dpdk/lib/security/rte_security.h to /home/vagrant/spdk_repo/dpdk/build/include 00:04:19.698 Installing /home/vagrant/spdk_repo/dpdk/lib/security/rte_security_driver.h to /home/vagrant/spdk_repo/dpdk/build/include 00:04:19.698 Installing /home/vagrant/spdk_repo/dpdk/lib/stack/rte_stack.h to /home/vagrant/spdk_repo/dpdk/build/include 00:04:19.698 Installing /home/vagrant/spdk_repo/dpdk/lib/stack/rte_stack_std.h to /home/vagrant/spdk_repo/dpdk/build/include 00:04:19.698 Installing /home/vagrant/spdk_repo/dpdk/lib/stack/rte_stack_lf.h to /home/vagrant/spdk_repo/dpdk/build/include 00:04:19.698 Installing /home/vagrant/spdk_repo/dpdk/lib/stack/rte_stack_lf_generic.h to /home/vagrant/spdk_repo/dpdk/build/include 00:04:19.698 Installing /home/vagrant/spdk_repo/dpdk/lib/stack/rte_stack_lf_c11.h to /home/vagrant/spdk_repo/dpdk/build/include 00:04:19.698 Installing /home/vagrant/spdk_repo/dpdk/lib/stack/rte_stack_lf_stubs.h to /home/vagrant/spdk_repo/dpdk/build/include 00:04:19.698 Installing /home/vagrant/spdk_repo/dpdk/lib/vhost/rte_vdpa.h to /home/vagrant/spdk_repo/dpdk/build/include 00:04:19.699 Installing /home/vagrant/spdk_repo/dpdk/lib/vhost/rte_vhost.h to /home/vagrant/spdk_repo/dpdk/build/include 00:04:19.699 Installing /home/vagrant/spdk_repo/dpdk/lib/vhost/rte_vhost_async.h to /home/vagrant/spdk_repo/dpdk/build/include 00:04:19.699 Installing /home/vagrant/spdk_repo/dpdk/lib/vhost/rte_vhost_crypto.h to /home/vagrant/spdk_repo/dpdk/build/include 00:04:19.699 Installing /home/vagrant/spdk_repo/dpdk/lib/ipsec/rte_ipsec.h to /home/vagrant/spdk_repo/dpdk/build/include 00:04:19.699 Installing /home/vagrant/spdk_repo/dpdk/lib/ipsec/rte_ipsec_sa.h to /home/vagrant/spdk_repo/dpdk/build/include 00:04:19.699 Installing /home/vagrant/spdk_repo/dpdk/lib/ipsec/rte_ipsec_sad.h to /home/vagrant/spdk_repo/dpdk/build/include 00:04:19.699 Installing /home/vagrant/spdk_repo/dpdk/lib/ipsec/rte_ipsec_group.h to /home/vagrant/spdk_repo/dpdk/build/include 00:04:19.699 Installing /home/vagrant/spdk_repo/dpdk/lib/pdcp/rte_pdcp.h to /home/vagrant/spdk_repo/dpdk/build/include 00:04:19.699 Installing /home/vagrant/spdk_repo/dpdk/lib/pdcp/rte_pdcp_group.h to /home/vagrant/spdk_repo/dpdk/build/include 00:04:19.699 Installing /home/vagrant/spdk_repo/dpdk/lib/fib/rte_fib.h to /home/vagrant/spdk_repo/dpdk/build/include 00:04:19.699 Installing /home/vagrant/spdk_repo/dpdk/lib/fib/rte_fib6.h to /home/vagrant/spdk_repo/dpdk/build/include 00:04:19.699 Installing /home/vagrant/spdk_repo/dpdk/lib/port/rte_port_ethdev.h to /home/vagrant/spdk_repo/dpdk/build/include 00:04:19.699 Installing /home/vagrant/spdk_repo/dpdk/lib/port/rte_port_fd.h to /home/vagrant/spdk_repo/dpdk/build/include 00:04:19.699 Installing /home/vagrant/spdk_repo/dpdk/lib/port/rte_port_frag.h to /home/vagrant/spdk_repo/dpdk/build/include 00:04:19.699 Installing /home/vagrant/spdk_repo/dpdk/lib/port/rte_port_ras.h to /home/vagrant/spdk_repo/dpdk/build/include 00:04:19.699 Installing /home/vagrant/spdk_repo/dpdk/lib/port/rte_port.h to /home/vagrant/spdk_repo/dpdk/build/include 00:04:19.699 Installing /home/vagrant/spdk_repo/dpdk/lib/port/rte_port_ring.h to /home/vagrant/spdk_repo/dpdk/build/include 00:04:19.699 Installing 
/home/vagrant/spdk_repo/dpdk/lib/port/rte_port_sched.h to /home/vagrant/spdk_repo/dpdk/build/include 00:04:19.699 Installing /home/vagrant/spdk_repo/dpdk/lib/port/rte_port_source_sink.h to /home/vagrant/spdk_repo/dpdk/build/include 00:04:19.699 Installing /home/vagrant/spdk_repo/dpdk/lib/port/rte_port_sym_crypto.h to /home/vagrant/spdk_repo/dpdk/build/include 00:04:19.699 Installing /home/vagrant/spdk_repo/dpdk/lib/port/rte_port_eventdev.h to /home/vagrant/spdk_repo/dpdk/build/include 00:04:19.699 Installing /home/vagrant/spdk_repo/dpdk/lib/port/rte_swx_port.h to /home/vagrant/spdk_repo/dpdk/build/include 00:04:19.699 Installing /home/vagrant/spdk_repo/dpdk/lib/port/rte_swx_port_ethdev.h to /home/vagrant/spdk_repo/dpdk/build/include 00:04:19.699 Installing /home/vagrant/spdk_repo/dpdk/lib/port/rte_swx_port_fd.h to /home/vagrant/spdk_repo/dpdk/build/include 00:04:19.699 Installing /home/vagrant/spdk_repo/dpdk/lib/port/rte_swx_port_ring.h to /home/vagrant/spdk_repo/dpdk/build/include 00:04:19.699 Installing /home/vagrant/spdk_repo/dpdk/lib/port/rte_swx_port_source_sink.h to /home/vagrant/spdk_repo/dpdk/build/include 00:04:19.699 Installing /home/vagrant/spdk_repo/dpdk/lib/pdump/rte_pdump.h to /home/vagrant/spdk_repo/dpdk/build/include 00:04:19.699 Installing /home/vagrant/spdk_repo/dpdk/lib/table/rte_lru.h to /home/vagrant/spdk_repo/dpdk/build/include 00:04:19.699 Installing /home/vagrant/spdk_repo/dpdk/lib/table/rte_swx_hash_func.h to /home/vagrant/spdk_repo/dpdk/build/include 00:04:19.699 Installing /home/vagrant/spdk_repo/dpdk/lib/table/rte_swx_table.h to /home/vagrant/spdk_repo/dpdk/build/include 00:04:19.699 Installing /home/vagrant/spdk_repo/dpdk/lib/table/rte_swx_table_em.h to /home/vagrant/spdk_repo/dpdk/build/include 00:04:19.699 Installing /home/vagrant/spdk_repo/dpdk/lib/table/rte_swx_table_learner.h to /home/vagrant/spdk_repo/dpdk/build/include 00:04:19.699 Installing /home/vagrant/spdk_repo/dpdk/lib/table/rte_swx_table_selector.h to /home/vagrant/spdk_repo/dpdk/build/include 00:04:19.699 Installing /home/vagrant/spdk_repo/dpdk/lib/table/rte_swx_table_wm.h to /home/vagrant/spdk_repo/dpdk/build/include 00:04:19.699 Installing /home/vagrant/spdk_repo/dpdk/lib/table/rte_table.h to /home/vagrant/spdk_repo/dpdk/build/include 00:04:19.699 Installing /home/vagrant/spdk_repo/dpdk/lib/table/rte_table_acl.h to /home/vagrant/spdk_repo/dpdk/build/include 00:04:19.699 Installing /home/vagrant/spdk_repo/dpdk/lib/table/rte_table_array.h to /home/vagrant/spdk_repo/dpdk/build/include 00:04:19.699 Installing /home/vagrant/spdk_repo/dpdk/lib/table/rte_table_hash.h to /home/vagrant/spdk_repo/dpdk/build/include 00:04:19.699 Installing /home/vagrant/spdk_repo/dpdk/lib/table/rte_table_hash_cuckoo.h to /home/vagrant/spdk_repo/dpdk/build/include 00:04:19.699 Installing /home/vagrant/spdk_repo/dpdk/lib/table/rte_table_hash_func.h to /home/vagrant/spdk_repo/dpdk/build/include 00:04:19.699 Installing /home/vagrant/spdk_repo/dpdk/lib/table/rte_table_lpm.h to /home/vagrant/spdk_repo/dpdk/build/include 00:04:19.699 Installing /home/vagrant/spdk_repo/dpdk/lib/table/rte_table_lpm_ipv6.h to /home/vagrant/spdk_repo/dpdk/build/include 00:04:19.699 Installing /home/vagrant/spdk_repo/dpdk/lib/table/rte_table_stub.h to /home/vagrant/spdk_repo/dpdk/build/include 00:04:19.699 Installing /home/vagrant/spdk_repo/dpdk/lib/table/rte_lru_arm64.h to /home/vagrant/spdk_repo/dpdk/build/include 00:04:19.699 Installing /home/vagrant/spdk_repo/dpdk/lib/table/rte_lru_x86.h to /home/vagrant/spdk_repo/dpdk/build/include 00:04:19.699 
Installing /home/vagrant/spdk_repo/dpdk/lib/table/rte_table_hash_func_arm64.h to /home/vagrant/spdk_repo/dpdk/build/include 00:04:19.699 Installing /home/vagrant/spdk_repo/dpdk/lib/pipeline/rte_pipeline.h to /home/vagrant/spdk_repo/dpdk/build/include 00:04:19.699 Installing /home/vagrant/spdk_repo/dpdk/lib/pipeline/rte_port_in_action.h to /home/vagrant/spdk_repo/dpdk/build/include 00:04:19.699 Installing /home/vagrant/spdk_repo/dpdk/lib/pipeline/rte_table_action.h to /home/vagrant/spdk_repo/dpdk/build/include 00:04:19.699 Installing /home/vagrant/spdk_repo/dpdk/lib/pipeline/rte_swx_ipsec.h to /home/vagrant/spdk_repo/dpdk/build/include 00:04:19.699 Installing /home/vagrant/spdk_repo/dpdk/lib/pipeline/rte_swx_pipeline.h to /home/vagrant/spdk_repo/dpdk/build/include 00:04:19.699 Installing /home/vagrant/spdk_repo/dpdk/lib/pipeline/rte_swx_extern.h to /home/vagrant/spdk_repo/dpdk/build/include 00:04:19.699 Installing /home/vagrant/spdk_repo/dpdk/lib/pipeline/rte_swx_ctl.h to /home/vagrant/spdk_repo/dpdk/build/include 00:04:19.699 Installing /home/vagrant/spdk_repo/dpdk/lib/graph/rte_graph.h to /home/vagrant/spdk_repo/dpdk/build/include 00:04:19.699 Installing /home/vagrant/spdk_repo/dpdk/lib/graph/rte_graph_worker.h to /home/vagrant/spdk_repo/dpdk/build/include 00:04:19.699 Installing /home/vagrant/spdk_repo/dpdk/lib/graph/rte_graph_model_mcore_dispatch.h to /home/vagrant/spdk_repo/dpdk/build/include 00:04:19.699 Installing /home/vagrant/spdk_repo/dpdk/lib/graph/rte_graph_model_rtc.h to /home/vagrant/spdk_repo/dpdk/build/include 00:04:19.699 Installing /home/vagrant/spdk_repo/dpdk/lib/graph/rte_graph_worker_common.h to /home/vagrant/spdk_repo/dpdk/build/include 00:04:19.699 Installing /home/vagrant/spdk_repo/dpdk/lib/node/rte_node_eth_api.h to /home/vagrant/spdk_repo/dpdk/build/include 00:04:19.699 Installing /home/vagrant/spdk_repo/dpdk/lib/node/rte_node_ip4_api.h to /home/vagrant/spdk_repo/dpdk/build/include 00:04:19.699 Installing /home/vagrant/spdk_repo/dpdk/lib/node/rte_node_ip6_api.h to /home/vagrant/spdk_repo/dpdk/build/include 00:04:19.699 Installing /home/vagrant/spdk_repo/dpdk/lib/node/rte_node_udp4_input_api.h to /home/vagrant/spdk_repo/dpdk/build/include 00:04:19.699 Installing /home/vagrant/spdk_repo/dpdk/drivers/bus/pci/rte_bus_pci.h to /home/vagrant/spdk_repo/dpdk/build/include 00:04:19.699 Installing /home/vagrant/spdk_repo/dpdk/drivers/bus/vdev/rte_bus_vdev.h to /home/vagrant/spdk_repo/dpdk/build/include 00:04:19.699 Installing /home/vagrant/spdk_repo/dpdk/drivers/net/i40e/rte_pmd_i40e.h to /home/vagrant/spdk_repo/dpdk/build/include 00:04:19.699 Installing /home/vagrant/spdk_repo/dpdk/drivers/power/kvm_vm/rte_power_guest_channel.h to /home/vagrant/spdk_repo/dpdk/build/include 00:04:19.699 Installing /home/vagrant/spdk_repo/dpdk/buildtools/dpdk-cmdline-gen.py to /home/vagrant/spdk_repo/dpdk/build/bin 00:04:19.699 Installing /home/vagrant/spdk_repo/dpdk/usertools/dpdk-devbind.py to /home/vagrant/spdk_repo/dpdk/build/bin 00:04:19.699 Installing /home/vagrant/spdk_repo/dpdk/usertools/dpdk-pmdinfo.py to /home/vagrant/spdk_repo/dpdk/build/bin 00:04:19.699 Installing /home/vagrant/spdk_repo/dpdk/usertools/dpdk-telemetry.py to /home/vagrant/spdk_repo/dpdk/build/bin 00:04:19.699 Installing /home/vagrant/spdk_repo/dpdk/usertools/dpdk-hugepages.py to /home/vagrant/spdk_repo/dpdk/build/bin 00:04:19.699 Installing /home/vagrant/spdk_repo/dpdk/usertools/dpdk-rss-flows.py to /home/vagrant/spdk_repo/dpdk/build/bin 00:04:19.699 Installing 
/home/vagrant/spdk_repo/dpdk/usertools/dpdk-telemetry-exporter.py to /home/vagrant/spdk_repo/dpdk/build/bin 00:04:19.699 Installing /home/vagrant/spdk_repo/dpdk/build-tmp/rte_build_config.h to /home/vagrant/spdk_repo/dpdk/build/include 00:04:19.699 Installing /home/vagrant/spdk_repo/dpdk/build-tmp/meson-private/libdpdk-libs.pc to /home/vagrant/spdk_repo/dpdk/build/lib/pkgconfig 00:04:19.699 Installing /home/vagrant/spdk_repo/dpdk/build-tmp/meson-private/libdpdk.pc to /home/vagrant/spdk_repo/dpdk/build/lib/pkgconfig 00:04:19.699 Installing symlink pointing to librte_log.so.25.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_log.so.25 00:04:19.699 Installing symlink pointing to librte_log.so.25 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_log.so 00:04:19.700 Installing symlink pointing to librte_kvargs.so.25.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_kvargs.so.25 00:04:19.700 Installing symlink pointing to librte_kvargs.so.25 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_kvargs.so 00:04:19.700 Installing symlink pointing to librte_argparse.so.25.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_argparse.so.25 00:04:19.700 Installing symlink pointing to librte_argparse.so.25 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_argparse.so 00:04:19.700 Installing symlink pointing to librte_telemetry.so.25.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_telemetry.so.25 00:04:19.700 Installing symlink pointing to librte_telemetry.so.25 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_telemetry.so 00:04:19.700 Installing symlink pointing to librte_eal.so.25.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_eal.so.25 00:04:19.700 Installing symlink pointing to librte_eal.so.25 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_eal.so 00:04:19.700 Installing symlink pointing to librte_ring.so.25.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_ring.so.25 00:04:19.700 Installing symlink pointing to librte_ring.so.25 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_ring.so 00:04:19.700 Installing symlink pointing to librte_rcu.so.25.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_rcu.so.25 00:04:19.700 Installing symlink pointing to librte_rcu.so.25 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_rcu.so 00:04:19.700 Installing symlink pointing to librte_mempool.so.25.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_mempool.so.25 00:04:19.700 Installing symlink pointing to librte_mempool.so.25 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_mempool.so 00:04:19.700 Installing symlink pointing to librte_mbuf.so.25.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_mbuf.so.25 00:04:19.700 Installing symlink pointing to librte_mbuf.so.25 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_mbuf.so 00:04:19.700 Installing symlink pointing to librte_net.so.25.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_net.so.25 00:04:19.700 Installing symlink pointing to librte_net.so.25 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_net.so 00:04:19.700 Installing symlink pointing to librte_meter.so.25.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_meter.so.25 00:04:19.700 Installing symlink pointing to librte_meter.so.25 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_meter.so 00:04:19.700 Installing symlink pointing to librte_ethdev.so.25.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_ethdev.so.25 00:04:19.700 Installing symlink pointing to librte_ethdev.so.25 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_ethdev.so 00:04:19.700 Installing symlink pointing to librte_pci.so.25.0 to 
/home/vagrant/spdk_repo/dpdk/build/lib/librte_pci.so.25 00:04:19.700 Installing symlink pointing to librte_pci.so.25 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_pci.so 00:04:19.700 Installing symlink pointing to librte_cmdline.so.25.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_cmdline.so.25 00:04:19.700 Installing symlink pointing to librte_cmdline.so.25 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_cmdline.so 00:04:19.700 Installing symlink pointing to librte_metrics.so.25.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_metrics.so.25 00:04:19.700 Installing symlink pointing to librte_metrics.so.25 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_metrics.so 00:04:19.700 Installing symlink pointing to librte_hash.so.25.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_hash.so.25 00:04:19.700 Installing symlink pointing to librte_hash.so.25 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_hash.so 00:04:19.700 Installing symlink pointing to librte_timer.so.25.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_timer.so.25 00:04:19.700 Installing symlink pointing to librte_timer.so.25 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_timer.so 00:04:19.700 Installing symlink pointing to librte_acl.so.25.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_acl.so.25 00:04:19.700 Installing symlink pointing to librte_acl.so.25 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_acl.so 00:04:19.700 Installing symlink pointing to librte_bbdev.so.25.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_bbdev.so.25 00:04:19.700 Installing symlink pointing to librte_bbdev.so.25 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_bbdev.so 00:04:19.700 Installing symlink pointing to librte_bitratestats.so.25.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_bitratestats.so.25 00:04:19.700 Installing symlink pointing to librte_bitratestats.so.25 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_bitratestats.so 00:04:19.700 Installing symlink pointing to librte_bpf.so.25.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_bpf.so.25 00:04:19.700 Installing symlink pointing to librte_bpf.so.25 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_bpf.so 00:04:19.700 Installing symlink pointing to librte_cfgfile.so.25.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_cfgfile.so.25 00:04:19.700 Installing symlink pointing to librte_cfgfile.so.25 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_cfgfile.so 00:04:19.700 Installing symlink pointing to librte_compressdev.so.25.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_compressdev.so.25 00:04:19.700 Installing symlink pointing to librte_compressdev.so.25 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_compressdev.so 00:04:19.700 Installing symlink pointing to librte_cryptodev.so.25.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_cryptodev.so.25 00:04:19.700 Installing symlink pointing to librte_cryptodev.so.25 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_cryptodev.so 00:04:19.700 Installing symlink pointing to librte_distributor.so.25.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_distributor.so.25 00:04:19.700 Installing symlink pointing to librte_distributor.so.25 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_distributor.so 00:04:19.700 Installing symlink pointing to librte_dmadev.so.25.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_dmadev.so.25 00:04:19.700 Installing symlink pointing to librte_dmadev.so.25 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_dmadev.so 00:04:19.700 Installing symlink pointing to librte_efd.so.25.0 to 
/home/vagrant/spdk_repo/dpdk/build/lib/librte_efd.so.25 00:04:19.700 Installing symlink pointing to librte_efd.so.25 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_efd.so 00:04:19.700 Installing symlink pointing to librte_eventdev.so.25.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_eventdev.so.25 00:04:19.700 Installing symlink pointing to librte_eventdev.so.25 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_eventdev.so 00:04:19.700 Installing symlink pointing to librte_dispatcher.so.25.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_dispatcher.so.25 00:04:19.700 Installing symlink pointing to librte_dispatcher.so.25 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_dispatcher.so 00:04:19.700 Installing symlink pointing to librte_gpudev.so.25.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_gpudev.so.25 00:04:19.700 Installing symlink pointing to librte_gpudev.so.25 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_gpudev.so 00:04:19.700 Installing symlink pointing to librte_gro.so.25.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_gro.so.25 00:04:19.700 Installing symlink pointing to librte_gro.so.25 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_gro.so 00:04:19.700 Installing symlink pointing to librte_gso.so.25.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_gso.so.25 00:04:19.700 Installing symlink pointing to librte_gso.so.25 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_gso.so 00:04:19.700 Installing symlink pointing to librte_ip_frag.so.25.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_ip_frag.so.25 00:04:19.700 Installing symlink pointing to librte_ip_frag.so.25 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_ip_frag.so 00:04:19.700 Installing symlink pointing to librte_jobstats.so.25.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_jobstats.so.25 00:04:19.700 Installing symlink pointing to librte_jobstats.so.25 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_jobstats.so 00:04:19.700 Installing symlink pointing to librte_latencystats.so.25.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_latencystats.so.25 00:04:19.700 Installing symlink pointing to librte_latencystats.so.25 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_latencystats.so 00:04:19.700 Installing symlink pointing to librte_lpm.so.25.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_lpm.so.25 00:04:19.700 Installing symlink pointing to librte_lpm.so.25 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_lpm.so 00:04:19.700 Installing symlink pointing to librte_member.so.25.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_member.so.25 00:04:19.700 Installing symlink pointing to librte_member.so.25 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_member.so 00:04:19.700 Installing symlink pointing to librte_pcapng.so.25.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_pcapng.so.25 00:04:19.700 Installing symlink pointing to librte_pcapng.so.25 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_pcapng.so 00:04:19.700 Installing symlink pointing to librte_power.so.25.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_power.so.25 00:04:19.700 Installing symlink pointing to librte_power.so.25 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_power.so 00:04:19.700 Installing symlink pointing to librte_rawdev.so.25.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_rawdev.so.25 00:04:19.700 Installing symlink pointing to librte_rawdev.so.25 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_rawdev.so 00:04:19.700 Installing symlink pointing to librte_regexdev.so.25.0 to 
/home/vagrant/spdk_repo/dpdk/build/lib/librte_regexdev.so.25 00:04:19.701 Installing symlink pointing to librte_regexdev.so.25 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_regexdev.so 00:04:19.701 Installing symlink pointing to librte_mldev.so.25.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_mldev.so.25 00:04:19.701 Installing symlink pointing to librte_mldev.so.25 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_mldev.so 00:04:19.701 Installing symlink pointing to librte_rib.so.25.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_rib.so.25 00:04:19.701 Installing symlink pointing to librte_rib.so.25 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_rib.so 00:04:19.701 Installing symlink pointing to librte_reorder.so.25.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_reorder.so.25 00:04:19.701 Installing symlink pointing to librte_reorder.so.25 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_reorder.so 00:04:19.701 Installing symlink pointing to librte_sched.so.25.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_sched.so.25 00:04:19.701 Installing symlink pointing to librte_sched.so.25 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_sched.so 00:04:19.701 Installing symlink pointing to librte_security.so.25.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_security.so.25 00:04:19.701 Installing symlink pointing to librte_security.so.25 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_security.so 00:04:19.701 Installing symlink pointing to librte_stack.so.25.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_stack.so.25 00:04:19.701 Installing symlink pointing to librte_stack.so.25 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_stack.so 00:04:19.701 Installing symlink pointing to librte_vhost.so.25.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_vhost.so.25 00:04:19.701 Installing symlink pointing to librte_vhost.so.25 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_vhost.so 00:04:19.701 Installing symlink pointing to librte_ipsec.so.25.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_ipsec.so.25 00:04:19.701 Installing symlink pointing to librte_ipsec.so.25 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_ipsec.so 00:04:19.701 Installing symlink pointing to librte_pdcp.so.25.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_pdcp.so.25 00:04:19.701 Installing symlink pointing to librte_pdcp.so.25 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_pdcp.so 00:04:19.701 Installing symlink pointing to librte_fib.so.25.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_fib.so.25 00:04:19.701 Installing symlink pointing to librte_fib.so.25 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_fib.so 00:04:19.701 Installing symlink pointing to librte_port.so.25.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_port.so.25 00:04:19.701 Installing symlink pointing to librte_port.so.25 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_port.so 00:04:19.701 Installing symlink pointing to librte_pdump.so.25.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_pdump.so.25 00:04:19.701 Installing symlink pointing to librte_pdump.so.25 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_pdump.so 00:04:19.701 Installing symlink pointing to librte_table.so.25.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_table.so.25 00:04:19.701 Installing symlink pointing to librte_table.so.25 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_table.so 00:04:19.701 Installing symlink pointing to librte_pipeline.so.25.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_pipeline.so.25 00:04:19.701 Installing 
symlink pointing to librte_pipeline.so.25 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_pipeline.so 00:04:19.701 Installing symlink pointing to librte_graph.so.25.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_graph.so.25 00:04:19.701 Installing symlink pointing to librte_graph.so.25 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_graph.so 00:04:19.701 Installing symlink pointing to librte_node.so.25.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_node.so.25 00:04:19.701 Installing symlink pointing to librte_node.so.25 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_node.so 00:04:19.701 Installing symlink pointing to librte_bus_pci.so.25.0 to /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-25.0/librte_bus_pci.so.25 00:04:19.701 Installing symlink pointing to librte_bus_pci.so.25 to /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-25.0/librte_bus_pci.so 00:04:19.701 Installing symlink pointing to librte_bus_vdev.so.25.0 to /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-25.0/librte_bus_vdev.so.25 00:04:19.701 Installing symlink pointing to librte_bus_vdev.so.25 to /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-25.0/librte_bus_vdev.so 00:04:19.701 Installing symlink pointing to librte_mempool_ring.so.25.0 to /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-25.0/librte_mempool_ring.so.25 00:04:19.701 './librte_bus_pci.so' -> 'dpdk/pmds-25.0/librte_bus_pci.so' 00:04:19.701 './librte_bus_pci.so.25' -> 'dpdk/pmds-25.0/librte_bus_pci.so.25' 00:04:19.701 './librte_bus_pci.so.25.0' -> 'dpdk/pmds-25.0/librte_bus_pci.so.25.0' 00:04:19.701 './librte_bus_vdev.so' -> 'dpdk/pmds-25.0/librte_bus_vdev.so' 00:04:19.701 './librte_bus_vdev.so.25' -> 'dpdk/pmds-25.0/librte_bus_vdev.so.25' 00:04:19.701 './librte_bus_vdev.so.25.0' -> 'dpdk/pmds-25.0/librte_bus_vdev.so.25.0' 00:04:19.701 './librte_mempool_ring.so' -> 'dpdk/pmds-25.0/librte_mempool_ring.so' 00:04:19.701 './librte_mempool_ring.so.25' -> 'dpdk/pmds-25.0/librte_mempool_ring.so.25' 00:04:19.701 './librte_mempool_ring.so.25.0' -> 'dpdk/pmds-25.0/librte_mempool_ring.so.25.0' 00:04:19.701 './librte_net_i40e.so' -> 'dpdk/pmds-25.0/librte_net_i40e.so' 00:04:19.701 './librte_net_i40e.so.25' -> 'dpdk/pmds-25.0/librte_net_i40e.so.25' 00:04:19.701 './librte_net_i40e.so.25.0' -> 'dpdk/pmds-25.0/librte_net_i40e.so.25.0' 00:04:19.701 './librte_power_acpi.so' -> 'dpdk/pmds-25.0/librte_power_acpi.so' 00:04:19.701 './librte_power_acpi.so.25' -> 'dpdk/pmds-25.0/librte_power_acpi.so.25' 00:04:19.701 './librte_power_acpi.so.25.0' -> 'dpdk/pmds-25.0/librte_power_acpi.so.25.0' 00:04:19.701 './librte_power_amd_pstate.so' -> 'dpdk/pmds-25.0/librte_power_amd_pstate.so' 00:04:19.701 './librte_power_amd_pstate.so.25' -> 'dpdk/pmds-25.0/librte_power_amd_pstate.so.25' 00:04:19.701 './librte_power_amd_pstate.so.25.0' -> 'dpdk/pmds-25.0/librte_power_amd_pstate.so.25.0' 00:04:19.701 './librte_power_cppc.so' -> 'dpdk/pmds-25.0/librte_power_cppc.so' 00:04:19.701 './librte_power_cppc.so.25' -> 'dpdk/pmds-25.0/librte_power_cppc.so.25' 00:04:19.701 './librte_power_cppc.so.25.0' -> 'dpdk/pmds-25.0/librte_power_cppc.so.25.0' 00:04:19.701 './librte_power_intel_pstate.so' -> 'dpdk/pmds-25.0/librte_power_intel_pstate.so' 00:04:19.701 './librte_power_intel_pstate.so.25' -> 'dpdk/pmds-25.0/librte_power_intel_pstate.so.25' 00:04:19.701 './librte_power_intel_pstate.so.25.0' -> 'dpdk/pmds-25.0/librte_power_intel_pstate.so.25.0' 00:04:19.701 './librte_power_intel_uncore.so' -> 'dpdk/pmds-25.0/librte_power_intel_uncore.so' 00:04:19.701 './librte_power_intel_uncore.so.25' -> 
'dpdk/pmds-25.0/librte_power_intel_uncore.so.25' 00:04:19.701 './librte_power_intel_uncore.so.25.0' -> 'dpdk/pmds-25.0/librte_power_intel_uncore.so.25.0' 00:04:19.701 './librte_power_kvm_vm.so' -> 'dpdk/pmds-25.0/librte_power_kvm_vm.so' 00:04:19.701 './librte_power_kvm_vm.so.25' -> 'dpdk/pmds-25.0/librte_power_kvm_vm.so.25' 00:04:19.701 './librte_power_kvm_vm.so.25.0' -> 'dpdk/pmds-25.0/librte_power_kvm_vm.so.25.0' 00:04:19.701 Installing symlink pointing to librte_mempool_ring.so.25 to /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-25.0/librte_mempool_ring.so 00:04:19.701 Installing symlink pointing to librte_net_i40e.so.25.0 to /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-25.0/librte_net_i40e.so.25 00:04:19.702 Installing symlink pointing to librte_net_i40e.so.25 to /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-25.0/librte_net_i40e.so 00:04:19.702 Installing symlink pointing to librte_power_acpi.so.25.0 to /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-25.0/librte_power_acpi.so.25 00:04:19.702 Installing symlink pointing to librte_power_acpi.so.25 to /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-25.0/librte_power_acpi.so 00:04:19.702 Installing symlink pointing to librte_power_amd_pstate.so.25.0 to /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-25.0/librte_power_amd_pstate.so.25 00:04:19.702 Installing symlink pointing to librte_power_amd_pstate.so.25 to /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-25.0/librte_power_amd_pstate.so 00:04:19.702 Installing symlink pointing to librte_power_cppc.so.25.0 to /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-25.0/librte_power_cppc.so.25 00:04:19.702 Installing symlink pointing to librte_power_cppc.so.25 to /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-25.0/librte_power_cppc.so 00:04:19.702 Installing symlink pointing to librte_power_intel_pstate.so.25.0 to /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-25.0/librte_power_intel_pstate.so.25 00:04:19.702 Installing symlink pointing to librte_power_intel_pstate.so.25 to /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-25.0/librte_power_intel_pstate.so 00:04:19.702 Installing symlink pointing to librte_power_intel_uncore.so.25.0 to /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-25.0/librte_power_intel_uncore.so.25 00:04:19.702 Installing symlink pointing to librte_power_intel_uncore.so.25 to /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-25.0/librte_power_intel_uncore.so 00:04:19.702 Installing symlink pointing to librte_power_kvm_vm.so.25.0 to /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-25.0/librte_power_kvm_vm.so.25 00:04:19.702 Installing symlink pointing to librte_power_kvm_vm.so.25 to /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-25.0/librte_power_kvm_vm.so 00:04:19.702 Running custom install script '/bin/sh /home/vagrant/spdk_repo/dpdk/config/../buildtools/symlink-drivers-solibs.sh lib dpdk/pmds-25.0' 00:04:19.702 11:34:49 build_native_dpdk -- common/autobuild_common.sh@220 -- $ cat 00:04:19.702 ************************************ 00:04:19.702 END TEST build_native_dpdk 00:04:19.702 ************************************ 00:04:19.702 11:34:49 build_native_dpdk -- common/autobuild_common.sh@225 -- $ cd /home/vagrant/spdk_repo/spdk 00:04:19.702 00:04:19.702 real 1m5.574s 00:04:19.702 user 7m56.113s 00:04:19.702 sys 1m19.068s 00:04:19.702 11:34:49 build_native_dpdk -- common/autotest_common.sh@1130 -- $ xtrace_disable 00:04:19.702 11:34:49 build_native_dpdk -- common/autotest_common.sh@10 -- $ set +x 00:04:19.702 11:34:49 -- spdk/autobuild.sh@31 -- $ case 
"$SPDK_TEST_AUTOBUILD" in 00:04:19.702 11:34:49 -- spdk/autobuild.sh@47 -- $ [[ 0 -eq 1 ]] 00:04:19.702 11:34:49 -- spdk/autobuild.sh@51 -- $ [[ 0 -eq 1 ]] 00:04:19.702 11:34:49 -- spdk/autobuild.sh@55 -- $ [[ -n '' ]] 00:04:19.702 11:34:49 -- spdk/autobuild.sh@57 -- $ [[ 0 -eq 1 ]] 00:04:19.702 11:34:49 -- spdk/autobuild.sh@59 -- $ [[ 0 -eq 1 ]] 00:04:19.702 11:34:49 -- spdk/autobuild.sh@62 -- $ [[ 0 -eq 1 ]] 00:04:19.702 11:34:49 -- spdk/autobuild.sh@67 -- $ /home/vagrant/spdk_repo/spdk/configure --enable-debug --enable-werror --with-rdma --with-usdt --with-idxd --with-fio=/usr/src/fio --with-iscsi-initiator --disable-unit-tests --enable-ubsan --enable-coverage --with-ublk --with-uring --with-dpdk=/home/vagrant/spdk_repo/dpdk/build --with-shared 00:04:19.960 Using /home/vagrant/spdk_repo/dpdk/build/lib/pkgconfig for additional libs... 00:04:19.960 DPDK libraries: /home/vagrant/spdk_repo/dpdk/build/lib 00:04:19.960 DPDK includes: //home/vagrant/spdk_repo/dpdk/build/include 00:04:19.960 Using default SPDK env in /home/vagrant/spdk_repo/spdk/lib/env_dpdk 00:04:20.528 Using 'verbs' RDMA provider 00:04:36.340 Configuring ISA-L (logfile: /home/vagrant/spdk_repo/spdk/.spdk-isal.log)...done. 00:04:48.542 Configuring ISA-L-crypto (logfile: /home/vagrant/spdk_repo/spdk/.spdk-isal-crypto.log)...done. 00:04:48.542 Creating mk/config.mk...done. 00:04:48.542 Creating mk/cc.flags.mk...done. 00:04:48.542 Type 'make' to build. 00:04:48.542 11:35:18 -- spdk/autobuild.sh@70 -- $ run_test make make -j10 00:04:48.542 11:35:18 -- common/autotest_common.sh@1105 -- $ '[' 3 -le 1 ']' 00:04:48.542 11:35:18 -- common/autotest_common.sh@1111 -- $ xtrace_disable 00:04:48.542 11:35:18 -- common/autotest_common.sh@10 -- $ set +x 00:04:48.542 ************************************ 00:04:48.542 START TEST make 00:04:48.542 ************************************ 00:04:48.542 11:35:18 make -- common/autotest_common.sh@1129 -- $ make -j10 00:04:48.542 make[1]: Nothing to be done for 'all'. 
00:05:44.788 CC lib/log/log.o 00:05:44.788 CC lib/log/log_flags.o 00:05:44.788 CC lib/log/log_deprecated.o 00:05:44.788 CC lib/ut/ut.o 00:05:44.788 CC lib/ut_mock/mock.o 00:05:44.788 LIB libspdk_log.a 00:05:44.788 LIB libspdk_ut.a 00:05:44.788 LIB libspdk_ut_mock.a 00:05:44.788 SO libspdk_ut.so.2.0 00:05:44.788 SO libspdk_ut_mock.so.6.0 00:05:44.788 SO libspdk_log.so.7.1 00:05:44.788 SYMLINK libspdk_ut.so 00:05:44.788 SYMLINK libspdk_ut_mock.so 00:05:44.788 SYMLINK libspdk_log.so 00:05:44.788 CC lib/ioat/ioat.o 00:05:44.788 CC lib/util/base64.o 00:05:44.788 CC lib/util/cpuset.o 00:05:44.788 CC lib/util/bit_array.o 00:05:44.788 CC lib/util/crc16.o 00:05:44.788 CC lib/util/crc32.o 00:05:44.788 CC lib/dma/dma.o 00:05:44.788 CC lib/util/crc32c.o 00:05:44.788 CXX lib/trace_parser/trace.o 00:05:44.788 CC lib/vfio_user/host/vfio_user_pci.o 00:05:44.788 CC lib/util/crc32_ieee.o 00:05:44.788 CC lib/util/crc64.o 00:05:44.789 CC lib/vfio_user/host/vfio_user.o 00:05:44.789 CC lib/util/dif.o 00:05:44.789 LIB libspdk_dma.a 00:05:44.789 CC lib/util/fd.o 00:05:44.789 LIB libspdk_ioat.a 00:05:44.789 CC lib/util/fd_group.o 00:05:44.789 SO libspdk_dma.so.5.0 00:05:44.789 SO libspdk_ioat.so.7.0 00:05:44.789 CC lib/util/file.o 00:05:44.789 SYMLINK libspdk_dma.so 00:05:44.789 SYMLINK libspdk_ioat.so 00:05:44.789 CC lib/util/hexlify.o 00:05:44.789 CC lib/util/iov.o 00:05:44.789 CC lib/util/math.o 00:05:44.789 CC lib/util/net.o 00:05:44.789 CC lib/util/pipe.o 00:05:44.789 LIB libspdk_vfio_user.a 00:05:44.789 SO libspdk_vfio_user.so.5.0 00:05:44.789 CC lib/util/strerror_tls.o 00:05:44.789 CC lib/util/string.o 00:05:44.789 SYMLINK libspdk_vfio_user.so 00:05:44.789 CC lib/util/uuid.o 00:05:44.789 CC lib/util/xor.o 00:05:44.789 CC lib/util/zipf.o 00:05:44.789 CC lib/util/md5.o 00:05:44.789 LIB libspdk_util.a 00:05:44.789 SO libspdk_util.so.10.1 00:05:44.789 LIB libspdk_trace_parser.a 00:05:44.789 SO libspdk_trace_parser.so.6.0 00:05:44.789 SYMLINK libspdk_util.so 00:05:44.789 SYMLINK libspdk_trace_parser.so 00:05:44.789 CC lib/rdma_utils/rdma_utils.o 00:05:44.789 CC lib/json/json_util.o 00:05:44.789 CC lib/json/json_parse.o 00:05:44.789 CC lib/json/json_write.o 00:05:44.789 CC lib/idxd/idxd.o 00:05:44.789 CC lib/vmd/led.o 00:05:44.789 CC lib/vmd/vmd.o 00:05:44.789 CC lib/idxd/idxd_user.o 00:05:44.789 CC lib/env_dpdk/env.o 00:05:44.789 CC lib/conf/conf.o 00:05:44.789 CC lib/env_dpdk/memory.o 00:05:44.789 CC lib/env_dpdk/pci.o 00:05:44.789 LIB libspdk_conf.a 00:05:44.789 CC lib/env_dpdk/init.o 00:05:44.789 CC lib/idxd/idxd_kernel.o 00:05:44.789 SO libspdk_conf.so.6.0 00:05:44.789 LIB libspdk_json.a 00:05:44.789 LIB libspdk_rdma_utils.a 00:05:44.789 SYMLINK libspdk_conf.so 00:05:44.789 SO libspdk_json.so.6.0 00:05:44.789 SO libspdk_rdma_utils.so.1.0 00:05:44.789 CC lib/env_dpdk/threads.o 00:05:44.789 SYMLINK libspdk_json.so 00:05:44.789 SYMLINK libspdk_rdma_utils.so 00:05:44.789 CC lib/env_dpdk/pci_ioat.o 00:05:44.789 CC lib/env_dpdk/pci_virtio.o 00:05:44.789 CC lib/jsonrpc/jsonrpc_server.o 00:05:44.789 CC lib/env_dpdk/pci_vmd.o 00:05:44.789 LIB libspdk_idxd.a 00:05:44.789 CC lib/rdma_provider/common.o 00:05:44.789 CC lib/rdma_provider/rdma_provider_verbs.o 00:05:44.789 SO libspdk_idxd.so.12.1 00:05:44.789 LIB libspdk_vmd.a 00:05:44.789 CC lib/jsonrpc/jsonrpc_server_tcp.o 00:05:44.789 SO libspdk_vmd.so.6.0 00:05:44.789 CC lib/jsonrpc/jsonrpc_client.o 00:05:44.789 SYMLINK libspdk_idxd.so 00:05:44.789 CC lib/env_dpdk/pci_idxd.o 00:05:44.789 SYMLINK libspdk_vmd.so 00:05:44.789 CC lib/jsonrpc/jsonrpc_client_tcp.o 
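From here on the log is SPDK's terse make output: "CC" lines are individual object compilations, "LIB" lines are static archives, and because the build was configured with --with-shared each library also gets an "SO" line for its versioned shared object plus a "SYMLINK" line for the unversioned link. As a generic illustration of what one SO/SYMLINK pair amounts to (not SPDK's actual build rules; the soname scheme is assumed for the example):

  # roughly what "SO libspdk_log.so.7.1" followed by "SYMLINK libspdk_log.so" corresponds to
  cc -shared -fPIC -Wl,-soname,libspdk_log.so.7 -o libspdk_log.so.7.1 log.o log_flags.o log_deprecated.o
  ln -sf libspdk_log.so.7.1 libspdk_log.so   # unversioned link used when linking other code against the library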
00:05:44.789 CC lib/env_dpdk/pci_event.o 00:05:44.789 CC lib/env_dpdk/sigbus_handler.o 00:05:44.789 CC lib/env_dpdk/pci_dpdk.o 00:05:44.789 LIB libspdk_rdma_provider.a 00:05:44.789 CC lib/env_dpdk/pci_dpdk_2207.o 00:05:44.789 SO libspdk_rdma_provider.so.7.0 00:05:44.789 CC lib/env_dpdk/pci_dpdk_2211.o 00:05:44.789 SYMLINK libspdk_rdma_provider.so 00:05:44.789 LIB libspdk_jsonrpc.a 00:05:44.789 SO libspdk_jsonrpc.so.6.0 00:05:44.789 SYMLINK libspdk_jsonrpc.so 00:05:44.789 CC lib/rpc/rpc.o 00:05:44.789 LIB libspdk_env_dpdk.a 00:05:44.789 LIB libspdk_rpc.a 00:05:44.789 SO libspdk_env_dpdk.so.15.1 00:05:44.789 SO libspdk_rpc.so.6.0 00:05:44.789 SYMLINK libspdk_rpc.so 00:05:44.789 SYMLINK libspdk_env_dpdk.so 00:05:44.789 CC lib/notify/notify.o 00:05:44.789 CC lib/notify/notify_rpc.o 00:05:44.789 CC lib/trace/trace.o 00:05:44.789 CC lib/trace/trace_rpc.o 00:05:44.789 CC lib/trace/trace_flags.o 00:05:44.789 CC lib/keyring/keyring.o 00:05:44.789 CC lib/keyring/keyring_rpc.o 00:05:44.789 LIB libspdk_notify.a 00:05:44.789 SO libspdk_notify.so.6.0 00:05:44.789 LIB libspdk_trace.a 00:05:44.789 LIB libspdk_keyring.a 00:05:44.789 SYMLINK libspdk_notify.so 00:05:44.789 SO libspdk_trace.so.11.0 00:05:44.789 SO libspdk_keyring.so.2.0 00:05:44.789 SYMLINK libspdk_trace.so 00:05:44.789 SYMLINK libspdk_keyring.so 00:05:44.789 CC lib/sock/sock_rpc.o 00:05:44.789 CC lib/sock/sock.o 00:05:44.789 CC lib/thread/thread.o 00:05:44.789 CC lib/thread/iobuf.o 00:05:44.789 LIB libspdk_sock.a 00:05:44.789 SO libspdk_sock.so.10.0 00:05:44.789 SYMLINK libspdk_sock.so 00:05:44.789 CC lib/nvme/nvme_ctrlr_cmd.o 00:05:44.789 CC lib/nvme/nvme_ctrlr.o 00:05:44.789 CC lib/nvme/nvme_ns.o 00:05:44.789 CC lib/nvme/nvme_ns_cmd.o 00:05:44.789 CC lib/nvme/nvme_fabric.o 00:05:44.789 CC lib/nvme/nvme_pcie_common.o 00:05:44.789 CC lib/nvme/nvme_qpair.o 00:05:44.789 CC lib/nvme/nvme.o 00:05:44.789 CC lib/nvme/nvme_pcie.o 00:05:44.789 LIB libspdk_thread.a 00:05:44.789 SO libspdk_thread.so.11.0 00:05:45.047 SYMLINK libspdk_thread.so 00:05:45.047 CC lib/nvme/nvme_quirks.o 00:05:45.047 CC lib/nvme/nvme_transport.o 00:05:45.047 CC lib/nvme/nvme_discovery.o 00:05:45.047 CC lib/nvme/nvme_ctrlr_ocssd_cmd.o 00:05:45.047 CC lib/nvme/nvme_ns_ocssd_cmd.o 00:05:45.047 CC lib/nvme/nvme_tcp.o 00:05:45.305 CC lib/nvme/nvme_opal.o 00:05:45.305 CC lib/nvme/nvme_io_msg.o 00:05:45.305 CC lib/nvme/nvme_poll_group.o 00:05:45.563 CC lib/nvme/nvme_zns.o 00:05:45.563 CC lib/nvme/nvme_stubs.o 00:05:45.820 CC lib/nvme/nvme_auth.o 00:05:45.820 CC lib/nvme/nvme_cuse.o 00:05:45.820 CC lib/accel/accel.o 00:05:45.820 CC lib/accel/accel_rpc.o 00:05:46.078 CC lib/blob/blobstore.o 00:05:46.078 CC lib/nvme/nvme_rdma.o 00:05:46.078 CC lib/accel/accel_sw.o 00:05:46.337 CC lib/init/json_config.o 00:05:46.337 CC lib/init/subsystem.o 00:05:46.337 CC lib/virtio/virtio.o 00:05:46.595 CC lib/virtio/virtio_vhost_user.o 00:05:46.595 CC lib/virtio/virtio_vfio_user.o 00:05:46.595 CC lib/init/subsystem_rpc.o 00:05:46.595 CC lib/virtio/virtio_pci.o 00:05:46.853 CC lib/blob/request.o 00:05:46.853 CC lib/blob/zeroes.o 00:05:46.853 CC lib/init/rpc.o 00:05:46.853 CC lib/blob/blob_bs_dev.o 00:05:46.853 CC lib/fsdev/fsdev.o 00:05:46.853 CC lib/fsdev/fsdev_io.o 00:05:46.853 CC lib/fsdev/fsdev_rpc.o 00:05:46.853 LIB libspdk_accel.a 00:05:46.853 LIB libspdk_init.a 00:05:47.112 LIB libspdk_virtio.a 00:05:47.112 SO libspdk_accel.so.16.0 00:05:47.112 SO libspdk_init.so.6.0 00:05:47.112 SO libspdk_virtio.so.7.0 00:05:47.112 SYMLINK libspdk_init.so 00:05:47.112 SYMLINK libspdk_accel.so 00:05:47.112 
SYMLINK libspdk_virtio.so 00:05:47.396 CC lib/event/app.o 00:05:47.396 CC lib/event/reactor.o 00:05:47.396 CC lib/event/app_rpc.o 00:05:47.396 CC lib/event/scheduler_static.o 00:05:47.396 CC lib/event/log_rpc.o 00:05:47.396 CC lib/bdev/bdev.o 00:05:47.396 CC lib/bdev/bdev_rpc.o 00:05:47.396 CC lib/bdev/bdev_zone.o 00:05:47.396 CC lib/bdev/part.o 00:05:47.396 LIB libspdk_nvme.a 00:05:47.655 CC lib/bdev/scsi_nvme.o 00:05:47.655 LIB libspdk_fsdev.a 00:05:47.655 SO libspdk_fsdev.so.2.0 00:05:47.655 SO libspdk_nvme.so.15.0 00:05:47.655 SYMLINK libspdk_fsdev.so 00:05:47.655 LIB libspdk_event.a 00:05:47.914 SO libspdk_event.so.14.0 00:05:47.914 SYMLINK libspdk_event.so 00:05:47.914 CC lib/fuse_dispatcher/fuse_dispatcher.o 00:05:47.914 SYMLINK libspdk_nvme.so 00:05:48.482 LIB libspdk_fuse_dispatcher.a 00:05:48.740 SO libspdk_fuse_dispatcher.so.1.0 00:05:48.740 SYMLINK libspdk_fuse_dispatcher.so 00:05:49.000 LIB libspdk_blob.a 00:05:49.000 SO libspdk_blob.so.12.0 00:05:49.259 SYMLINK libspdk_blob.so 00:05:49.518 CC lib/lvol/lvol.o 00:05:49.518 CC lib/blobfs/blobfs.o 00:05:49.518 CC lib/blobfs/tree.o 00:05:50.085 LIB libspdk_bdev.a 00:05:50.085 SO libspdk_bdev.so.17.0 00:05:50.344 LIB libspdk_blobfs.a 00:05:50.344 SYMLINK libspdk_bdev.so 00:05:50.344 SO libspdk_blobfs.so.11.0 00:05:50.344 LIB libspdk_lvol.a 00:05:50.344 SYMLINK libspdk_blobfs.so 00:05:50.344 SO libspdk_lvol.so.11.0 00:05:50.603 SYMLINK libspdk_lvol.so 00:05:50.603 CC lib/ublk/ublk.o 00:05:50.603 CC lib/nbd/nbd.o 00:05:50.603 CC lib/nbd/nbd_rpc.o 00:05:50.603 CC lib/ublk/ublk_rpc.o 00:05:50.603 CC lib/nvmf/ctrlr_discovery.o 00:05:50.603 CC lib/nvmf/ctrlr.o 00:05:50.603 CC lib/nvmf/ctrlr_bdev.o 00:05:50.603 CC lib/nvmf/subsystem.o 00:05:50.603 CC lib/scsi/dev.o 00:05:50.603 CC lib/ftl/ftl_core.o 00:05:50.603 CC lib/ftl/ftl_init.o 00:05:50.603 CC lib/scsi/lun.o 00:05:50.862 CC lib/nvmf/nvmf.o 00:05:50.862 CC lib/ftl/ftl_layout.o 00:05:50.862 CC lib/nvmf/nvmf_rpc.o 00:05:50.862 LIB libspdk_nbd.a 00:05:51.120 CC lib/scsi/port.o 00:05:51.120 SO libspdk_nbd.so.7.0 00:05:51.120 CC lib/scsi/scsi.o 00:05:51.120 SYMLINK libspdk_nbd.so 00:05:51.120 CC lib/scsi/scsi_bdev.o 00:05:51.120 LIB libspdk_ublk.a 00:05:51.120 CC lib/nvmf/transport.o 00:05:51.120 SO libspdk_ublk.so.3.0 00:05:51.120 CC lib/ftl/ftl_debug.o 00:05:51.120 CC lib/ftl/ftl_io.o 00:05:51.379 CC lib/nvmf/tcp.o 00:05:51.379 SYMLINK libspdk_ublk.so 00:05:51.379 CC lib/nvmf/stubs.o 00:05:51.379 CC lib/ftl/ftl_sb.o 00:05:51.379 CC lib/ftl/ftl_l2p.o 00:05:51.637 CC lib/scsi/scsi_pr.o 00:05:51.637 CC lib/scsi/scsi_rpc.o 00:05:51.637 CC lib/nvmf/mdns_server.o 00:05:51.637 CC lib/ftl/ftl_l2p_flat.o 00:05:51.637 CC lib/ftl/ftl_nv_cache.o 00:05:51.637 CC lib/nvmf/rdma.o 00:05:51.896 CC lib/nvmf/auth.o 00:05:51.896 CC lib/scsi/task.o 00:05:51.896 CC lib/ftl/ftl_band.o 00:05:51.896 CC lib/ftl/ftl_band_ops.o 00:05:51.896 CC lib/ftl/ftl_writer.o 00:05:52.155 LIB libspdk_scsi.a 00:05:52.155 SO libspdk_scsi.so.9.0 00:05:52.155 CC lib/ftl/ftl_rq.o 00:05:52.155 CC lib/ftl/ftl_reloc.o 00:05:52.155 SYMLINK libspdk_scsi.so 00:05:52.155 CC lib/ftl/ftl_l2p_cache.o 00:05:52.155 CC lib/ftl/ftl_p2l.o 00:05:52.413 CC lib/ftl/ftl_p2l_log.o 00:05:52.413 CC lib/ftl/mngt/ftl_mngt.o 00:05:52.413 CC lib/iscsi/conn.o 00:05:52.672 CC lib/iscsi/init_grp.o 00:05:52.672 CC lib/iscsi/iscsi.o 00:05:52.672 CC lib/ftl/mngt/ftl_mngt_bdev.o 00:05:52.672 CC lib/ftl/mngt/ftl_mngt_shutdown.o 00:05:52.672 CC lib/ftl/mngt/ftl_mngt_startup.o 00:05:52.672 CC lib/vhost/vhost.o 00:05:52.672 CC lib/ftl/mngt/ftl_mngt_md.o 00:05:52.672 
CC lib/ftl/mngt/ftl_mngt_misc.o 00:05:52.930 CC lib/iscsi/param.o 00:05:52.930 CC lib/iscsi/portal_grp.o 00:05:52.930 CC lib/iscsi/tgt_node.o 00:05:52.930 CC lib/iscsi/iscsi_subsystem.o 00:05:52.930 CC lib/iscsi/iscsi_rpc.o 00:05:53.188 CC lib/vhost/vhost_rpc.o 00:05:53.188 CC lib/ftl/mngt/ftl_mngt_ioch.o 00:05:53.188 CC lib/ftl/mngt/ftl_mngt_l2p.o 00:05:53.188 CC lib/iscsi/task.o 00:05:53.446 CC lib/vhost/vhost_scsi.o 00:05:53.446 CC lib/ftl/mngt/ftl_mngt_band.o 00:05:53.446 CC lib/vhost/vhost_blk.o 00:05:53.446 CC lib/vhost/rte_vhost_user.o 00:05:53.446 CC lib/ftl/mngt/ftl_mngt_self_test.o 00:05:53.446 CC lib/ftl/mngt/ftl_mngt_p2l.o 00:05:53.705 CC lib/ftl/mngt/ftl_mngt_recovery.o 00:05:53.705 CC lib/ftl/mngt/ftl_mngt_upgrade.o 00:05:53.705 CC lib/ftl/utils/ftl_conf.o 00:05:53.705 CC lib/ftl/utils/ftl_md.o 00:05:53.705 CC lib/ftl/utils/ftl_mempool.o 00:05:53.705 LIB libspdk_nvmf.a 00:05:53.964 CC lib/ftl/utils/ftl_bitmap.o 00:05:53.964 CC lib/ftl/utils/ftl_property.o 00:05:53.964 LIB libspdk_iscsi.a 00:05:53.964 CC lib/ftl/utils/ftl_layout_tracker_bdev.o 00:05:53.964 SO libspdk_nvmf.so.20.0 00:05:53.964 CC lib/ftl/upgrade/ftl_layout_upgrade.o 00:05:53.964 CC lib/ftl/upgrade/ftl_sb_upgrade.o 00:05:54.222 SO libspdk_iscsi.so.8.0 00:05:54.222 SYMLINK libspdk_nvmf.so 00:05:54.222 CC lib/ftl/upgrade/ftl_p2l_upgrade.o 00:05:54.222 CC lib/ftl/upgrade/ftl_band_upgrade.o 00:05:54.222 CC lib/ftl/upgrade/ftl_chunk_upgrade.o 00:05:54.222 CC lib/ftl/upgrade/ftl_trim_upgrade.o 00:05:54.222 SYMLINK libspdk_iscsi.so 00:05:54.222 CC lib/ftl/upgrade/ftl_sb_v3.o 00:05:54.222 CC lib/ftl/upgrade/ftl_sb_v5.o 00:05:54.222 CC lib/ftl/nvc/ftl_nvc_dev.o 00:05:54.481 CC lib/ftl/nvc/ftl_nvc_bdev_vss.o 00:05:54.481 CC lib/ftl/nvc/ftl_nvc_bdev_non_vss.o 00:05:54.481 CC lib/ftl/nvc/ftl_nvc_bdev_common.o 00:05:54.481 CC lib/ftl/base/ftl_base_dev.o 00:05:54.481 CC lib/ftl/base/ftl_base_bdev.o 00:05:54.481 CC lib/ftl/ftl_trace.o 00:05:54.481 LIB libspdk_vhost.a 00:05:54.481 SO libspdk_vhost.so.8.0 00:05:54.740 SYMLINK libspdk_vhost.so 00:05:54.740 LIB libspdk_ftl.a 00:05:54.998 SO libspdk_ftl.so.9.0 00:05:55.257 SYMLINK libspdk_ftl.so 00:05:55.824 CC module/env_dpdk/env_dpdk_rpc.o 00:05:55.824 CC module/scheduler/dynamic/scheduler_dynamic.o 00:05:55.824 CC module/accel/ioat/accel_ioat.o 00:05:55.824 CC module/blob/bdev/blob_bdev.o 00:05:55.824 CC module/keyring/file/keyring.o 00:05:55.824 CC module/accel/dsa/accel_dsa.o 00:05:55.824 CC module/fsdev/aio/fsdev_aio.o 00:05:55.824 CC module/accel/error/accel_error.o 00:05:55.824 CC module/keyring/linux/keyring.o 00:05:55.824 CC module/sock/posix/posix.o 00:05:55.824 LIB libspdk_env_dpdk_rpc.a 00:05:55.824 SO libspdk_env_dpdk_rpc.so.6.0 00:05:55.824 SYMLINK libspdk_env_dpdk_rpc.so 00:05:55.824 CC module/keyring/file/keyring_rpc.o 00:05:55.824 CC module/keyring/linux/keyring_rpc.o 00:05:55.824 CC module/accel/ioat/accel_ioat_rpc.o 00:05:55.824 CC module/accel/error/accel_error_rpc.o 00:05:56.083 LIB libspdk_scheduler_dynamic.a 00:05:56.083 SO libspdk_scheduler_dynamic.so.4.0 00:05:56.083 LIB libspdk_blob_bdev.a 00:05:56.083 SO libspdk_blob_bdev.so.12.0 00:05:56.083 LIB libspdk_keyring_linux.a 00:05:56.083 LIB libspdk_keyring_file.a 00:05:56.083 CC module/accel/dsa/accel_dsa_rpc.o 00:05:56.083 SYMLINK libspdk_scheduler_dynamic.so 00:05:56.083 LIB libspdk_accel_ioat.a 00:05:56.083 SO libspdk_keyring_file.so.2.0 00:05:56.083 SO libspdk_keyring_linux.so.1.0 00:05:56.083 SO libspdk_accel_ioat.so.6.0 00:05:56.083 LIB libspdk_accel_error.a 00:05:56.083 SYMLINK libspdk_blob_bdev.so 
00:05:56.083 CC module/scheduler/dpdk_governor/dpdk_governor.o 00:05:56.083 SO libspdk_accel_error.so.2.0 00:05:56.083 SYMLINK libspdk_keyring_file.so 00:05:56.083 SYMLINK libspdk_keyring_linux.so 00:05:56.083 CC module/fsdev/aio/fsdev_aio_rpc.o 00:05:56.083 SYMLINK libspdk_accel_ioat.so 00:05:56.083 CC module/fsdev/aio/linux_aio_mgr.o 00:05:56.342 SYMLINK libspdk_accel_error.so 00:05:56.342 LIB libspdk_accel_dsa.a 00:05:56.342 SO libspdk_accel_dsa.so.5.0 00:05:56.342 CC module/accel/iaa/accel_iaa.o 00:05:56.342 LIB libspdk_scheduler_dpdk_governor.a 00:05:56.342 CC module/sock/uring/uring.o 00:05:56.342 SO libspdk_scheduler_dpdk_governor.so.4.0 00:05:56.342 SYMLINK libspdk_accel_dsa.so 00:05:56.342 CC module/accel/iaa/accel_iaa_rpc.o 00:05:56.342 SYMLINK libspdk_scheduler_dpdk_governor.so 00:05:56.342 LIB libspdk_fsdev_aio.a 00:05:56.342 CC module/scheduler/gscheduler/gscheduler.o 00:05:56.342 SO libspdk_fsdev_aio.so.1.0 00:05:56.342 LIB libspdk_sock_posix.a 00:05:56.342 CC module/bdev/delay/vbdev_delay.o 00:05:56.602 SO libspdk_sock_posix.so.6.0 00:05:56.602 SYMLINK libspdk_fsdev_aio.so 00:05:56.602 LIB libspdk_accel_iaa.a 00:05:56.602 CC module/bdev/error/vbdev_error.o 00:05:56.602 CC module/bdev/gpt/gpt.o 00:05:56.602 SO libspdk_accel_iaa.so.3.0 00:05:56.602 SYMLINK libspdk_sock_posix.so 00:05:56.602 CC module/bdev/delay/vbdev_delay_rpc.o 00:05:56.602 LIB libspdk_scheduler_gscheduler.a 00:05:56.602 CC module/blobfs/bdev/blobfs_bdev.o 00:05:56.602 SO libspdk_scheduler_gscheduler.so.4.0 00:05:56.602 SYMLINK libspdk_accel_iaa.so 00:05:56.602 SYMLINK libspdk_scheduler_gscheduler.so 00:05:56.602 CC module/bdev/malloc/bdev_malloc.o 00:05:56.602 CC module/bdev/lvol/vbdev_lvol.o 00:05:56.860 CC module/bdev/gpt/vbdev_gpt.o 00:05:56.860 CC module/blobfs/bdev/blobfs_bdev_rpc.o 00:05:56.860 CC module/bdev/null/bdev_null.o 00:05:56.860 LIB libspdk_bdev_delay.a 00:05:56.860 CC module/bdev/error/vbdev_error_rpc.o 00:05:56.860 CC module/bdev/nvme/bdev_nvme.o 00:05:56.860 SO libspdk_bdev_delay.so.6.0 00:05:56.860 CC module/bdev/passthru/vbdev_passthru.o 00:05:56.860 SYMLINK libspdk_bdev_delay.so 00:05:56.860 CC module/bdev/passthru/vbdev_passthru_rpc.o 00:05:57.119 LIB libspdk_blobfs_bdev.a 00:05:57.119 LIB libspdk_sock_uring.a 00:05:57.119 SO libspdk_blobfs_bdev.so.6.0 00:05:57.119 LIB libspdk_bdev_error.a 00:05:57.119 SO libspdk_sock_uring.so.5.0 00:05:57.119 SO libspdk_bdev_error.so.6.0 00:05:57.119 CC module/bdev/malloc/bdev_malloc_rpc.o 00:05:57.119 CC module/bdev/null/bdev_null_rpc.o 00:05:57.119 SYMLINK libspdk_blobfs_bdev.so 00:05:57.119 SYMLINK libspdk_sock_uring.so 00:05:57.119 CC module/bdev/lvol/vbdev_lvol_rpc.o 00:05:57.119 CC module/bdev/nvme/bdev_nvme_rpc.o 00:05:57.119 SYMLINK libspdk_bdev_error.so 00:05:57.119 CC module/bdev/nvme/nvme_rpc.o 00:05:57.119 LIB libspdk_bdev_gpt.a 00:05:57.119 CC module/bdev/nvme/bdev_mdns_client.o 00:05:57.119 SO libspdk_bdev_gpt.so.6.0 00:05:57.389 SYMLINK libspdk_bdev_gpt.so 00:05:57.389 LIB libspdk_bdev_passthru.a 00:05:57.389 LIB libspdk_bdev_malloc.a 00:05:57.389 SO libspdk_bdev_passthru.so.6.0 00:05:57.389 LIB libspdk_bdev_null.a 00:05:57.389 SO libspdk_bdev_malloc.so.6.0 00:05:57.389 SO libspdk_bdev_null.so.6.0 00:05:57.389 SYMLINK libspdk_bdev_passthru.so 00:05:57.389 SYMLINK libspdk_bdev_malloc.so 00:05:57.389 SYMLINK libspdk_bdev_null.so 00:05:57.389 CC module/bdev/split/vbdev_split.o 00:05:57.389 CC module/bdev/raid/bdev_raid.o 00:05:57.389 CC module/bdev/nvme/vbdev_opal.o 00:05:57.663 LIB libspdk_bdev_lvol.a 00:05:57.663 CC 
module/bdev/zone_block/vbdev_zone_block.o 00:05:57.663 SO libspdk_bdev_lvol.so.6.0 00:05:57.663 CC module/bdev/uring/bdev_uring.o 00:05:57.663 CC module/bdev/aio/bdev_aio.o 00:05:57.663 CC module/bdev/ftl/bdev_ftl.o 00:05:57.663 SYMLINK libspdk_bdev_lvol.so 00:05:57.663 CC module/bdev/ftl/bdev_ftl_rpc.o 00:05:57.663 CC module/bdev/split/vbdev_split_rpc.o 00:05:57.921 CC module/bdev/uring/bdev_uring_rpc.o 00:05:57.921 LIB libspdk_bdev_split.a 00:05:57.921 CC module/bdev/iscsi/bdev_iscsi.o 00:05:57.921 CC module/bdev/zone_block/vbdev_zone_block_rpc.o 00:05:57.921 SO libspdk_bdev_split.so.6.0 00:05:57.921 CC module/bdev/aio/bdev_aio_rpc.o 00:05:57.921 LIB libspdk_bdev_ftl.a 00:05:57.921 CC module/bdev/virtio/bdev_virtio_scsi.o 00:05:57.921 SO libspdk_bdev_ftl.so.6.0 00:05:57.921 CC module/bdev/virtio/bdev_virtio_blk.o 00:05:58.180 SYMLINK libspdk_bdev_ftl.so 00:05:58.180 SYMLINK libspdk_bdev_split.so 00:05:58.180 CC module/bdev/virtio/bdev_virtio_rpc.o 00:05:58.180 CC module/bdev/iscsi/bdev_iscsi_rpc.o 00:05:58.180 LIB libspdk_bdev_uring.a 00:05:58.180 LIB libspdk_bdev_aio.a 00:05:58.180 SO libspdk_bdev_uring.so.6.0 00:05:58.180 LIB libspdk_bdev_zone_block.a 00:05:58.180 SO libspdk_bdev_zone_block.so.6.0 00:05:58.180 SO libspdk_bdev_aio.so.6.0 00:05:58.180 SYMLINK libspdk_bdev_uring.so 00:05:58.180 CC module/bdev/nvme/vbdev_opal_rpc.o 00:05:58.180 CC module/bdev/nvme/bdev_nvme_cuse_rpc.o 00:05:58.180 SYMLINK libspdk_bdev_zone_block.so 00:05:58.180 LIB libspdk_bdev_iscsi.a 00:05:58.180 SYMLINK libspdk_bdev_aio.so 00:05:58.180 CC module/bdev/raid/bdev_raid_rpc.o 00:05:58.180 CC module/bdev/raid/bdev_raid_sb.o 00:05:58.439 CC module/bdev/raid/raid0.o 00:05:58.439 SO libspdk_bdev_iscsi.so.6.0 00:05:58.439 CC module/bdev/raid/raid1.o 00:05:58.439 SYMLINK libspdk_bdev_iscsi.so 00:05:58.439 CC module/bdev/raid/concat.o 00:05:58.439 LIB libspdk_bdev_virtio.a 00:05:58.439 SO libspdk_bdev_virtio.so.6.0 00:05:58.697 SYMLINK libspdk_bdev_virtio.so 00:05:58.697 LIB libspdk_bdev_raid.a 00:05:58.697 SO libspdk_bdev_raid.so.6.0 00:05:58.697 SYMLINK libspdk_bdev_raid.so 00:05:59.632 LIB libspdk_bdev_nvme.a 00:05:59.632 SO libspdk_bdev_nvme.so.7.1 00:05:59.891 SYMLINK libspdk_bdev_nvme.so 00:06:00.458 CC module/event/subsystems/vmd/vmd.o 00:06:00.458 CC module/event/subsystems/iobuf/iobuf.o 00:06:00.458 CC module/event/subsystems/iobuf/iobuf_rpc.o 00:06:00.458 CC module/event/subsystems/vmd/vmd_rpc.o 00:06:00.458 CC module/event/subsystems/sock/sock.o 00:06:00.458 CC module/event/subsystems/vhost_blk/vhost_blk.o 00:06:00.458 CC module/event/subsystems/scheduler/scheduler.o 00:06:00.458 CC module/event/subsystems/fsdev/fsdev.o 00:06:00.458 CC module/event/subsystems/keyring/keyring.o 00:06:00.458 LIB libspdk_event_sock.a 00:06:00.458 LIB libspdk_event_vhost_blk.a 00:06:00.458 LIB libspdk_event_vmd.a 00:06:00.458 LIB libspdk_event_scheduler.a 00:06:00.458 LIB libspdk_event_fsdev.a 00:06:00.458 LIB libspdk_event_iobuf.a 00:06:00.458 LIB libspdk_event_keyring.a 00:06:00.458 SO libspdk_event_sock.so.5.0 00:06:00.458 SO libspdk_event_vhost_blk.so.3.0 00:06:00.458 SO libspdk_event_scheduler.so.4.0 00:06:00.458 SO libspdk_event_fsdev.so.1.0 00:06:00.458 SO libspdk_event_vmd.so.6.0 00:06:00.458 SO libspdk_event_iobuf.so.3.0 00:06:00.458 SO libspdk_event_keyring.so.1.0 00:06:00.458 SYMLINK libspdk_event_vhost_blk.so 00:06:00.458 SYMLINK libspdk_event_sock.so 00:06:00.458 SYMLINK libspdk_event_scheduler.so 00:06:00.458 SYMLINK libspdk_event_fsdev.so 00:06:00.458 SYMLINK libspdk_event_keyring.so 00:06:00.458 SYMLINK 
libspdk_event_vmd.so 00:06:00.716 SYMLINK libspdk_event_iobuf.so 00:06:00.975 CC module/event/subsystems/accel/accel.o 00:06:00.975 LIB libspdk_event_accel.a 00:06:01.234 SO libspdk_event_accel.so.6.0 00:06:01.234 SYMLINK libspdk_event_accel.so 00:06:01.493 CC module/event/subsystems/bdev/bdev.o 00:06:01.752 LIB libspdk_event_bdev.a 00:06:01.752 SO libspdk_event_bdev.so.6.0 00:06:01.752 SYMLINK libspdk_event_bdev.so 00:06:02.011 CC module/event/subsystems/nvmf/nvmf_rpc.o 00:06:02.011 CC module/event/subsystems/nvmf/nvmf_tgt.o 00:06:02.011 CC module/event/subsystems/ublk/ublk.o 00:06:02.011 CC module/event/subsystems/scsi/scsi.o 00:06:02.011 CC module/event/subsystems/nbd/nbd.o 00:06:02.269 LIB libspdk_event_ublk.a 00:06:02.269 LIB libspdk_event_nbd.a 00:06:02.269 LIB libspdk_event_scsi.a 00:06:02.269 SO libspdk_event_ublk.so.3.0 00:06:02.269 SO libspdk_event_nbd.so.6.0 00:06:02.269 SO libspdk_event_scsi.so.6.0 00:06:02.270 SYMLINK libspdk_event_ublk.so 00:06:02.270 SYMLINK libspdk_event_nbd.so 00:06:02.270 LIB libspdk_event_nvmf.a 00:06:02.270 SYMLINK libspdk_event_scsi.so 00:06:02.528 SO libspdk_event_nvmf.so.6.0 00:06:02.528 SYMLINK libspdk_event_nvmf.so 00:06:02.528 CC module/event/subsystems/iscsi/iscsi.o 00:06:02.528 CC module/event/subsystems/vhost_scsi/vhost_scsi.o 00:06:02.788 LIB libspdk_event_vhost_scsi.a 00:06:02.788 LIB libspdk_event_iscsi.a 00:06:02.788 SO libspdk_event_vhost_scsi.so.3.0 00:06:02.788 SO libspdk_event_iscsi.so.6.0 00:06:03.047 SYMLINK libspdk_event_vhost_scsi.so 00:06:03.047 SYMLINK libspdk_event_iscsi.so 00:06:03.047 SO libspdk.so.6.0 00:06:03.047 SYMLINK libspdk.so 00:06:03.306 CC app/trace_record/trace_record.o 00:06:03.306 CXX app/trace/trace.o 00:06:03.306 TEST_HEADER include/spdk/accel.h 00:06:03.306 TEST_HEADER include/spdk/accel_module.h 00:06:03.306 TEST_HEADER include/spdk/assert.h 00:06:03.306 TEST_HEADER include/spdk/barrier.h 00:06:03.306 TEST_HEADER include/spdk/base64.h 00:06:03.306 TEST_HEADER include/spdk/bdev.h 00:06:03.306 CC examples/interrupt_tgt/interrupt_tgt.o 00:06:03.565 TEST_HEADER include/spdk/bdev_module.h 00:06:03.565 TEST_HEADER include/spdk/bdev_zone.h 00:06:03.565 TEST_HEADER include/spdk/bit_array.h 00:06:03.565 TEST_HEADER include/spdk/bit_pool.h 00:06:03.565 TEST_HEADER include/spdk/blob_bdev.h 00:06:03.565 TEST_HEADER include/spdk/blobfs_bdev.h 00:06:03.565 TEST_HEADER include/spdk/blobfs.h 00:06:03.565 TEST_HEADER include/spdk/blob.h 00:06:03.565 TEST_HEADER include/spdk/conf.h 00:06:03.565 TEST_HEADER include/spdk/config.h 00:06:03.565 CC app/nvmf_tgt/nvmf_main.o 00:06:03.565 TEST_HEADER include/spdk/cpuset.h 00:06:03.565 TEST_HEADER include/spdk/crc16.h 00:06:03.565 TEST_HEADER include/spdk/crc32.h 00:06:03.565 TEST_HEADER include/spdk/crc64.h 00:06:03.565 TEST_HEADER include/spdk/dif.h 00:06:03.565 TEST_HEADER include/spdk/dma.h 00:06:03.565 TEST_HEADER include/spdk/endian.h 00:06:03.565 TEST_HEADER include/spdk/env_dpdk.h 00:06:03.565 TEST_HEADER include/spdk/env.h 00:06:03.565 TEST_HEADER include/spdk/event.h 00:06:03.565 TEST_HEADER include/spdk/fd_group.h 00:06:03.565 TEST_HEADER include/spdk/fd.h 00:06:03.565 TEST_HEADER include/spdk/file.h 00:06:03.565 TEST_HEADER include/spdk/fsdev.h 00:06:03.565 CC examples/ioat/perf/perf.o 00:06:03.565 TEST_HEADER include/spdk/fsdev_module.h 00:06:03.565 TEST_HEADER include/spdk/ftl.h 00:06:03.565 CC examples/util/zipf/zipf.o 00:06:03.565 TEST_HEADER include/spdk/fuse_dispatcher.h 00:06:03.565 TEST_HEADER include/spdk/gpt_spec.h 00:06:03.565 TEST_HEADER include/spdk/hexlify.h 
00:06:03.565 TEST_HEADER include/spdk/histogram_data.h 00:06:03.565 TEST_HEADER include/spdk/idxd.h 00:06:03.565 TEST_HEADER include/spdk/idxd_spec.h 00:06:03.565 CC test/thread/poller_perf/poller_perf.o 00:06:03.565 TEST_HEADER include/spdk/init.h 00:06:03.565 TEST_HEADER include/spdk/ioat.h 00:06:03.565 TEST_HEADER include/spdk/ioat_spec.h 00:06:03.565 TEST_HEADER include/spdk/iscsi_spec.h 00:06:03.565 TEST_HEADER include/spdk/json.h 00:06:03.565 TEST_HEADER include/spdk/jsonrpc.h 00:06:03.565 TEST_HEADER include/spdk/keyring.h 00:06:03.565 TEST_HEADER include/spdk/keyring_module.h 00:06:03.565 TEST_HEADER include/spdk/likely.h 00:06:03.565 TEST_HEADER include/spdk/log.h 00:06:03.565 CC test/dma/test_dma/test_dma.o 00:06:03.565 TEST_HEADER include/spdk/lvol.h 00:06:03.565 TEST_HEADER include/spdk/md5.h 00:06:03.565 TEST_HEADER include/spdk/memory.h 00:06:03.565 TEST_HEADER include/spdk/mmio.h 00:06:03.565 TEST_HEADER include/spdk/nbd.h 00:06:03.565 TEST_HEADER include/spdk/net.h 00:06:03.565 TEST_HEADER include/spdk/notify.h 00:06:03.565 TEST_HEADER include/spdk/nvme.h 00:06:03.565 CC test/app/bdev_svc/bdev_svc.o 00:06:03.565 TEST_HEADER include/spdk/nvme_intel.h 00:06:03.565 TEST_HEADER include/spdk/nvme_ocssd.h 00:06:03.565 TEST_HEADER include/spdk/nvme_ocssd_spec.h 00:06:03.565 TEST_HEADER include/spdk/nvme_spec.h 00:06:03.565 TEST_HEADER include/spdk/nvme_zns.h 00:06:03.565 TEST_HEADER include/spdk/nvmf_cmd.h 00:06:03.565 TEST_HEADER include/spdk/nvmf_fc_spec.h 00:06:03.565 TEST_HEADER include/spdk/nvmf.h 00:06:03.565 TEST_HEADER include/spdk/nvmf_spec.h 00:06:03.565 TEST_HEADER include/spdk/nvmf_transport.h 00:06:03.565 TEST_HEADER include/spdk/opal.h 00:06:03.565 TEST_HEADER include/spdk/opal_spec.h 00:06:03.565 TEST_HEADER include/spdk/pci_ids.h 00:06:03.565 TEST_HEADER include/spdk/pipe.h 00:06:03.565 TEST_HEADER include/spdk/queue.h 00:06:03.565 TEST_HEADER include/spdk/reduce.h 00:06:03.565 TEST_HEADER include/spdk/rpc.h 00:06:03.565 LINK nvmf_tgt 00:06:03.565 TEST_HEADER include/spdk/scheduler.h 00:06:03.565 TEST_HEADER include/spdk/scsi.h 00:06:03.565 TEST_HEADER include/spdk/scsi_spec.h 00:06:03.565 TEST_HEADER include/spdk/sock.h 00:06:03.824 TEST_HEADER include/spdk/stdinc.h 00:06:03.824 TEST_HEADER include/spdk/string.h 00:06:03.824 TEST_HEADER include/spdk/thread.h 00:06:03.824 TEST_HEADER include/spdk/trace.h 00:06:03.824 LINK interrupt_tgt 00:06:03.824 TEST_HEADER include/spdk/trace_parser.h 00:06:03.824 TEST_HEADER include/spdk/tree.h 00:06:03.824 TEST_HEADER include/spdk/ublk.h 00:06:03.824 TEST_HEADER include/spdk/util.h 00:06:03.824 TEST_HEADER include/spdk/uuid.h 00:06:03.824 TEST_HEADER include/spdk/version.h 00:06:03.824 LINK spdk_trace_record 00:06:03.824 LINK zipf 00:06:03.824 TEST_HEADER include/spdk/vfio_user_pci.h 00:06:03.824 TEST_HEADER include/spdk/vfio_user_spec.h 00:06:03.824 TEST_HEADER include/spdk/vhost.h 00:06:03.824 TEST_HEADER include/spdk/vmd.h 00:06:03.824 TEST_HEADER include/spdk/xor.h 00:06:03.825 TEST_HEADER include/spdk/zipf.h 00:06:03.825 CXX test/cpp_headers/accel.o 00:06:03.825 LINK poller_perf 00:06:03.825 LINK ioat_perf 00:06:03.825 LINK bdev_svc 00:06:03.825 LINK spdk_trace 00:06:03.825 CXX test/cpp_headers/accel_module.o 00:06:04.083 CC test/app/histogram_perf/histogram_perf.o 00:06:04.083 CC examples/ioat/verify/verify.o 00:06:04.083 CC app/iscsi_tgt/iscsi_tgt.o 00:06:04.083 CC test/app/fuzz/nvme_fuzz/nvme_fuzz.o 00:06:04.083 CXX test/cpp_headers/assert.o 00:06:04.083 LINK test_dma 00:06:04.083 CC test/env/vtophys/vtophys.o 
00:06:04.083 CC test/env/env_dpdk_post_init/env_dpdk_post_init.o 00:06:04.083 CC examples/thread/thread/thread_ex.o 00:06:04.341 CC test/env/mem_callbacks/mem_callbacks.o 00:06:04.341 LINK histogram_perf 00:06:04.341 CXX test/cpp_headers/barrier.o 00:06:04.341 LINK verify 00:06:04.341 LINK iscsi_tgt 00:06:04.342 LINK vtophys 00:06:04.342 LINK env_dpdk_post_init 00:06:04.600 CC test/env/memory/memory_ut.o 00:06:04.600 CXX test/cpp_headers/base64.o 00:06:04.600 LINK thread 00:06:04.600 LINK nvme_fuzz 00:06:04.600 CC test/env/pci/pci_ut.o 00:06:04.600 CC test/app/fuzz/iscsi_fuzz/iscsi_fuzz.o 00:06:04.600 CC test/app/fuzz/vhost_fuzz/vhost_fuzz_rpc.o 00:06:04.600 CXX test/cpp_headers/bdev.o 00:06:04.600 CC app/spdk_lspci/spdk_lspci.o 00:06:04.600 CC app/spdk_tgt/spdk_tgt.o 00:06:04.858 CC test/app/fuzz/vhost_fuzz/vhost_fuzz.o 00:06:04.858 LINK spdk_lspci 00:06:04.858 CXX test/cpp_headers/bdev_module.o 00:06:04.858 CC test/app/jsoncat/jsoncat.o 00:06:04.858 CC examples/sock/hello_world/hello_sock.o 00:06:04.858 LINK mem_callbacks 00:06:04.858 LINK spdk_tgt 00:06:04.858 LINK pci_ut 00:06:04.858 CXX test/cpp_headers/bdev_zone.o 00:06:05.119 LINK jsoncat 00:06:05.119 CC test/app/stub/stub.o 00:06:05.119 LINK hello_sock 00:06:05.119 CXX test/cpp_headers/bit_array.o 00:06:05.119 LINK vhost_fuzz 00:06:05.377 CC examples/vmd/lsvmd/lsvmd.o 00:06:05.377 CC app/spdk_nvme_perf/perf.o 00:06:05.377 CC app/spdk_nvme_identify/identify.o 00:06:05.377 CC examples/vmd/led/led.o 00:06:05.377 LINK stub 00:06:05.377 CXX test/cpp_headers/bit_pool.o 00:06:05.377 LINK lsvmd 00:06:05.377 LINK led 00:06:05.635 CC examples/idxd/perf/perf.o 00:06:05.635 CXX test/cpp_headers/blob_bdev.o 00:06:05.635 CC examples/fsdev/hello_world/hello_fsdev.o 00:06:05.635 CC test/event/event_perf/event_perf.o 00:06:05.635 CC app/spdk_nvme_discover/discovery_aer.o 00:06:05.635 LINK memory_ut 00:06:05.894 CXX test/cpp_headers/blobfs_bdev.o 00:06:05.894 CC examples/accel/perf/accel_perf.o 00:06:05.894 LINK event_perf 00:06:05.894 LINK idxd_perf 00:06:05.894 LINK hello_fsdev 00:06:05.894 LINK spdk_nvme_discover 00:06:05.894 CXX test/cpp_headers/blobfs.o 00:06:06.181 LINK spdk_nvme_identify 00:06:06.181 CC test/event/reactor/reactor.o 00:06:06.181 CXX test/cpp_headers/blob.o 00:06:06.181 CC app/spdk_top/spdk_top.o 00:06:06.181 LINK spdk_nvme_perf 00:06:06.181 CC examples/blob/hello_world/hello_blob.o 00:06:06.181 CC test/event/reactor_perf/reactor_perf.o 00:06:06.181 LINK iscsi_fuzz 00:06:06.181 LINK reactor 00:06:06.444 CXX test/cpp_headers/conf.o 00:06:06.444 LINK accel_perf 00:06:06.444 CC test/event/app_repeat/app_repeat.o 00:06:06.444 LINK reactor_perf 00:06:06.444 CC test/event/scheduler/scheduler.o 00:06:06.444 LINK hello_blob 00:06:06.444 CXX test/cpp_headers/config.o 00:06:06.444 CXX test/cpp_headers/cpuset.o 00:06:06.444 LINK app_repeat 00:06:06.444 CC examples/nvme/hello_world/hello_world.o 00:06:06.444 CC test/rpc_client/rpc_client_test.o 00:06:06.703 CC test/nvme/reset/reset.o 00:06:06.703 CC test/nvme/aer/aer.o 00:06:06.703 CC test/nvme/sgl/sgl.o 00:06:06.703 CXX test/cpp_headers/crc16.o 00:06:06.703 LINK scheduler 00:06:06.703 LINK rpc_client_test 00:06:06.703 CC test/nvme/e2edp/nvme_dp.o 00:06:06.703 LINK hello_world 00:06:06.703 CC examples/blob/cli/blobcli.o 00:06:06.962 CXX test/cpp_headers/crc32.o 00:06:06.962 LINK reset 00:06:06.962 LINK aer 00:06:06.962 LINK sgl 00:06:06.962 CC test/nvme/overhead/overhead.o 00:06:06.962 CC test/nvme/err_injection/err_injection.o 00:06:06.962 CXX test/cpp_headers/crc64.o 00:06:06.962 CC 
examples/nvme/reconnect/reconnect.o 00:06:06.962 LINK nvme_dp 00:06:07.220 LINK spdk_top 00:06:07.220 CXX test/cpp_headers/dif.o 00:06:07.220 LINK err_injection 00:06:07.220 LINK overhead 00:06:07.220 CC test/accel/dif/dif.o 00:06:07.220 CC examples/nvme/nvme_manage/nvme_manage.o 00:06:07.220 CC test/blobfs/mkfs/mkfs.o 00:06:07.220 LINK blobcli 00:06:07.479 CXX test/cpp_headers/dma.o 00:06:07.479 CC test/lvol/esnap/esnap.o 00:06:07.479 CC app/vhost/vhost.o 00:06:07.479 LINK reconnect 00:06:07.479 CC examples/nvme/arbitration/arbitration.o 00:06:07.479 CC test/nvme/startup/startup.o 00:06:07.479 LINK mkfs 00:06:07.479 CXX test/cpp_headers/endian.o 00:06:07.737 LINK vhost 00:06:07.737 CC test/nvme/reserve/reserve.o 00:06:07.737 CC test/nvme/simple_copy/simple_copy.o 00:06:07.737 LINK startup 00:06:07.737 CXX test/cpp_headers/env_dpdk.o 00:06:07.737 LINK nvme_manage 00:06:07.737 CC examples/nvme/hotplug/hotplug.o 00:06:07.994 LINK reserve 00:06:07.994 LINK arbitration 00:06:07.994 CXX test/cpp_headers/env.o 00:06:07.994 LINK simple_copy 00:06:07.994 LINK dif 00:06:07.994 CC app/spdk_dd/spdk_dd.o 00:06:07.994 CC examples/nvme/cmb_copy/cmb_copy.o 00:06:07.994 CC examples/nvme/abort/abort.o 00:06:07.994 LINK hotplug 00:06:07.994 CC test/nvme/connect_stress/connect_stress.o 00:06:07.994 CC examples/nvme/pmr_persistence/pmr_persistence.o 00:06:08.252 CXX test/cpp_headers/event.o 00:06:08.252 CC test/nvme/boot_partition/boot_partition.o 00:06:08.252 LINK cmb_copy 00:06:08.252 CXX test/cpp_headers/fd_group.o 00:06:08.252 LINK connect_stress 00:06:08.252 LINK pmr_persistence 00:06:08.510 LINK boot_partition 00:06:08.510 CC examples/bdev/hello_world/hello_bdev.o 00:06:08.510 LINK spdk_dd 00:06:08.510 CC test/nvme/compliance/nvme_compliance.o 00:06:08.510 LINK abort 00:06:08.510 CXX test/cpp_headers/fd.o 00:06:08.510 CC test/bdev/bdevio/bdevio.o 00:06:08.510 CC test/nvme/fused_ordering/fused_ordering.o 00:06:08.510 CC test/nvme/doorbell_aers/doorbell_aers.o 00:06:08.767 CC test/nvme/fdp/fdp.o 00:06:08.767 CXX test/cpp_headers/file.o 00:06:08.767 LINK hello_bdev 00:06:08.767 CC test/nvme/cuse/cuse.o 00:06:08.767 LINK fused_ordering 00:06:08.767 LINK doorbell_aers 00:06:08.767 CC app/fio/nvme/fio_plugin.o 00:06:08.767 LINK nvme_compliance 00:06:08.767 CXX test/cpp_headers/fsdev.o 00:06:09.024 LINK bdevio 00:06:09.024 CXX test/cpp_headers/fsdev_module.o 00:06:09.024 LINK fdp 00:06:09.024 CXX test/cpp_headers/ftl.o 00:06:09.024 CC examples/bdev/bdevperf/bdevperf.o 00:06:09.024 CXX test/cpp_headers/fuse_dispatcher.o 00:06:09.024 CXX test/cpp_headers/gpt_spec.o 00:06:09.024 CC app/fio/bdev/fio_plugin.o 00:06:09.283 CXX test/cpp_headers/hexlify.o 00:06:09.283 CXX test/cpp_headers/histogram_data.o 00:06:09.283 CXX test/cpp_headers/idxd.o 00:06:09.283 CXX test/cpp_headers/idxd_spec.o 00:06:09.283 CXX test/cpp_headers/init.o 00:06:09.283 CXX test/cpp_headers/ioat.o 00:06:09.283 CXX test/cpp_headers/ioat_spec.o 00:06:09.542 CXX test/cpp_headers/iscsi_spec.o 00:06:09.542 CXX test/cpp_headers/json.o 00:06:09.542 LINK spdk_nvme 00:06:09.542 CXX test/cpp_headers/jsonrpc.o 00:06:09.542 CXX test/cpp_headers/keyring.o 00:06:09.542 CXX test/cpp_headers/keyring_module.o 00:06:09.542 CXX test/cpp_headers/likely.o 00:06:09.542 CXX test/cpp_headers/log.o 00:06:09.542 CXX test/cpp_headers/lvol.o 00:06:09.542 LINK spdk_bdev 00:06:09.542 CXX test/cpp_headers/md5.o 00:06:09.801 CXX test/cpp_headers/memory.o 00:06:09.801 CXX test/cpp_headers/mmio.o 00:06:09.801 CXX test/cpp_headers/nbd.o 00:06:09.801 CXX test/cpp_headers/net.o 
00:06:09.801 CXX test/cpp_headers/notify.o 00:06:09.801 CXX test/cpp_headers/nvme.o 00:06:09.801 CXX test/cpp_headers/nvme_intel.o 00:06:09.801 LINK bdevperf 00:06:09.801 CXX test/cpp_headers/nvme_ocssd.o 00:06:09.801 CXX test/cpp_headers/nvme_ocssd_spec.o 00:06:09.801 CXX test/cpp_headers/nvme_spec.o 00:06:10.058 CXX test/cpp_headers/nvme_zns.o 00:06:10.059 CXX test/cpp_headers/nvmf_cmd.o 00:06:10.059 CXX test/cpp_headers/nvmf_fc_spec.o 00:06:10.059 CXX test/cpp_headers/nvmf.o 00:06:10.059 CXX test/cpp_headers/nvmf_spec.o 00:06:10.059 CXX test/cpp_headers/nvmf_transport.o 00:06:10.059 CXX test/cpp_headers/opal.o 00:06:10.059 CXX test/cpp_headers/opal_spec.o 00:06:10.059 LINK cuse 00:06:10.059 CXX test/cpp_headers/pci_ids.o 00:06:10.318 CXX test/cpp_headers/pipe.o 00:06:10.318 CXX test/cpp_headers/queue.o 00:06:10.318 CXX test/cpp_headers/reduce.o 00:06:10.318 CXX test/cpp_headers/rpc.o 00:06:10.318 CC examples/nvmf/nvmf/nvmf.o 00:06:10.318 CXX test/cpp_headers/scheduler.o 00:06:10.318 CXX test/cpp_headers/scsi.o 00:06:10.318 CXX test/cpp_headers/scsi_spec.o 00:06:10.318 CXX test/cpp_headers/sock.o 00:06:10.318 CXX test/cpp_headers/stdinc.o 00:06:10.318 CXX test/cpp_headers/string.o 00:06:10.318 CXX test/cpp_headers/thread.o 00:06:10.318 CXX test/cpp_headers/trace.o 00:06:10.577 CXX test/cpp_headers/trace_parser.o 00:06:10.577 CXX test/cpp_headers/tree.o 00:06:10.577 CXX test/cpp_headers/ublk.o 00:06:10.577 CXX test/cpp_headers/util.o 00:06:10.577 CXX test/cpp_headers/uuid.o 00:06:10.577 CXX test/cpp_headers/version.o 00:06:10.577 CXX test/cpp_headers/vfio_user_pci.o 00:06:10.577 CXX test/cpp_headers/vfio_user_spec.o 00:06:10.577 CXX test/cpp_headers/vhost.o 00:06:10.577 LINK nvmf 00:06:10.577 CXX test/cpp_headers/vmd.o 00:06:10.577 CXX test/cpp_headers/xor.o 00:06:10.836 CXX test/cpp_headers/zipf.o 00:06:12.737 LINK esnap 00:06:12.995 00:06:12.995 real 1m24.792s 00:06:12.995 user 6m50.710s 00:06:12.995 sys 1m17.429s 00:06:12.995 11:36:43 make -- common/autotest_common.sh@1130 -- $ xtrace_disable 00:06:12.995 11:36:43 make -- common/autotest_common.sh@10 -- $ set +x 00:06:12.995 ************************************ 00:06:12.995 END TEST make 00:06:12.995 ************************************ 00:06:12.995 11:36:43 -- spdk/autobuild.sh@1 -- $ stop_monitor_resources 00:06:12.996 11:36:43 -- pm/common@29 -- $ signal_monitor_resources TERM 00:06:12.996 11:36:43 -- pm/common@40 -- $ local monitor pid pids signal=TERM 00:06:12.996 11:36:43 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:06:12.996 11:36:43 -- pm/common@43 -- $ [[ -e /home/vagrant/spdk_repo/spdk/../output/power/collect-cpu-load.pid ]] 00:06:12.996 11:36:43 -- pm/common@44 -- $ pid=5983 00:06:12.996 11:36:43 -- pm/common@50 -- $ kill -TERM 5983 00:06:12.996 11:36:43 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:06:12.996 11:36:43 -- pm/common@43 -- $ [[ -e /home/vagrant/spdk_repo/spdk/../output/power/collect-vmstat.pid ]] 00:06:12.996 11:36:43 -- pm/common@44 -- $ pid=5985 00:06:12.996 11:36:43 -- pm/common@50 -- $ kill -TERM 5985 00:06:12.996 11:36:43 -- spdk/autorun.sh@26 -- $ (( SPDK_TEST_UNITTEST == 1 || SPDK_RUN_FUNCTIONAL_TEST == 1 )) 00:06:12.996 11:36:43 -- spdk/autorun.sh@27 -- $ sudo -E /home/vagrant/spdk_repo/spdk/autotest.sh /home/vagrant/spdk_repo/autorun-spdk.conf 00:06:13.255 11:36:43 -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:06:13.255 11:36:43 -- common/autotest_common.sh@1693 -- # lcov --version 00:06:13.255 11:36:43 -- common/autotest_common.sh@1693 -- # awk '{print 
$NF}' 00:06:13.255 11:36:43 -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:06:13.255 11:36:43 -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:06:13.255 11:36:43 -- scripts/common.sh@333 -- # local ver1 ver1_l 00:06:13.255 11:36:43 -- scripts/common.sh@334 -- # local ver2 ver2_l 00:06:13.255 11:36:43 -- scripts/common.sh@336 -- # IFS=.-: 00:06:13.255 11:36:43 -- scripts/common.sh@336 -- # read -ra ver1 00:06:13.255 11:36:43 -- scripts/common.sh@337 -- # IFS=.-: 00:06:13.255 11:36:43 -- scripts/common.sh@337 -- # read -ra ver2 00:06:13.255 11:36:43 -- scripts/common.sh@338 -- # local 'op=<' 00:06:13.255 11:36:43 -- scripts/common.sh@340 -- # ver1_l=2 00:06:13.255 11:36:43 -- scripts/common.sh@341 -- # ver2_l=1 00:06:13.255 11:36:43 -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:06:13.255 11:36:43 -- scripts/common.sh@344 -- # case "$op" in 00:06:13.255 11:36:43 -- scripts/common.sh@345 -- # : 1 00:06:13.255 11:36:43 -- scripts/common.sh@364 -- # (( v = 0 )) 00:06:13.255 11:36:43 -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:06:13.255 11:36:43 -- scripts/common.sh@365 -- # decimal 1 00:06:13.255 11:36:43 -- scripts/common.sh@353 -- # local d=1 00:06:13.255 11:36:43 -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:06:13.255 11:36:43 -- scripts/common.sh@355 -- # echo 1 00:06:13.255 11:36:43 -- scripts/common.sh@365 -- # ver1[v]=1 00:06:13.255 11:36:43 -- scripts/common.sh@366 -- # decimal 2 00:06:13.255 11:36:43 -- scripts/common.sh@353 -- # local d=2 00:06:13.255 11:36:43 -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:06:13.255 11:36:43 -- scripts/common.sh@355 -- # echo 2 00:06:13.255 11:36:43 -- scripts/common.sh@366 -- # ver2[v]=2 00:06:13.256 11:36:43 -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:06:13.256 11:36:43 -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:06:13.256 11:36:43 -- scripts/common.sh@368 -- # return 0 00:06:13.256 11:36:43 -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:06:13.256 11:36:43 -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:06:13.256 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:13.256 --rc genhtml_branch_coverage=1 00:06:13.256 --rc genhtml_function_coverage=1 00:06:13.256 --rc genhtml_legend=1 00:06:13.256 --rc geninfo_all_blocks=1 00:06:13.256 --rc geninfo_unexecuted_blocks=1 00:06:13.256 00:06:13.256 ' 00:06:13.256 11:36:43 -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:06:13.256 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:13.256 --rc genhtml_branch_coverage=1 00:06:13.256 --rc genhtml_function_coverage=1 00:06:13.256 --rc genhtml_legend=1 00:06:13.256 --rc geninfo_all_blocks=1 00:06:13.256 --rc geninfo_unexecuted_blocks=1 00:06:13.256 00:06:13.256 ' 00:06:13.256 11:36:43 -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:06:13.256 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:13.256 --rc genhtml_branch_coverage=1 00:06:13.256 --rc genhtml_function_coverage=1 00:06:13.256 --rc genhtml_legend=1 00:06:13.256 --rc geninfo_all_blocks=1 00:06:13.256 --rc geninfo_unexecuted_blocks=1 00:06:13.256 00:06:13.256 ' 00:06:13.256 11:36:43 -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:06:13.256 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:13.256 --rc genhtml_branch_coverage=1 00:06:13.256 --rc genhtml_function_coverage=1 00:06:13.256 --rc genhtml_legend=1 00:06:13.256 --rc geninfo_all_blocks=1 00:06:13.256 --rc 
geninfo_unexecuted_blocks=1 00:06:13.256 00:06:13.256 ' 00:06:13.256 11:36:43 -- spdk/autotest.sh@25 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:06:13.256 11:36:43 -- nvmf/common.sh@7 -- # uname -s 00:06:13.256 11:36:43 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:06:13.256 11:36:43 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:06:13.256 11:36:43 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:06:13.256 11:36:43 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:06:13.256 11:36:43 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:06:13.256 11:36:43 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:06:13.256 11:36:43 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:06:13.256 11:36:43 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:06:13.256 11:36:43 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:06:13.256 11:36:43 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:06:13.256 11:36:43 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:f820f793-c892-4aa4-a8a4-5ed3fda41d6c 00:06:13.256 11:36:43 -- nvmf/common.sh@18 -- # NVME_HOSTID=f820f793-c892-4aa4-a8a4-5ed3fda41d6c 00:06:13.256 11:36:43 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:06:13.256 11:36:43 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:06:13.256 11:36:43 -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:06:13.256 11:36:43 -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:06:13.256 11:36:43 -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:06:13.256 11:36:43 -- scripts/common.sh@15 -- # shopt -s extglob 00:06:13.256 11:36:43 -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:06:13.256 11:36:43 -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:06:13.256 11:36:43 -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:06:13.256 11:36:43 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:13.256 11:36:43 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:13.256 11:36:43 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:13.256 11:36:43 -- paths/export.sh@5 -- # export PATH 00:06:13.256 11:36:43 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:13.256 11:36:43 -- nvmf/common.sh@51 -- # : 0 00:06:13.256 11:36:43 -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:06:13.256 11:36:43 -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:06:13.256 11:36:43 -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:06:13.256 11:36:43 -- nvmf/common.sh@29 -- 
# NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:06:13.256 11:36:43 -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:06:13.256 11:36:43 -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:06:13.256 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:06:13.256 11:36:43 -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:06:13.256 11:36:43 -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:06:13.256 11:36:43 -- nvmf/common.sh@55 -- # have_pci_nics=0 00:06:13.256 11:36:43 -- spdk/autotest.sh@27 -- # '[' 0 -ne 0 ']' 00:06:13.256 11:36:43 -- spdk/autotest.sh@32 -- # uname -s 00:06:13.256 11:36:43 -- spdk/autotest.sh@32 -- # '[' Linux = Linux ']' 00:06:13.256 11:36:43 -- spdk/autotest.sh@33 -- # old_core_pattern='|/usr/lib/systemd/systemd-coredump %P %u %g %s %t %c %h' 00:06:13.256 11:36:43 -- spdk/autotest.sh@34 -- # mkdir -p /home/vagrant/spdk_repo/spdk/../output/coredumps 00:06:13.256 11:36:43 -- spdk/autotest.sh@39 -- # echo '|/home/vagrant/spdk_repo/spdk/scripts/core-collector.sh %P %s %t' 00:06:13.256 11:36:43 -- spdk/autotest.sh@40 -- # echo /home/vagrant/spdk_repo/spdk/../output/coredumps 00:06:13.256 11:36:43 -- spdk/autotest.sh@44 -- # modprobe nbd 00:06:13.256 11:36:43 -- spdk/autotest.sh@46 -- # type -P udevadm 00:06:13.256 11:36:43 -- spdk/autotest.sh@46 -- # udevadm=/usr/sbin/udevadm 00:06:13.256 11:36:43 -- spdk/autotest.sh@48 -- # udevadm_pid=68459 00:06:13.256 11:36:43 -- spdk/autotest.sh@47 -- # /usr/sbin/udevadm monitor --property 00:06:13.256 11:36:43 -- spdk/autotest.sh@53 -- # start_monitor_resources 00:06:13.256 11:36:43 -- pm/common@17 -- # local monitor 00:06:13.256 11:36:43 -- pm/common@19 -- # for monitor in "${MONITOR_RESOURCES[@]}" 00:06:13.256 11:36:43 -- pm/common@19 -- # for monitor in "${MONITOR_RESOURCES[@]}" 00:06:13.256 11:36:43 -- pm/common@21 -- # date +%s 00:06:13.256 11:36:43 -- pm/common@25 -- # sleep 1 00:06:13.256 11:36:43 -- pm/common@21 -- # date +%s 00:06:13.256 11:36:43 -- pm/common@21 -- # /home/vagrant/spdk_repo/spdk/scripts/perf/pm/collect-cpu-load -d /home/vagrant/spdk_repo/spdk/../output/power -l -p monitor.autotest.sh.1732793803 00:06:13.256 11:36:43 -- pm/common@21 -- # /home/vagrant/spdk_repo/spdk/scripts/perf/pm/collect-vmstat -d /home/vagrant/spdk_repo/spdk/../output/power -l -p monitor.autotest.sh.1732793803 00:06:13.256 Redirecting to /home/vagrant/spdk_repo/spdk/../output/power/monitor.autotest.sh.1732793803_collect-cpu-load.pm.log 00:06:13.256 Redirecting to /home/vagrant/spdk_repo/spdk/../output/power/monitor.autotest.sh.1732793803_collect-vmstat.pm.log 00:06:14.240 11:36:44 -- spdk/autotest.sh@55 -- # trap 'autotest_cleanup || :; exit 1' SIGINT SIGTERM EXIT 00:06:14.240 11:36:44 -- spdk/autotest.sh@57 -- # timing_enter autotest 00:06:14.240 11:36:44 -- common/autotest_common.sh@726 -- # xtrace_disable 00:06:14.240 11:36:44 -- common/autotest_common.sh@10 -- # set +x 00:06:14.240 11:36:44 -- spdk/autotest.sh@59 -- # create_test_list 00:06:14.240 11:36:44 -- common/autotest_common.sh@752 -- # xtrace_disable 00:06:14.240 11:36:44 -- common/autotest_common.sh@10 -- # set +x 00:06:14.497 11:36:44 -- spdk/autotest.sh@61 -- # dirname /home/vagrant/spdk_repo/spdk/autotest.sh 00:06:14.497 11:36:44 -- spdk/autotest.sh@61 -- # readlink -f /home/vagrant/spdk_repo/spdk 00:06:14.497 11:36:44 -- spdk/autotest.sh@61 -- # src=/home/vagrant/spdk_repo/spdk 00:06:14.497 11:36:44 -- spdk/autotest.sh@62 -- # out=/home/vagrant/spdk_repo/spdk/../output 00:06:14.497 11:36:44 -- spdk/autotest.sh@63 -- # cd 
/home/vagrant/spdk_repo/spdk 00:06:14.497 11:36:44 -- spdk/autotest.sh@65 -- # freebsd_update_contigmem_mod 00:06:14.498 11:36:44 -- common/autotest_common.sh@1457 -- # uname 00:06:14.498 11:36:44 -- common/autotest_common.sh@1457 -- # '[' Linux = FreeBSD ']' 00:06:14.498 11:36:44 -- spdk/autotest.sh@66 -- # freebsd_set_maxsock_buf 00:06:14.498 11:36:44 -- common/autotest_common.sh@1477 -- # uname 00:06:14.498 11:36:44 -- common/autotest_common.sh@1477 -- # [[ Linux = FreeBSD ]] 00:06:14.498 11:36:44 -- spdk/autotest.sh@68 -- # [[ y == y ]] 00:06:14.498 11:36:44 -- spdk/autotest.sh@70 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 --version 00:06:14.498 lcov: LCOV version 1.15 00:06:14.498 11:36:44 -- spdk/autotest.sh@72 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -c --no-external -i -t Baseline -d /home/vagrant/spdk_repo/spdk -o /home/vagrant/spdk_repo/spdk/../output/cov_base.info 00:06:32.581 /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_stubs.gcno:no functions found 00:06:32.581 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_stubs.gcno 00:06:47.468 11:37:16 -- spdk/autotest.sh@76 -- # timing_enter pre_cleanup 00:06:47.468 11:37:16 -- common/autotest_common.sh@726 -- # xtrace_disable 00:06:47.468 11:37:16 -- common/autotest_common.sh@10 -- # set +x 00:06:47.468 11:37:16 -- spdk/autotest.sh@78 -- # rm -f 00:06:47.468 11:37:16 -- spdk/autotest.sh@81 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:06:47.468 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:06:47.468 0000:00:11.0 (1b36 0010): Already using the nvme driver 00:06:47.468 0000:00:10.0 (1b36 0010): Already using the nvme driver 00:06:47.468 11:37:17 -- spdk/autotest.sh@83 -- # get_zoned_devs 00:06:47.468 11:37:17 -- common/autotest_common.sh@1657 -- # zoned_devs=() 00:06:47.468 11:37:17 -- common/autotest_common.sh@1657 -- # local -gA zoned_devs 00:06:47.468 11:37:17 -- common/autotest_common.sh@1658 -- # local nvme bdf 00:06:47.468 11:37:17 -- common/autotest_common.sh@1660 -- # for nvme in /sys/block/nvme* 00:06:47.468 11:37:17 -- common/autotest_common.sh@1661 -- # is_block_zoned nvme0n1 00:06:47.468 11:37:17 -- common/autotest_common.sh@1650 -- # local device=nvme0n1 00:06:47.468 11:37:17 -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:06:47.468 11:37:17 -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:06:47.468 11:37:17 -- common/autotest_common.sh@1660 -- # for nvme in /sys/block/nvme* 00:06:47.468 11:37:17 -- common/autotest_common.sh@1661 -- # is_block_zoned nvme1n1 00:06:47.468 11:37:17 -- common/autotest_common.sh@1650 -- # local device=nvme1n1 00:06:47.468 11:37:17 -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme1n1/queue/zoned ]] 00:06:47.468 11:37:17 -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:06:47.468 11:37:17 -- common/autotest_common.sh@1660 -- # for nvme in /sys/block/nvme* 00:06:47.468 11:37:17 -- common/autotest_common.sh@1661 -- # is_block_zoned nvme1n2 00:06:47.468 11:37:17 -- common/autotest_common.sh@1650 -- # local device=nvme1n2 00:06:47.468 11:37:17 -- 
common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme1n2/queue/zoned ]] 00:06:47.468 11:37:17 -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:06:47.468 11:37:17 -- common/autotest_common.sh@1660 -- # for nvme in /sys/block/nvme* 00:06:47.468 11:37:17 -- common/autotest_common.sh@1661 -- # is_block_zoned nvme1n3 00:06:47.468 11:37:17 -- common/autotest_common.sh@1650 -- # local device=nvme1n3 00:06:47.468 11:37:17 -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme1n3/queue/zoned ]] 00:06:47.468 11:37:17 -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:06:47.468 11:37:17 -- spdk/autotest.sh@85 -- # (( 0 > 0 )) 00:06:47.468 11:37:17 -- spdk/autotest.sh@97 -- # for dev in /dev/nvme*n!(*p*) 00:06:47.468 11:37:17 -- spdk/autotest.sh@99 -- # [[ -z '' ]] 00:06:47.468 11:37:17 -- spdk/autotest.sh@100 -- # block_in_use /dev/nvme0n1 00:06:47.468 11:37:17 -- scripts/common.sh@381 -- # local block=/dev/nvme0n1 pt 00:06:47.468 11:37:17 -- scripts/common.sh@390 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py /dev/nvme0n1 00:06:47.468 No valid GPT data, bailing 00:06:47.468 11:37:17 -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme0n1 00:06:47.468 11:37:17 -- scripts/common.sh@394 -- # pt= 00:06:47.468 11:37:17 -- scripts/common.sh@395 -- # return 1 00:06:47.468 11:37:17 -- spdk/autotest.sh@101 -- # dd if=/dev/zero of=/dev/nvme0n1 bs=1M count=1 00:06:47.468 1+0 records in 00:06:47.468 1+0 records out 00:06:47.468 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0048443 s, 216 MB/s 00:06:47.468 11:37:17 -- spdk/autotest.sh@97 -- # for dev in /dev/nvme*n!(*p*) 00:06:47.468 11:37:17 -- spdk/autotest.sh@99 -- # [[ -z '' ]] 00:06:47.468 11:37:17 -- spdk/autotest.sh@100 -- # block_in_use /dev/nvme1n1 00:06:47.468 11:37:17 -- scripts/common.sh@381 -- # local block=/dev/nvme1n1 pt 00:06:47.468 11:37:17 -- scripts/common.sh@390 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py /dev/nvme1n1 00:06:47.468 No valid GPT data, bailing 00:06:47.468 11:37:17 -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme1n1 00:06:47.468 11:37:17 -- scripts/common.sh@394 -- # pt= 00:06:47.468 11:37:17 -- scripts/common.sh@395 -- # return 1 00:06:47.468 11:37:17 -- spdk/autotest.sh@101 -- # dd if=/dev/zero of=/dev/nvme1n1 bs=1M count=1 00:06:47.468 1+0 records in 00:06:47.468 1+0 records out 00:06:47.468 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00501604 s, 209 MB/s 00:06:47.468 11:37:17 -- spdk/autotest.sh@97 -- # for dev in /dev/nvme*n!(*p*) 00:06:47.468 11:37:17 -- spdk/autotest.sh@99 -- # [[ -z '' ]] 00:06:47.468 11:37:17 -- spdk/autotest.sh@100 -- # block_in_use /dev/nvme1n2 00:06:47.468 11:37:17 -- scripts/common.sh@381 -- # local block=/dev/nvme1n2 pt 00:06:47.468 11:37:17 -- scripts/common.sh@390 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py /dev/nvme1n2 00:06:47.728 No valid GPT data, bailing 00:06:47.728 11:37:17 -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme1n2 00:06:47.728 11:37:17 -- scripts/common.sh@394 -- # pt= 00:06:47.728 11:37:17 -- scripts/common.sh@395 -- # return 1 00:06:47.728 11:37:17 -- spdk/autotest.sh@101 -- # dd if=/dev/zero of=/dev/nvme1n2 bs=1M count=1 00:06:47.728 1+0 records in 00:06:47.728 1+0 records out 00:06:47.728 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00460788 s, 228 MB/s 00:06:47.728 11:37:17 -- spdk/autotest.sh@97 -- # for dev in /dev/nvme*n!(*p*) 00:06:47.728 11:37:17 -- spdk/autotest.sh@99 -- # [[ -z '' ]] 00:06:47.728 11:37:17 -- spdk/autotest.sh@100 -- # block_in_use /dev/nvme1n3 00:06:47.728 
11:37:17 -- scripts/common.sh@381 -- # local block=/dev/nvme1n3 pt 00:06:47.728 11:37:17 -- scripts/common.sh@390 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py /dev/nvme1n3 00:06:47.728 No valid GPT data, bailing 00:06:47.728 11:37:17 -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme1n3 00:06:47.728 11:37:17 -- scripts/common.sh@394 -- # pt= 00:06:47.728 11:37:17 -- scripts/common.sh@395 -- # return 1 00:06:47.728 11:37:17 -- spdk/autotest.sh@101 -- # dd if=/dev/zero of=/dev/nvme1n3 bs=1M count=1 00:06:47.728 1+0 records in 00:06:47.728 1+0 records out 00:06:47.728 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00534846 s, 196 MB/s 00:06:47.728 11:37:17 -- spdk/autotest.sh@105 -- # sync 00:06:47.728 11:37:17 -- spdk/autotest.sh@107 -- # xtrace_disable_per_cmd reap_spdk_processes 00:06:47.728 11:37:17 -- common/autotest_common.sh@22 -- # eval 'reap_spdk_processes 12> /dev/null' 00:06:47.728 11:37:17 -- common/autotest_common.sh@22 -- # reap_spdk_processes 00:06:50.265 11:37:19 -- spdk/autotest.sh@111 -- # uname -s 00:06:50.265 11:37:19 -- spdk/autotest.sh@111 -- # [[ Linux == Linux ]] 00:06:50.265 11:37:19 -- spdk/autotest.sh@111 -- # [[ 0 -eq 1 ]] 00:06:50.265 11:37:19 -- spdk/autotest.sh@115 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh status 00:06:50.523 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:06:50.523 Hugepages 00:06:50.523 node hugesize free / total 00:06:50.523 node0 1048576kB 0 / 0 00:06:50.523 node0 2048kB 0 / 0 00:06:50.523 00:06:50.523 Type BDF Vendor Device NUMA Driver Device Block devices 00:06:50.523 virtio 0000:00:03.0 1af4 1001 unknown virtio-pci - vda 00:06:50.782 NVMe 0000:00:10.0 1b36 0010 unknown nvme nvme0 nvme0n1 00:06:50.782 NVMe 0000:00:11.0 1b36 0010 unknown nvme nvme1 nvme1n1 nvme1n2 nvme1n3 00:06:50.782 11:37:20 -- spdk/autotest.sh@117 -- # uname -s 00:06:50.782 11:37:20 -- spdk/autotest.sh@117 -- # [[ Linux == Linux ]] 00:06:50.782 11:37:20 -- spdk/autotest.sh@119 -- # nvme_namespace_revert 00:06:50.782 11:37:20 -- common/autotest_common.sh@1516 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:06:51.350 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:06:51.609 0000:00:10.0 (1b36 0010): nvme -> uio_pci_generic 00:06:51.609 0000:00:11.0 (1b36 0010): nvme -> uio_pci_generic 00:06:51.609 11:37:21 -- common/autotest_common.sh@1517 -- # sleep 1 00:06:52.545 11:37:22 -- common/autotest_common.sh@1518 -- # bdfs=() 00:06:52.545 11:37:22 -- common/autotest_common.sh@1518 -- # local bdfs 00:06:52.545 11:37:22 -- common/autotest_common.sh@1520 -- # bdfs=($(get_nvme_bdfs)) 00:06:52.545 11:37:22 -- common/autotest_common.sh@1520 -- # get_nvme_bdfs 00:06:52.545 11:37:22 -- common/autotest_common.sh@1498 -- # bdfs=() 00:06:52.545 11:37:22 -- common/autotest_common.sh@1498 -- # local bdfs 00:06:52.545 11:37:22 -- common/autotest_common.sh@1499 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:06:52.545 11:37:22 -- common/autotest_common.sh@1499 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 00:06:52.545 11:37:22 -- common/autotest_common.sh@1499 -- # jq -r '.config[].params.traddr' 00:06:52.804 11:37:22 -- common/autotest_common.sh@1500 -- # (( 2 == 0 )) 00:06:52.804 11:37:22 -- common/autotest_common.sh@1504 -- # printf '%s\n' 0000:00:10.0 0000:00:11.0 00:06:52.804 11:37:22 -- common/autotest_common.sh@1522 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 
00:06:53.062 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:06:53.062 Waiting for block devices as requested 00:06:53.062 0000:00:11.0 (1b36 0010): uio_pci_generic -> nvme 00:06:53.320 0000:00:10.0 (1b36 0010): uio_pci_generic -> nvme 00:06:53.320 11:37:23 -- common/autotest_common.sh@1524 -- # for bdf in "${bdfs[@]}" 00:06:53.320 11:37:23 -- common/autotest_common.sh@1525 -- # get_nvme_ctrlr_from_bdf 0000:00:10.0 00:06:53.320 11:37:23 -- common/autotest_common.sh@1487 -- # grep 0000:00:10.0/nvme/nvme 00:06:53.320 11:37:23 -- common/autotest_common.sh@1487 -- # readlink -f /sys/class/nvme/nvme0 /sys/class/nvme/nvme1 00:06:53.320 11:37:23 -- common/autotest_common.sh@1487 -- # bdf_sysfs_path=/sys/devices/pci0000:00/0000:00:10.0/nvme/nvme1 00:06:53.320 11:37:23 -- common/autotest_common.sh@1488 -- # [[ -z /sys/devices/pci0000:00/0000:00:10.0/nvme/nvme1 ]] 00:06:53.320 11:37:23 -- common/autotest_common.sh@1492 -- # basename /sys/devices/pci0000:00/0000:00:10.0/nvme/nvme1 00:06:53.320 11:37:23 -- common/autotest_common.sh@1492 -- # printf '%s\n' nvme1 00:06:53.320 11:37:23 -- common/autotest_common.sh@1525 -- # nvme_ctrlr=/dev/nvme1 00:06:53.320 11:37:23 -- common/autotest_common.sh@1526 -- # [[ -z /dev/nvme1 ]] 00:06:53.320 11:37:23 -- common/autotest_common.sh@1531 -- # nvme id-ctrl /dev/nvme1 00:06:53.320 11:37:23 -- common/autotest_common.sh@1531 -- # grep oacs 00:06:53.320 11:37:23 -- common/autotest_common.sh@1531 -- # cut -d: -f2 00:06:53.320 11:37:23 -- common/autotest_common.sh@1531 -- # oacs=' 0x12a' 00:06:53.320 11:37:23 -- common/autotest_common.sh@1532 -- # oacs_ns_manage=8 00:06:53.320 11:37:23 -- common/autotest_common.sh@1534 -- # [[ 8 -ne 0 ]] 00:06:53.320 11:37:23 -- common/autotest_common.sh@1540 -- # grep unvmcap 00:06:53.320 11:37:23 -- common/autotest_common.sh@1540 -- # nvme id-ctrl /dev/nvme1 00:06:53.320 11:37:23 -- common/autotest_common.sh@1540 -- # cut -d: -f2 00:06:53.320 11:37:23 -- common/autotest_common.sh@1540 -- # unvmcap=' 0' 00:06:53.320 11:37:23 -- common/autotest_common.sh@1541 -- # [[ 0 -eq 0 ]] 00:06:53.320 11:37:23 -- common/autotest_common.sh@1543 -- # continue 00:06:53.320 11:37:23 -- common/autotest_common.sh@1524 -- # for bdf in "${bdfs[@]}" 00:06:53.320 11:37:23 -- common/autotest_common.sh@1525 -- # get_nvme_ctrlr_from_bdf 0000:00:11.0 00:06:53.320 11:37:23 -- common/autotest_common.sh@1487 -- # readlink -f /sys/class/nvme/nvme0 /sys/class/nvme/nvme1 00:06:53.320 11:37:23 -- common/autotest_common.sh@1487 -- # grep 0000:00:11.0/nvme/nvme 00:06:53.320 11:37:23 -- common/autotest_common.sh@1487 -- # bdf_sysfs_path=/sys/devices/pci0000:00/0000:00:11.0/nvme/nvme0 00:06:53.320 11:37:23 -- common/autotest_common.sh@1488 -- # [[ -z /sys/devices/pci0000:00/0000:00:11.0/nvme/nvme0 ]] 00:06:53.320 11:37:23 -- common/autotest_common.sh@1492 -- # basename /sys/devices/pci0000:00/0000:00:11.0/nvme/nvme0 00:06:53.320 11:37:23 -- common/autotest_common.sh@1492 -- # printf '%s\n' nvme0 00:06:53.320 11:37:23 -- common/autotest_common.sh@1525 -- # nvme_ctrlr=/dev/nvme0 00:06:53.320 11:37:23 -- common/autotest_common.sh@1526 -- # [[ -z /dev/nvme0 ]] 00:06:53.320 11:37:23 -- common/autotest_common.sh@1531 -- # nvme id-ctrl /dev/nvme0 00:06:53.320 11:37:23 -- common/autotest_common.sh@1531 -- # grep oacs 00:06:53.320 11:37:23 -- common/autotest_common.sh@1531 -- # cut -d: -f2 00:06:53.320 11:37:23 -- common/autotest_common.sh@1531 -- # oacs=' 0x12a' 00:06:53.320 11:37:23 -- 
common/autotest_common.sh@1532 -- # oacs_ns_manage=8 00:06:53.320 11:37:23 -- common/autotest_common.sh@1534 -- # [[ 8 -ne 0 ]] 00:06:53.320 11:37:23 -- common/autotest_common.sh@1540 -- # nvme id-ctrl /dev/nvme0 00:06:53.320 11:37:23 -- common/autotest_common.sh@1540 -- # grep unvmcap 00:06:53.320 11:37:23 -- common/autotest_common.sh@1540 -- # cut -d: -f2 00:06:53.320 11:37:23 -- common/autotest_common.sh@1540 -- # unvmcap=' 0' 00:06:53.320 11:37:23 -- common/autotest_common.sh@1541 -- # [[ 0 -eq 0 ]] 00:06:53.320 11:37:23 -- common/autotest_common.sh@1543 -- # continue 00:06:53.320 11:37:23 -- spdk/autotest.sh@122 -- # timing_exit pre_cleanup 00:06:53.320 11:37:23 -- common/autotest_common.sh@732 -- # xtrace_disable 00:06:53.320 11:37:23 -- common/autotest_common.sh@10 -- # set +x 00:06:53.320 11:37:23 -- spdk/autotest.sh@125 -- # timing_enter afterboot 00:06:53.320 11:37:23 -- common/autotest_common.sh@726 -- # xtrace_disable 00:06:53.320 11:37:23 -- common/autotest_common.sh@10 -- # set +x 00:06:53.320 11:37:23 -- spdk/autotest.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:06:54.276 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:06:54.276 0000:00:10.0 (1b36 0010): nvme -> uio_pci_generic 00:06:54.276 0000:00:11.0 (1b36 0010): nvme -> uio_pci_generic 00:06:54.276 11:37:24 -- spdk/autotest.sh@127 -- # timing_exit afterboot 00:06:54.276 11:37:24 -- common/autotest_common.sh@732 -- # xtrace_disable 00:06:54.276 11:37:24 -- common/autotest_common.sh@10 -- # set +x 00:06:54.276 11:37:24 -- spdk/autotest.sh@131 -- # opal_revert_cleanup 00:06:54.276 11:37:24 -- common/autotest_common.sh@1578 -- # mapfile -t bdfs 00:06:54.276 11:37:24 -- common/autotest_common.sh@1578 -- # get_nvme_bdfs_by_id 0x0a54 00:06:54.276 11:37:24 -- common/autotest_common.sh@1563 -- # bdfs=() 00:06:54.276 11:37:24 -- common/autotest_common.sh@1563 -- # _bdfs=() 00:06:54.276 11:37:24 -- common/autotest_common.sh@1563 -- # local bdfs _bdfs 00:06:54.276 11:37:24 -- common/autotest_common.sh@1564 -- # _bdfs=($(get_nvme_bdfs)) 00:06:54.276 11:37:24 -- common/autotest_common.sh@1564 -- # get_nvme_bdfs 00:06:54.276 11:37:24 -- common/autotest_common.sh@1498 -- # bdfs=() 00:06:54.276 11:37:24 -- common/autotest_common.sh@1498 -- # local bdfs 00:06:54.276 11:37:24 -- common/autotest_common.sh@1499 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:06:54.276 11:37:24 -- common/autotest_common.sh@1499 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 00:06:54.276 11:37:24 -- common/autotest_common.sh@1499 -- # jq -r '.config[].params.traddr' 00:06:54.276 11:37:24 -- common/autotest_common.sh@1500 -- # (( 2 == 0 )) 00:06:54.276 11:37:24 -- common/autotest_common.sh@1504 -- # printf '%s\n' 0000:00:10.0 0000:00:11.0 00:06:54.276 11:37:24 -- common/autotest_common.sh@1565 -- # for bdf in "${_bdfs[@]}" 00:06:54.276 11:37:24 -- common/autotest_common.sh@1566 -- # cat /sys/bus/pci/devices/0000:00:10.0/device 00:06:54.276 11:37:24 -- common/autotest_common.sh@1566 -- # device=0x0010 00:06:54.276 11:37:24 -- common/autotest_common.sh@1567 -- # [[ 0x0010 == \0\x\0\a\5\4 ]] 00:06:54.276 11:37:24 -- common/autotest_common.sh@1565 -- # for bdf in "${_bdfs[@]}" 00:06:54.276 11:37:24 -- common/autotest_common.sh@1566 -- # cat /sys/bus/pci/devices/0000:00:11.0/device 00:06:54.276 11:37:24 -- common/autotest_common.sh@1566 -- # device=0x0010 00:06:54.276 11:37:24 -- common/autotest_common.sh@1567 -- # [[ 0x0010 == \0\x\0\a\5\4 ]] 
00:06:54.276 11:37:24 -- common/autotest_common.sh@1572 -- # (( 0 > 0 )) 00:06:54.276 11:37:24 -- common/autotest_common.sh@1572 -- # return 0 00:06:54.276 11:37:24 -- common/autotest_common.sh@1579 -- # [[ -z '' ]] 00:06:54.276 11:37:24 -- common/autotest_common.sh@1580 -- # return 0 00:06:54.276 11:37:24 -- spdk/autotest.sh@137 -- # '[' 0 -eq 1 ']' 00:06:54.276 11:37:24 -- spdk/autotest.sh@141 -- # '[' 1 -eq 1 ']' 00:06:54.276 11:37:24 -- spdk/autotest.sh@142 -- # [[ 0 -eq 1 ]] 00:06:54.276 11:37:24 -- spdk/autotest.sh@142 -- # [[ 0 -eq 1 ]] 00:06:54.276 11:37:24 -- spdk/autotest.sh@149 -- # timing_enter lib 00:06:54.276 11:37:24 -- common/autotest_common.sh@726 -- # xtrace_disable 00:06:54.276 11:37:24 -- common/autotest_common.sh@10 -- # set +x 00:06:54.276 11:37:24 -- spdk/autotest.sh@151 -- # [[ 1 -eq 1 ]] 00:06:54.276 11:37:24 -- spdk/autotest.sh@152 -- # export SPDK_SOCK_IMPL_DEFAULT=uring 00:06:54.276 11:37:24 -- spdk/autotest.sh@152 -- # SPDK_SOCK_IMPL_DEFAULT=uring 00:06:54.276 11:37:24 -- spdk/autotest.sh@155 -- # run_test env /home/vagrant/spdk_repo/spdk/test/env/env.sh 00:06:54.276 11:37:24 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:06:54.276 11:37:24 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:54.276 11:37:24 -- common/autotest_common.sh@10 -- # set +x 00:06:54.276 ************************************ 00:06:54.276 START TEST env 00:06:54.276 ************************************ 00:06:54.276 11:37:24 env -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/env/env.sh 00:06:54.535 * Looking for test storage... 00:06:54.535 * Found test storage at /home/vagrant/spdk_repo/spdk/test/env 00:06:54.535 11:37:24 env -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:06:54.535 11:37:24 env -- common/autotest_common.sh@1693 -- # lcov --version 00:06:54.535 11:37:24 env -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:06:54.535 11:37:24 env -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:06:54.535 11:37:24 env -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:06:54.535 11:37:24 env -- scripts/common.sh@333 -- # local ver1 ver1_l 00:06:54.535 11:37:24 env -- scripts/common.sh@334 -- # local ver2 ver2_l 00:06:54.535 11:37:24 env -- scripts/common.sh@336 -- # IFS=.-: 00:06:54.535 11:37:24 env -- scripts/common.sh@336 -- # read -ra ver1 00:06:54.535 11:37:24 env -- scripts/common.sh@337 -- # IFS=.-: 00:06:54.535 11:37:24 env -- scripts/common.sh@337 -- # read -ra ver2 00:06:54.535 11:37:24 env -- scripts/common.sh@338 -- # local 'op=<' 00:06:54.535 11:37:24 env -- scripts/common.sh@340 -- # ver1_l=2 00:06:54.535 11:37:24 env -- scripts/common.sh@341 -- # ver2_l=1 00:06:54.535 11:37:24 env -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:06:54.535 11:37:24 env -- scripts/common.sh@344 -- # case "$op" in 00:06:54.535 11:37:24 env -- scripts/common.sh@345 -- # : 1 00:06:54.535 11:37:24 env -- scripts/common.sh@364 -- # (( v = 0 )) 00:06:54.535 11:37:24 env -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:06:54.535 11:37:24 env -- scripts/common.sh@365 -- # decimal 1 00:06:54.535 11:37:24 env -- scripts/common.sh@353 -- # local d=1 00:06:54.535 11:37:24 env -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:06:54.535 11:37:24 env -- scripts/common.sh@355 -- # echo 1 00:06:54.535 11:37:24 env -- scripts/common.sh@365 -- # ver1[v]=1 00:06:54.535 11:37:24 env -- scripts/common.sh@366 -- # decimal 2 00:06:54.535 11:37:24 env -- scripts/common.sh@353 -- # local d=2 00:06:54.535 11:37:24 env -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:06:54.535 11:37:24 env -- scripts/common.sh@355 -- # echo 2 00:06:54.535 11:37:24 env -- scripts/common.sh@366 -- # ver2[v]=2 00:06:54.535 11:37:24 env -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:06:54.535 11:37:24 env -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:06:54.535 11:37:24 env -- scripts/common.sh@368 -- # return 0 00:06:54.536 11:37:24 env -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:06:54.536 11:37:24 env -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:06:54.536 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:54.536 --rc genhtml_branch_coverage=1 00:06:54.536 --rc genhtml_function_coverage=1 00:06:54.536 --rc genhtml_legend=1 00:06:54.536 --rc geninfo_all_blocks=1 00:06:54.536 --rc geninfo_unexecuted_blocks=1 00:06:54.536 00:06:54.536 ' 00:06:54.536 11:37:24 env -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:06:54.536 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:54.536 --rc genhtml_branch_coverage=1 00:06:54.536 --rc genhtml_function_coverage=1 00:06:54.536 --rc genhtml_legend=1 00:06:54.536 --rc geninfo_all_blocks=1 00:06:54.536 --rc geninfo_unexecuted_blocks=1 00:06:54.536 00:06:54.536 ' 00:06:54.536 11:37:24 env -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:06:54.536 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:54.536 --rc genhtml_branch_coverage=1 00:06:54.536 --rc genhtml_function_coverage=1 00:06:54.536 --rc genhtml_legend=1 00:06:54.536 --rc geninfo_all_blocks=1 00:06:54.536 --rc geninfo_unexecuted_blocks=1 00:06:54.536 00:06:54.536 ' 00:06:54.536 11:37:24 env -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:06:54.536 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:54.536 --rc genhtml_branch_coverage=1 00:06:54.536 --rc genhtml_function_coverage=1 00:06:54.536 --rc genhtml_legend=1 00:06:54.536 --rc geninfo_all_blocks=1 00:06:54.536 --rc geninfo_unexecuted_blocks=1 00:06:54.536 00:06:54.536 ' 00:06:54.536 11:37:24 env -- env/env.sh@10 -- # run_test env_memory /home/vagrant/spdk_repo/spdk/test/env/memory/memory_ut 00:06:54.536 11:37:24 env -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:06:54.536 11:37:24 env -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:54.536 11:37:24 env -- common/autotest_common.sh@10 -- # set +x 00:06:54.536 ************************************ 00:06:54.536 START TEST env_memory 00:06:54.536 ************************************ 00:06:54.536 11:37:24 env.env_memory -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/env/memory/memory_ut 00:06:54.536 00:06:54.536 00:06:54.536 CUnit - A unit testing framework for C - Version 2.1-3 00:06:54.536 http://cunit.sourceforge.net/ 00:06:54.536 00:06:54.536 00:06:54.536 Suite: memory 00:06:54.536 Test: alloc and free memory map ...[2024-11-28 11:37:24.644994] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 
283:spdk_mem_map_alloc: *ERROR*: Initial mem_map notify failed 00:06:54.795 passed 00:06:54.795 Test: mem map translation ...[2024-11-28 11:37:24.676302] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 595:spdk_mem_map_set_translation: *ERROR*: invalid spdk_mem_map_set_translation parameters, vaddr=2097152 len=1234 00:06:54.795 [2024-11-28 11:37:24.676350] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 595:spdk_mem_map_set_translation: *ERROR*: invalid spdk_mem_map_set_translation parameters, vaddr=1234 len=2097152 00:06:54.795 [2024-11-28 11:37:24.676405] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 589:spdk_mem_map_set_translation: *ERROR*: invalid usermode virtual address 281474976710656 00:06:54.795 [2024-11-28 11:37:24.676417] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 605:spdk_mem_map_set_translation: *ERROR*: could not get 0xffffffe00000 map 00:06:54.795 passed 00:06:54.795 Test: mem map registration ...[2024-11-28 11:37:24.740181] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 347:spdk_mem_register: *ERROR*: invalid spdk_mem_register parameters, vaddr=200000 len=1234 00:06:54.795 [2024-11-28 11:37:24.740225] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 347:spdk_mem_register: *ERROR*: invalid spdk_mem_register parameters, vaddr=4d2 len=2097152 00:06:54.795 passed 00:06:54.795 Test: mem map adjacent registrations ...passed 00:06:54.795 00:06:54.795 Run Summary: Type Total Ran Passed Failed Inactive 00:06:54.795 suites 1 1 n/a 0 0 00:06:54.795 tests 4 4 4 0 0 00:06:54.795 asserts 152 152 152 0 n/a 00:06:54.795 00:06:54.795 Elapsed time = 0.214 seconds 00:06:54.795 00:06:54.795 real 0m0.230s 00:06:54.795 user 0m0.216s 00:06:54.795 sys 0m0.011s 00:06:54.795 11:37:24 env.env_memory -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:54.795 11:37:24 env.env_memory -- common/autotest_common.sh@10 -- # set +x 00:06:54.795 ************************************ 00:06:54.795 END TEST env_memory 00:06:54.795 ************************************ 00:06:54.795 11:37:24 env -- env/env.sh@11 -- # run_test env_vtophys /home/vagrant/spdk_repo/spdk/test/env/vtophys/vtophys 00:06:54.795 11:37:24 env -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:06:54.795 11:37:24 env -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:54.795 11:37:24 env -- common/autotest_common.sh@10 -- # set +x 00:06:54.795 ************************************ 00:06:54.795 START TEST env_vtophys 00:06:54.795 ************************************ 00:06:54.795 11:37:24 env.env_vtophys -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/env/vtophys/vtophys 00:06:54.795 EAL: lib.eal log level changed from notice to debug 00:06:54.795 EAL: Detected lcore 0 as core 0 on socket 0 00:06:54.795 EAL: Detected lcore 1 as core 0 on socket 0 00:06:54.795 EAL: Detected lcore 2 as core 0 on socket 0 00:06:54.795 EAL: Detected lcore 3 as core 0 on socket 0 00:06:54.795 EAL: Detected lcore 4 as core 0 on socket 0 00:06:54.795 EAL: Detected lcore 5 as core 0 on socket 0 00:06:54.795 EAL: Detected lcore 6 as core 0 on socket 0 00:06:54.795 EAL: Detected lcore 7 as core 0 on socket 0 00:06:54.795 EAL: Detected lcore 8 as core 0 on socket 0 00:06:54.795 EAL: Detected lcore 9 as core 0 on socket 0 00:06:54.795 EAL: Maximum logical cores by configuration: 128 00:06:54.795 EAL: Detected CPU lcores: 10 00:06:54.795 EAL: Detected NUMA nodes: 1 00:06:54.795 EAL: Checking presence of .so 'librte_eal.so.25.0' 00:06:54.795 EAL: Detected shared linkage of DPDK 00:06:54.795 EAL: 
open shared lib /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-25.0/librte_bus_pci.so.25.0 00:06:54.795 EAL: open shared lib /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-25.0/librte_bus_vdev.so.25.0 00:06:54.795 EAL: Registered [vdev] bus. 00:06:54.795 EAL: bus.vdev log level changed from disabled to notice 00:06:54.795 EAL: open shared lib /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-25.0/librte_mempool_ring.so.25.0 00:06:54.795 EAL: open shared lib /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-25.0/librte_net_i40e.so.25.0 00:06:54.795 EAL: pmd.net.i40e.init log level changed from disabled to notice 00:06:54.795 EAL: pmd.net.i40e.driver log level changed from disabled to notice 00:06:54.795 EAL: open shared lib /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-25.0/librte_power_acpi.so.25.0 00:06:54.795 EAL: open shared lib /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-25.0/librte_power_amd_pstate.so.25.0 00:06:54.795 EAL: open shared lib /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-25.0/librte_power_cppc.so.25.0 00:06:54.795 EAL: open shared lib /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-25.0/librte_power_intel_pstate.so.25.0 00:06:54.795 EAL: open shared lib /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-25.0/librte_power_intel_uncore.so.25.0 00:06:54.795 EAL: open shared lib /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-25.0/librte_power_kvm_vm.so.25.0 00:06:54.795 EAL: open shared lib /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-25.0/librte_bus_pci.so 00:06:54.795 EAL: open shared lib /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-25.0/librte_bus_vdev.so 00:06:54.795 EAL: open shared lib /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-25.0/librte_mempool_ring.so 00:06:54.795 EAL: open shared lib /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-25.0/librte_net_i40e.so 00:06:54.795 EAL: open shared lib /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-25.0/librte_power_acpi.so 00:06:54.795 EAL: open shared lib /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-25.0/librte_power_amd_pstate.so 00:06:54.795 EAL: open shared lib /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-25.0/librte_power_cppc.so 00:06:54.795 EAL: open shared lib /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-25.0/librte_power_intel_pstate.so 00:06:54.795 EAL: open shared lib /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-25.0/librte_power_intel_uncore.so 00:06:54.795 EAL: open shared lib /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-25.0/librte_power_kvm_vm.so 00:06:55.056 EAL: No shared files mode enabled, IPC will be disabled 00:06:55.056 EAL: No shared files mode enabled, IPC is disabled 00:06:55.056 EAL: Selected IOVA mode 'PA' 00:06:55.056 EAL: Probing VFIO support... 00:06:55.056 EAL: Module /sys/module/vfio not found! error 2 (No such file or directory) 00:06:55.056 EAL: VFIO modules not loaded, skipping VFIO support... 00:06:55.056 EAL: Ask a virtual area of 0x2e000 bytes 00:06:55.056 EAL: Virtual area found at 0x200000000000 (size = 0x2e000) 00:06:55.056 EAL: Setting up physically contiguous memory... 
00:06:55.056 EAL: Setting maximum number of open files to 524288 00:06:55.056 EAL: Detected memory type: socket_id:0 hugepage_sz:2097152 00:06:55.056 EAL: Creating 4 segment lists: n_segs:8192 socket_id:0 hugepage_sz:2097152 00:06:55.056 EAL: Ask a virtual area of 0x61000 bytes 00:06:55.056 EAL: Virtual area found at 0x20000002e000 (size = 0x61000) 00:06:55.056 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:06:55.056 EAL: Ask a virtual area of 0x400000000 bytes 00:06:55.056 EAL: Virtual area found at 0x200000200000 (size = 0x400000000) 00:06:55.056 EAL: VA reserved for memseg list at 0x200000200000, size 400000000 00:06:55.056 EAL: Ask a virtual area of 0x61000 bytes 00:06:55.056 EAL: Virtual area found at 0x200400200000 (size = 0x61000) 00:06:55.056 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:06:55.056 EAL: Ask a virtual area of 0x400000000 bytes 00:06:55.056 EAL: Virtual area found at 0x200400400000 (size = 0x400000000) 00:06:55.056 EAL: VA reserved for memseg list at 0x200400400000, size 400000000 00:06:55.056 EAL: Ask a virtual area of 0x61000 bytes 00:06:55.056 EAL: Virtual area found at 0x200800400000 (size = 0x61000) 00:06:55.056 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:06:55.056 EAL: Ask a virtual area of 0x400000000 bytes 00:06:55.056 EAL: Virtual area found at 0x200800600000 (size = 0x400000000) 00:06:55.056 EAL: VA reserved for memseg list at 0x200800600000, size 400000000 00:06:55.056 EAL: Ask a virtual area of 0x61000 bytes 00:06:55.056 EAL: Virtual area found at 0x200c00600000 (size = 0x61000) 00:06:55.056 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:06:55.056 EAL: Ask a virtual area of 0x400000000 bytes 00:06:55.056 EAL: Virtual area found at 0x200c00800000 (size = 0x400000000) 00:06:55.056 EAL: VA reserved for memseg list at 0x200c00800000, size 400000000 00:06:55.056 EAL: Hugepages will be freed exactly as allocated. 00:06:55.056 EAL: No shared files mode enabled, IPC is disabled 00:06:55.056 EAL: No shared files mode enabled, IPC is disabled 00:06:55.056 EAL: TSC frequency is ~2200000 KHz 00:06:55.056 EAL: Main lcore 0 is ready (tid=7fb80f136a00;cpuset=[0]) 00:06:55.056 EAL: Trying to obtain current memory policy. 00:06:55.056 EAL: Setting policy MPOL_PREFERRED for socket 0 00:06:55.056 EAL: Restoring previous memory policy: 0 00:06:55.056 EAL: request: mp_malloc_sync 00:06:55.056 EAL: No shared files mode enabled, IPC is disabled 00:06:55.056 EAL: Heap on socket 0 was expanded by 2MB 00:06:55.057 EAL: Allocated 2112 bytes of per-lcore data with a 64-byte alignment 00:06:55.057 EAL: No shared files mode enabled, IPC is disabled 00:06:55.057 EAL: Mem event callback 'spdk:(nil)' registered 00:06:55.057 EAL: Module /sys/module/vfio_pci not found! error 2 (No such file or directory) 00:06:55.057 00:06:55.057 00:06:55.057 CUnit - A unit testing framework for C - Version 2.1-3 00:06:55.057 http://cunit.sourceforge.net/ 00:06:55.057 00:06:55.057 00:06:55.057 Suite: components_suite 00:06:55.057 Test: vtophys_malloc_test ...passed 00:06:55.057 Test: vtophys_spdk_malloc_test ...EAL: Trying to obtain current memory policy. 
00:06:55.057 EAL: Setting policy MPOL_PREFERRED for socket 0 00:06:55.057 EAL: Restoring previous memory policy: 4 00:06:55.057 EAL: Calling mem event callback 'spdk:(nil)' 00:06:55.057 EAL: request: mp_malloc_sync 00:06:55.057 EAL: No shared files mode enabled, IPC is disabled 00:06:55.057 EAL: Heap on socket 0 was expanded by 4MB 00:06:55.057 EAL: Calling mem event callback 'spdk:(nil)' 00:06:55.057 EAL: request: mp_malloc_sync 00:06:55.057 EAL: No shared files mode enabled, IPC is disabled 00:06:55.057 EAL: Heap on socket 0 was shrunk by 4MB 00:06:55.057 EAL: Trying to obtain current memory policy. 00:06:55.057 EAL: Setting policy MPOL_PREFERRED for socket 0 00:06:55.057 EAL: Restoring previous memory policy: 4 00:06:55.057 EAL: Calling mem event callback 'spdk:(nil)' 00:06:55.057 EAL: request: mp_malloc_sync 00:06:55.057 EAL: No shared files mode enabled, IPC is disabled 00:06:55.057 EAL: Heap on socket 0 was expanded by 6MB 00:06:55.057 EAL: Calling mem event callback 'spdk:(nil)' 00:06:55.057 EAL: request: mp_malloc_sync 00:06:55.057 EAL: No shared files mode enabled, IPC is disabled 00:06:55.057 EAL: Heap on socket 0 was shrunk by 6MB 00:06:55.057 EAL: Trying to obtain current memory policy. 00:06:55.057 EAL: Setting policy MPOL_PREFERRED for socket 0 00:06:55.057 EAL: Restoring previous memory policy: 4 00:06:55.057 EAL: Calling mem event callback 'spdk:(nil)' 00:06:55.057 EAL: request: mp_malloc_sync 00:06:55.057 EAL: No shared files mode enabled, IPC is disabled 00:06:55.057 EAL: Heap on socket 0 was expanded by 10MB 00:06:55.057 EAL: Calling mem event callback 'spdk:(nil)' 00:06:55.057 EAL: request: mp_malloc_sync 00:06:55.057 EAL: No shared files mode enabled, IPC is disabled 00:06:55.057 EAL: Heap on socket 0 was shrunk by 10MB 00:06:55.057 EAL: Trying to obtain current memory policy. 00:06:55.057 EAL: Setting policy MPOL_PREFERRED for socket 0 00:06:55.057 EAL: Restoring previous memory policy: 4 00:06:55.057 EAL: Calling mem event callback 'spdk:(nil)' 00:06:55.057 EAL: request: mp_malloc_sync 00:06:55.057 EAL: No shared files mode enabled, IPC is disabled 00:06:55.057 EAL: Heap on socket 0 was expanded by 18MB 00:06:55.057 EAL: Calling mem event callback 'spdk:(nil)' 00:06:55.057 EAL: request: mp_malloc_sync 00:06:55.057 EAL: No shared files mode enabled, IPC is disabled 00:06:55.057 EAL: Heap on socket 0 was shrunk by 18MB 00:06:55.057 EAL: Trying to obtain current memory policy. 00:06:55.057 EAL: Setting policy MPOL_PREFERRED for socket 0 00:06:55.057 EAL: Restoring previous memory policy: 4 00:06:55.057 EAL: Calling mem event callback 'spdk:(nil)' 00:06:55.057 EAL: request: mp_malloc_sync 00:06:55.057 EAL: No shared files mode enabled, IPC is disabled 00:06:55.057 EAL: Heap on socket 0 was expanded by 34MB 00:06:55.057 EAL: Calling mem event callback 'spdk:(nil)' 00:06:55.057 EAL: request: mp_malloc_sync 00:06:55.057 EAL: No shared files mode enabled, IPC is disabled 00:06:55.057 EAL: Heap on socket 0 was shrunk by 34MB 00:06:55.057 EAL: Trying to obtain current memory policy. 
00:06:55.057 EAL: Setting policy MPOL_PREFERRED for socket 0 00:06:55.057 EAL: Restoring previous memory policy: 4 00:06:55.057 EAL: Calling mem event callback 'spdk:(nil)' 00:06:55.057 EAL: request: mp_malloc_sync 00:06:55.057 EAL: No shared files mode enabled, IPC is disabled 00:06:55.057 EAL: Heap on socket 0 was expanded by 66MB 00:06:55.057 EAL: Calling mem event callback 'spdk:(nil)' 00:06:55.057 EAL: request: mp_malloc_sync 00:06:55.057 EAL: No shared files mode enabled, IPC is disabled 00:06:55.057 EAL: Heap on socket 0 was shrunk by 66MB 00:06:55.057 EAL: Trying to obtain current memory policy. 00:06:55.057 EAL: Setting policy MPOL_PREFERRED for socket 0 00:06:55.057 EAL: Restoring previous memory policy: 4 00:06:55.057 EAL: Calling mem event callback 'spdk:(nil)' 00:06:55.057 EAL: request: mp_malloc_sync 00:06:55.057 EAL: No shared files mode enabled, IPC is disabled 00:06:55.057 EAL: Heap on socket 0 was expanded by 130MB 00:06:55.057 EAL: Calling mem event callback 'spdk:(nil)' 00:06:55.316 EAL: request: mp_malloc_sync 00:06:55.316 EAL: No shared files mode enabled, IPC is disabled 00:06:55.316 EAL: Heap on socket 0 was shrunk by 130MB 00:06:55.316 EAL: Trying to obtain current memory policy. 00:06:55.316 EAL: Setting policy MPOL_PREFERRED for socket 0 00:06:55.317 EAL: Restoring previous memory policy: 4 00:06:55.317 EAL: Calling mem event callback 'spdk:(nil)' 00:06:55.317 EAL: request: mp_malloc_sync 00:06:55.317 EAL: No shared files mode enabled, IPC is disabled 00:06:55.317 EAL: Heap on socket 0 was expanded by 258MB 00:06:55.317 EAL: Calling mem event callback 'spdk:(nil)' 00:06:55.317 EAL: request: mp_malloc_sync 00:06:55.317 EAL: No shared files mode enabled, IPC is disabled 00:06:55.317 EAL: Heap on socket 0 was shrunk by 258MB 00:06:55.317 EAL: Trying to obtain current memory policy. 00:06:55.317 EAL: Setting policy MPOL_PREFERRED for socket 0 00:06:55.576 EAL: Restoring previous memory policy: 4 00:06:55.576 EAL: Calling mem event callback 'spdk:(nil)' 00:06:55.576 EAL: request: mp_malloc_sync 00:06:55.576 EAL: No shared files mode enabled, IPC is disabled 00:06:55.576 EAL: Heap on socket 0 was expanded by 514MB 00:06:55.576 EAL: Calling mem event callback 'spdk:(nil)' 00:06:55.576 EAL: request: mp_malloc_sync 00:06:55.576 EAL: No shared files mode enabled, IPC is disabled 00:06:55.576 EAL: Heap on socket 0 was shrunk by 514MB 00:06:55.576 EAL: Trying to obtain current memory policy. 
00:06:55.576 EAL: Setting policy MPOL_PREFERRED for socket 0 00:06:56.146 EAL: Restoring previous memory policy: 4 00:06:56.146 EAL: Calling mem event callback 'spdk:(nil)' 00:06:56.146 EAL: request: mp_malloc_sync 00:06:56.146 EAL: No shared files mode enabled, IPC is disabled 00:06:56.146 EAL: Heap on socket 0 was expanded by 1026MB 00:06:56.146 EAL: Calling mem event callback 'spdk:(nil)' 00:06:56.405 passed 00:06:56.405 00:06:56.405 Run Summary: Type Total Ran Passed Failed Inactive 00:06:56.405 suites 1 1 n/a 0 0 00:06:56.405 tests 2 2 2 0 0 00:06:56.405 asserts 5862 5862 5862 0 n/a 00:06:56.405 00:06:56.405 Elapsed time = 1.289 seconds 00:06:56.405 EAL: request: mp_malloc_sync 00:06:56.405 EAL: No shared files mode enabled, IPC is disabled 00:06:56.405 EAL: Heap on socket 0 was shrunk by 1026MB 00:06:56.405 EAL: Calling mem event callback 'spdk:(nil)' 00:06:56.405 EAL: request: mp_malloc_sync 00:06:56.405 EAL: No shared files mode enabled, IPC is disabled 00:06:56.405 EAL: Heap on socket 0 was shrunk by 2MB 00:06:56.405 EAL: No shared files mode enabled, IPC is disabled 00:06:56.405 EAL: No shared files mode enabled, IPC is disabled 00:06:56.405 EAL: No shared files mode enabled, IPC is disabled 00:06:56.405 00:06:56.405 real 0m1.502s 00:06:56.405 user 0m0.843s 00:06:56.405 sys 0m0.528s 00:06:56.405 11:37:26 env.env_vtophys -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:56.405 11:37:26 env.env_vtophys -- common/autotest_common.sh@10 -- # set +x 00:06:56.405 ************************************ 00:06:56.405 END TEST env_vtophys 00:06:56.405 ************************************ 00:06:56.405 11:37:26 env -- env/env.sh@12 -- # run_test env_pci /home/vagrant/spdk_repo/spdk/test/env/pci/pci_ut 00:06:56.405 11:37:26 env -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:06:56.405 11:37:26 env -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:56.405 11:37:26 env -- common/autotest_common.sh@10 -- # set +x 00:06:56.405 ************************************ 00:06:56.405 START TEST env_pci 00:06:56.405 ************************************ 00:06:56.405 11:37:26 env.env_pci -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/env/pci/pci_ut 00:06:56.405 00:06:56.405 00:06:56.405 CUnit - A unit testing framework for C - Version 2.1-3 00:06:56.405 http://cunit.sourceforge.net/ 00:06:56.405 00:06:56.405 00:06:56.405 Suite: pci 00:06:56.405 Test: pci_hook ...[2024-11-28 11:37:26.452819] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/pci.c:1117:spdk_pci_device_claim: *ERROR*: Cannot create lock on device /var/tmp/spdk_pci_lock_10000:00:01.0, probably process 70729 has claimed it 00:06:56.405 passed 00:06:56.405 00:06:56.405 Run Summary: Type Total Ran Passed Failed Inactive 00:06:56.405 suites 1 1 n/a 0 0 00:06:56.405 tests 1 1 1 0 0 00:06:56.405 asserts 25 25 25 0 n/a 00:06:56.405 00:06:56.405 Elapsed time = 0.002 seconds 00:06:56.405 EAL: Cannot find device (10000:00:01.0) 00:06:56.405 EAL: Failed to attach device on primary process 00:06:56.405 00:06:56.405 real 0m0.022s 00:06:56.405 user 0m0.012s 00:06:56.405 sys 0m0.009s 00:06:56.405 11:37:26 env.env_pci -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:56.405 11:37:26 env.env_pci -- common/autotest_common.sh@10 -- # set +x 00:06:56.405 ************************************ 00:06:56.405 END TEST env_pci 00:06:56.405 ************************************ 00:06:56.405 11:37:26 env -- env/env.sh@14 -- # argv='-c 0x1 ' 00:06:56.405 11:37:26 env -- env/env.sh@15 -- # uname 00:06:56.405 11:37:26 env 
-- env/env.sh@15 -- # '[' Linux = Linux ']' 00:06:56.405 11:37:26 env -- env/env.sh@22 -- # argv+=--base-virtaddr=0x200000000000 00:06:56.405 11:37:26 env -- env/env.sh@24 -- # run_test env_dpdk_post_init /home/vagrant/spdk_repo/spdk/test/env/env_dpdk_post_init/env_dpdk_post_init -c 0x1 --base-virtaddr=0x200000000000 00:06:56.405 11:37:26 env -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:06:56.405 11:37:26 env -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:56.405 11:37:26 env -- common/autotest_common.sh@10 -- # set +x 00:06:56.406 ************************************ 00:06:56.406 START TEST env_dpdk_post_init 00:06:56.406 ************************************ 00:06:56.406 11:37:26 env.env_dpdk_post_init -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/env/env_dpdk_post_init/env_dpdk_post_init -c 0x1 --base-virtaddr=0x200000000000 00:06:56.665 EAL: Detected CPU lcores: 10 00:06:56.665 EAL: Detected NUMA nodes: 1 00:06:56.665 EAL: Detected shared linkage of DPDK 00:06:56.665 EAL: Multi-process socket /var/run/dpdk/rte/mp_socket 00:06:56.665 EAL: Selected IOVA mode 'PA' 00:06:56.665 Starting DPDK initialization... 00:06:56.665 Starting SPDK post initialization... 00:06:56.665 SPDK NVMe probe 00:06:56.665 Attaching to 0000:00:10.0 00:06:56.665 Attaching to 0000:00:11.0 00:06:56.665 Attached to 0000:00:10.0 00:06:56.665 Attached to 0000:00:11.0 00:06:56.665 Cleaning up... 00:06:56.665 00:06:56.665 real 0m0.202s 00:06:56.665 user 0m0.066s 00:06:56.665 sys 0m0.037s 00:06:56.665 11:37:26 env.env_dpdk_post_init -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:56.665 11:37:26 env.env_dpdk_post_init -- common/autotest_common.sh@10 -- # set +x 00:06:56.665 ************************************ 00:06:56.665 END TEST env_dpdk_post_init 00:06:56.665 ************************************ 00:06:56.665 11:37:26 env -- env/env.sh@26 -- # uname 00:06:56.665 11:37:26 env -- env/env.sh@26 -- # '[' Linux = Linux ']' 00:06:56.665 11:37:26 env -- env/env.sh@29 -- # run_test env_mem_callbacks /home/vagrant/spdk_repo/spdk/test/env/mem_callbacks/mem_callbacks 00:06:56.665 11:37:26 env -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:06:56.665 11:37:26 env -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:56.665 11:37:26 env -- common/autotest_common.sh@10 -- # set +x 00:06:56.665 ************************************ 00:06:56.665 START TEST env_mem_callbacks 00:06:56.665 ************************************ 00:06:56.665 11:37:26 env.env_mem_callbacks -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/env/mem_callbacks/mem_callbacks 00:06:56.924 EAL: Detected CPU lcores: 10 00:06:56.924 EAL: Detected NUMA nodes: 1 00:06:56.924 EAL: Detected shared linkage of DPDK 00:06:56.924 EAL: Multi-process socket /var/run/dpdk/rte/mp_socket 00:06:56.924 EAL: Selected IOVA mode 'PA' 00:06:56.924 00:06:56.924 00:06:56.924 CUnit - A unit testing framework for C - Version 2.1-3 00:06:56.924 http://cunit.sourceforge.net/ 00:06:56.924 00:06:56.924 00:06:56.924 Suite: memory 00:06:56.924 Test: test ... 
00:06:56.924 register 0x200000200000 2097152 00:06:56.924 malloc 3145728 00:06:56.924 register 0x200000400000 4194304 00:06:56.924 buf 0x200000500000 len 3145728 PASSED 00:06:56.924 malloc 64 00:06:56.924 buf 0x2000004fff40 len 64 PASSED 00:06:56.924 malloc 4194304 00:06:56.924 register 0x200000800000 6291456 00:06:56.924 buf 0x200000a00000 len 4194304 PASSED 00:06:56.924 free 0x200000500000 3145728 00:06:56.924 free 0x2000004fff40 64 00:06:56.924 unregister 0x200000400000 4194304 PASSED 00:06:56.924 free 0x200000a00000 4194304 00:06:56.924 unregister 0x200000800000 6291456 PASSED 00:06:56.924 malloc 8388608 00:06:56.924 register 0x200000400000 10485760 00:06:56.924 buf 0x200000600000 len 8388608 PASSED 00:06:56.924 free 0x200000600000 8388608 00:06:56.924 unregister 0x200000400000 10485760 PASSED 00:06:56.924 passed 00:06:56.924 00:06:56.924 Run Summary: Type Total Ran Passed Failed Inactive 00:06:56.924 suites 1 1 n/a 0 0 00:06:56.924 tests 1 1 1 0 0 00:06:56.924 asserts 15 15 15 0 n/a 00:06:56.924 00:06:56.924 Elapsed time = 0.009 seconds 00:06:56.924 00:06:56.924 real 0m0.148s 00:06:56.924 user 0m0.019s 00:06:56.924 sys 0m0.028s 00:06:56.924 11:37:26 env.env_mem_callbacks -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:56.924 11:37:26 env.env_mem_callbacks -- common/autotest_common.sh@10 -- # set +x 00:06:56.924 ************************************ 00:06:56.924 END TEST env_mem_callbacks 00:06:56.924 ************************************ 00:06:56.924 00:06:56.924 real 0m2.555s 00:06:56.924 user 0m1.354s 00:06:56.924 sys 0m0.854s 00:06:56.924 11:37:26 env -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:56.924 ************************************ 00:06:56.924 END TEST env 00:06:56.924 ************************************ 00:06:56.924 11:37:26 env -- common/autotest_common.sh@10 -- # set +x 00:06:56.924 11:37:26 -- spdk/autotest.sh@156 -- # run_test rpc /home/vagrant/spdk_repo/spdk/test/rpc/rpc.sh 00:06:56.924 11:37:26 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:06:56.924 11:37:26 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:56.924 11:37:26 -- common/autotest_common.sh@10 -- # set +x 00:06:56.924 ************************************ 00:06:56.924 START TEST rpc 00:06:56.924 ************************************ 00:06:56.924 11:37:27 rpc -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/rpc/rpc.sh 00:06:57.183 * Looking for test storage... 
00:06:57.183 * Found test storage at /home/vagrant/spdk_repo/spdk/test/rpc 00:06:57.183 11:37:27 rpc -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:06:57.183 11:37:27 rpc -- common/autotest_common.sh@1693 -- # lcov --version 00:06:57.183 11:37:27 rpc -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:06:57.183 11:37:27 rpc -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:06:57.183 11:37:27 rpc -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:06:57.183 11:37:27 rpc -- scripts/common.sh@333 -- # local ver1 ver1_l 00:06:57.183 11:37:27 rpc -- scripts/common.sh@334 -- # local ver2 ver2_l 00:06:57.183 11:37:27 rpc -- scripts/common.sh@336 -- # IFS=.-: 00:06:57.183 11:37:27 rpc -- scripts/common.sh@336 -- # read -ra ver1 00:06:57.183 11:37:27 rpc -- scripts/common.sh@337 -- # IFS=.-: 00:06:57.183 11:37:27 rpc -- scripts/common.sh@337 -- # read -ra ver2 00:06:57.183 11:37:27 rpc -- scripts/common.sh@338 -- # local 'op=<' 00:06:57.183 11:37:27 rpc -- scripts/common.sh@340 -- # ver1_l=2 00:06:57.184 11:37:27 rpc -- scripts/common.sh@341 -- # ver2_l=1 00:06:57.184 11:37:27 rpc -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:06:57.184 11:37:27 rpc -- scripts/common.sh@344 -- # case "$op" in 00:06:57.184 11:37:27 rpc -- scripts/common.sh@345 -- # : 1 00:06:57.184 11:37:27 rpc -- scripts/common.sh@364 -- # (( v = 0 )) 00:06:57.184 11:37:27 rpc -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:06:57.184 11:37:27 rpc -- scripts/common.sh@365 -- # decimal 1 00:06:57.184 11:37:27 rpc -- scripts/common.sh@353 -- # local d=1 00:06:57.184 11:37:27 rpc -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:06:57.184 11:37:27 rpc -- scripts/common.sh@355 -- # echo 1 00:06:57.184 11:37:27 rpc -- scripts/common.sh@365 -- # ver1[v]=1 00:06:57.184 11:37:27 rpc -- scripts/common.sh@366 -- # decimal 2 00:06:57.184 11:37:27 rpc -- scripts/common.sh@353 -- # local d=2 00:06:57.184 11:37:27 rpc -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:06:57.184 11:37:27 rpc -- scripts/common.sh@355 -- # echo 2 00:06:57.184 11:37:27 rpc -- scripts/common.sh@366 -- # ver2[v]=2 00:06:57.184 11:37:27 rpc -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:06:57.184 11:37:27 rpc -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:06:57.184 11:37:27 rpc -- scripts/common.sh@368 -- # return 0 00:06:57.184 11:37:27 rpc -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:06:57.184 11:37:27 rpc -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:06:57.184 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:57.184 --rc genhtml_branch_coverage=1 00:06:57.184 --rc genhtml_function_coverage=1 00:06:57.184 --rc genhtml_legend=1 00:06:57.184 --rc geninfo_all_blocks=1 00:06:57.184 --rc geninfo_unexecuted_blocks=1 00:06:57.184 00:06:57.184 ' 00:06:57.184 11:37:27 rpc -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:06:57.184 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:57.184 --rc genhtml_branch_coverage=1 00:06:57.184 --rc genhtml_function_coverage=1 00:06:57.184 --rc genhtml_legend=1 00:06:57.184 --rc geninfo_all_blocks=1 00:06:57.184 --rc geninfo_unexecuted_blocks=1 00:06:57.184 00:06:57.184 ' 00:06:57.184 11:37:27 rpc -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:06:57.184 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:57.184 --rc genhtml_branch_coverage=1 00:06:57.184 --rc genhtml_function_coverage=1 00:06:57.184 --rc 
genhtml_legend=1 00:06:57.184 --rc geninfo_all_blocks=1 00:06:57.184 --rc geninfo_unexecuted_blocks=1 00:06:57.184 00:06:57.184 ' 00:06:57.184 11:37:27 rpc -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:06:57.184 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:57.184 --rc genhtml_branch_coverage=1 00:06:57.184 --rc genhtml_function_coverage=1 00:06:57.184 --rc genhtml_legend=1 00:06:57.184 --rc geninfo_all_blocks=1 00:06:57.184 --rc geninfo_unexecuted_blocks=1 00:06:57.184 00:06:57.184 ' 00:06:57.184 11:37:27 rpc -- rpc/rpc.sh@65 -- # spdk_pid=70846 00:06:57.184 11:37:27 rpc -- rpc/rpc.sh@66 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:06:57.184 11:37:27 rpc -- rpc/rpc.sh@64 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -e bdev 00:06:57.184 11:37:27 rpc -- rpc/rpc.sh@67 -- # waitforlisten 70846 00:06:57.184 11:37:27 rpc -- common/autotest_common.sh@835 -- # '[' -z 70846 ']' 00:06:57.184 11:37:27 rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:57.184 11:37:27 rpc -- common/autotest_common.sh@840 -- # local max_retries=100 00:06:57.184 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:57.184 11:37:27 rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:57.184 11:37:27 rpc -- common/autotest_common.sh@844 -- # xtrace_disable 00:06:57.184 11:37:27 rpc -- common/autotest_common.sh@10 -- # set +x 00:06:57.444 [2024-11-28 11:37:27.316366] Starting SPDK v25.01-pre git sha1 35cd3e84d / DPDK 24.11.0-rc4 initialization... 00:06:57.444 [2024-11-28 11:37:27.316516] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid70846 ] 00:06:57.444 [2024-11-28 11:37:27.449760] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.11.0-rc4 is used. There is no support for it in SPDK. Enabled only for validation. 00:06:57.444 [2024-11-28 11:37:27.479981] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:57.444 [2024-11-28 11:37:27.522653] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask bdev specified. 00:06:57.444 [2024-11-28 11:37:27.522753] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s spdk_tgt -p 70846' to capture a snapshot of events at runtime. 00:06:57.444 [2024-11-28 11:37:27.522767] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:06:57.444 [2024-11-28 11:37:27.522777] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:06:57.444 [2024-11-28 11:37:27.522786] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/spdk_tgt_trace.pid70846 for offline analysis/debug. 
00:06:57.444 [2024-11-28 11:37:27.523286] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:57.704 [2024-11-28 11:37:27.593251] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:06:57.704 11:37:27 rpc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:06:57.704 11:37:27 rpc -- common/autotest_common.sh@868 -- # return 0 00:06:57.704 11:37:27 rpc -- rpc/rpc.sh@69 -- # export PYTHONPATH=:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/test/rpc 00:06:57.704 11:37:27 rpc -- rpc/rpc.sh@69 -- # PYTHONPATH=:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/test/rpc 00:06:57.704 11:37:27 rpc -- rpc/rpc.sh@72 -- # rpc=rpc_cmd 00:06:57.704 11:37:27 rpc -- rpc/rpc.sh@73 -- # run_test rpc_integrity rpc_integrity 00:06:57.704 11:37:27 rpc -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:06:57.704 11:37:27 rpc -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:57.704 11:37:27 rpc -- common/autotest_common.sh@10 -- # set +x 00:06:57.704 ************************************ 00:06:57.704 START TEST rpc_integrity 00:06:57.704 ************************************ 00:06:57.704 11:37:27 rpc.rpc_integrity -- common/autotest_common.sh@1129 -- # rpc_integrity 00:06:57.704 11:37:27 rpc.rpc_integrity -- rpc/rpc.sh@12 -- # rpc_cmd bdev_get_bdevs 00:06:57.704 11:37:27 rpc.rpc_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:57.704 11:37:27 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:06:57.704 11:37:27 rpc.rpc_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:57.704 11:37:27 rpc.rpc_integrity -- rpc/rpc.sh@12 -- # bdevs='[]' 00:06:57.704 11:37:27 rpc.rpc_integrity -- rpc/rpc.sh@13 -- # jq length 00:06:57.964 11:37:27 rpc.rpc_integrity -- rpc/rpc.sh@13 -- # '[' 0 == 0 ']' 00:06:57.964 11:37:27 rpc.rpc_integrity -- rpc/rpc.sh@15 -- # rpc_cmd bdev_malloc_create 8 512 00:06:57.964 11:37:27 rpc.rpc_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:57.964 11:37:27 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:06:57.964 11:37:27 rpc.rpc_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:57.964 11:37:27 rpc.rpc_integrity -- rpc/rpc.sh@15 -- # malloc=Malloc0 00:06:57.964 11:37:27 rpc.rpc_integrity -- rpc/rpc.sh@16 -- # rpc_cmd bdev_get_bdevs 00:06:57.964 11:37:27 rpc.rpc_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:57.964 11:37:27 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:06:57.964 11:37:27 rpc.rpc_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:57.964 11:37:27 rpc.rpc_integrity -- rpc/rpc.sh@16 -- # bdevs='[ 00:06:57.964 { 00:06:57.964 "name": "Malloc0", 00:06:57.964 "aliases": [ 00:06:57.964 "99223e08-f54e-4388-bb82-45728251ac44" 00:06:57.964 ], 00:06:57.964 "product_name": "Malloc disk", 00:06:57.964 "block_size": 512, 00:06:57.964 "num_blocks": 16384, 00:06:57.964 "uuid": "99223e08-f54e-4388-bb82-45728251ac44", 00:06:57.964 "assigned_rate_limits": { 00:06:57.964 "rw_ios_per_sec": 0, 00:06:57.964 "rw_mbytes_per_sec": 0, 00:06:57.964 "r_mbytes_per_sec": 0, 00:06:57.964 "w_mbytes_per_sec": 0 00:06:57.964 }, 00:06:57.964 "claimed": false, 00:06:57.964 "zoned": false, 00:06:57.964 
"supported_io_types": { 00:06:57.964 "read": true, 00:06:57.964 "write": true, 00:06:57.964 "unmap": true, 00:06:57.964 "flush": true, 00:06:57.964 "reset": true, 00:06:57.964 "nvme_admin": false, 00:06:57.964 "nvme_io": false, 00:06:57.964 "nvme_io_md": false, 00:06:57.964 "write_zeroes": true, 00:06:57.964 "zcopy": true, 00:06:57.964 "get_zone_info": false, 00:06:57.964 "zone_management": false, 00:06:57.964 "zone_append": false, 00:06:57.964 "compare": false, 00:06:57.964 "compare_and_write": false, 00:06:57.964 "abort": true, 00:06:57.964 "seek_hole": false, 00:06:57.964 "seek_data": false, 00:06:57.964 "copy": true, 00:06:57.964 "nvme_iov_md": false 00:06:57.964 }, 00:06:57.964 "memory_domains": [ 00:06:57.964 { 00:06:57.964 "dma_device_id": "system", 00:06:57.964 "dma_device_type": 1 00:06:57.964 }, 00:06:57.964 { 00:06:57.964 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:06:57.964 "dma_device_type": 2 00:06:57.964 } 00:06:57.964 ], 00:06:57.964 "driver_specific": {} 00:06:57.964 } 00:06:57.964 ]' 00:06:57.964 11:37:27 rpc.rpc_integrity -- rpc/rpc.sh@17 -- # jq length 00:06:57.964 11:37:27 rpc.rpc_integrity -- rpc/rpc.sh@17 -- # '[' 1 == 1 ']' 00:06:57.964 11:37:27 rpc.rpc_integrity -- rpc/rpc.sh@19 -- # rpc_cmd bdev_passthru_create -b Malloc0 -p Passthru0 00:06:57.964 11:37:27 rpc.rpc_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:57.964 11:37:27 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:06:57.964 [2024-11-28 11:37:27.962012] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on Malloc0 00:06:57.964 [2024-11-28 11:37:27.962483] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:06:57.964 [2024-11-28 11:37:27.962515] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0xf0d030 00:06:57.964 [2024-11-28 11:37:27.962528] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:06:57.964 [2024-11-28 11:37:27.964126] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:06:57.964 [2024-11-28 11:37:27.964158] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: Passthru0 00:06:57.964 Passthru0 00:06:57.964 11:37:27 rpc.rpc_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:57.964 11:37:27 rpc.rpc_integrity -- rpc/rpc.sh@20 -- # rpc_cmd bdev_get_bdevs 00:06:57.964 11:37:27 rpc.rpc_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:57.964 11:37:27 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:06:57.964 11:37:27 rpc.rpc_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:57.964 11:37:27 rpc.rpc_integrity -- rpc/rpc.sh@20 -- # bdevs='[ 00:06:57.964 { 00:06:57.964 "name": "Malloc0", 00:06:57.964 "aliases": [ 00:06:57.964 "99223e08-f54e-4388-bb82-45728251ac44" 00:06:57.964 ], 00:06:57.964 "product_name": "Malloc disk", 00:06:57.964 "block_size": 512, 00:06:57.964 "num_blocks": 16384, 00:06:57.964 "uuid": "99223e08-f54e-4388-bb82-45728251ac44", 00:06:57.964 "assigned_rate_limits": { 00:06:57.964 "rw_ios_per_sec": 0, 00:06:57.964 "rw_mbytes_per_sec": 0, 00:06:57.964 "r_mbytes_per_sec": 0, 00:06:57.964 "w_mbytes_per_sec": 0 00:06:57.964 }, 00:06:57.964 "claimed": true, 00:06:57.964 "claim_type": "exclusive_write", 00:06:57.964 "zoned": false, 00:06:57.964 "supported_io_types": { 00:06:57.964 "read": true, 00:06:57.964 "write": true, 00:06:57.964 "unmap": true, 00:06:57.964 "flush": true, 00:06:57.964 "reset": true, 00:06:57.964 "nvme_admin": false, 
00:06:57.964 "nvme_io": false, 00:06:57.964 "nvme_io_md": false, 00:06:57.964 "write_zeroes": true, 00:06:57.964 "zcopy": true, 00:06:57.964 "get_zone_info": false, 00:06:57.964 "zone_management": false, 00:06:57.964 "zone_append": false, 00:06:57.964 "compare": false, 00:06:57.964 "compare_and_write": false, 00:06:57.964 "abort": true, 00:06:57.964 "seek_hole": false, 00:06:57.964 "seek_data": false, 00:06:57.964 "copy": true, 00:06:57.965 "nvme_iov_md": false 00:06:57.965 }, 00:06:57.965 "memory_domains": [ 00:06:57.965 { 00:06:57.965 "dma_device_id": "system", 00:06:57.965 "dma_device_type": 1 00:06:57.965 }, 00:06:57.965 { 00:06:57.965 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:06:57.965 "dma_device_type": 2 00:06:57.965 } 00:06:57.965 ], 00:06:57.965 "driver_specific": {} 00:06:57.965 }, 00:06:57.965 { 00:06:57.965 "name": "Passthru0", 00:06:57.965 "aliases": [ 00:06:57.965 "0d1ae657-c77a-557a-9cea-340bc2cd62f6" 00:06:57.965 ], 00:06:57.965 "product_name": "passthru", 00:06:57.965 "block_size": 512, 00:06:57.965 "num_blocks": 16384, 00:06:57.965 "uuid": "0d1ae657-c77a-557a-9cea-340bc2cd62f6", 00:06:57.965 "assigned_rate_limits": { 00:06:57.965 "rw_ios_per_sec": 0, 00:06:57.965 "rw_mbytes_per_sec": 0, 00:06:57.965 "r_mbytes_per_sec": 0, 00:06:57.965 "w_mbytes_per_sec": 0 00:06:57.965 }, 00:06:57.965 "claimed": false, 00:06:57.965 "zoned": false, 00:06:57.965 "supported_io_types": { 00:06:57.965 "read": true, 00:06:57.965 "write": true, 00:06:57.965 "unmap": true, 00:06:57.965 "flush": true, 00:06:57.965 "reset": true, 00:06:57.965 "nvme_admin": false, 00:06:57.965 "nvme_io": false, 00:06:57.965 "nvme_io_md": false, 00:06:57.965 "write_zeroes": true, 00:06:57.965 "zcopy": true, 00:06:57.965 "get_zone_info": false, 00:06:57.965 "zone_management": false, 00:06:57.965 "zone_append": false, 00:06:57.965 "compare": false, 00:06:57.965 "compare_and_write": false, 00:06:57.965 "abort": true, 00:06:57.965 "seek_hole": false, 00:06:57.965 "seek_data": false, 00:06:57.965 "copy": true, 00:06:57.965 "nvme_iov_md": false 00:06:57.965 }, 00:06:57.965 "memory_domains": [ 00:06:57.965 { 00:06:57.965 "dma_device_id": "system", 00:06:57.965 "dma_device_type": 1 00:06:57.965 }, 00:06:57.965 { 00:06:57.965 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:06:57.965 "dma_device_type": 2 00:06:57.965 } 00:06:57.965 ], 00:06:57.965 "driver_specific": { 00:06:57.965 "passthru": { 00:06:57.965 "name": "Passthru0", 00:06:57.965 "base_bdev_name": "Malloc0" 00:06:57.965 } 00:06:57.965 } 00:06:57.965 } 00:06:57.965 ]' 00:06:57.965 11:37:27 rpc.rpc_integrity -- rpc/rpc.sh@21 -- # jq length 00:06:57.965 11:37:28 rpc.rpc_integrity -- rpc/rpc.sh@21 -- # '[' 2 == 2 ']' 00:06:57.965 11:37:28 rpc.rpc_integrity -- rpc/rpc.sh@23 -- # rpc_cmd bdev_passthru_delete Passthru0 00:06:57.965 11:37:28 rpc.rpc_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:57.965 11:37:28 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:06:57.965 11:37:28 rpc.rpc_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:57.965 11:37:28 rpc.rpc_integrity -- rpc/rpc.sh@24 -- # rpc_cmd bdev_malloc_delete Malloc0 00:06:57.965 11:37:28 rpc.rpc_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:57.965 11:37:28 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:06:57.965 11:37:28 rpc.rpc_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:57.965 11:37:28 rpc.rpc_integrity -- rpc/rpc.sh@25 -- # rpc_cmd bdev_get_bdevs 00:06:57.965 11:37:28 rpc.rpc_integrity -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:06:57.965 11:37:28 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:06:57.965 11:37:28 rpc.rpc_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:57.965 11:37:28 rpc.rpc_integrity -- rpc/rpc.sh@25 -- # bdevs='[]' 00:06:57.965 11:37:28 rpc.rpc_integrity -- rpc/rpc.sh@26 -- # jq length 00:06:58.224 ************************************ 00:06:58.224 END TEST rpc_integrity 00:06:58.224 ************************************ 00:06:58.224 11:37:28 rpc.rpc_integrity -- rpc/rpc.sh@26 -- # '[' 0 == 0 ']' 00:06:58.224 00:06:58.224 real 0m0.328s 00:06:58.224 user 0m0.222s 00:06:58.224 sys 0m0.039s 00:06:58.224 11:37:28 rpc.rpc_integrity -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:58.224 11:37:28 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:06:58.224 11:37:28 rpc -- rpc/rpc.sh@74 -- # run_test rpc_plugins rpc_plugins 00:06:58.224 11:37:28 rpc -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:06:58.224 11:37:28 rpc -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:58.224 11:37:28 rpc -- common/autotest_common.sh@10 -- # set +x 00:06:58.224 ************************************ 00:06:58.224 START TEST rpc_plugins 00:06:58.224 ************************************ 00:06:58.224 11:37:28 rpc.rpc_plugins -- common/autotest_common.sh@1129 -- # rpc_plugins 00:06:58.224 11:37:28 rpc.rpc_plugins -- rpc/rpc.sh@30 -- # rpc_cmd --plugin rpc_plugin create_malloc 00:06:58.224 11:37:28 rpc.rpc_plugins -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:58.224 11:37:28 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:06:58.224 11:37:28 rpc.rpc_plugins -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:58.225 11:37:28 rpc.rpc_plugins -- rpc/rpc.sh@30 -- # malloc=Malloc1 00:06:58.225 11:37:28 rpc.rpc_plugins -- rpc/rpc.sh@31 -- # rpc_cmd bdev_get_bdevs 00:06:58.225 11:37:28 rpc.rpc_plugins -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:58.225 11:37:28 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:06:58.225 11:37:28 rpc.rpc_plugins -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:58.225 11:37:28 rpc.rpc_plugins -- rpc/rpc.sh@31 -- # bdevs='[ 00:06:58.225 { 00:06:58.225 "name": "Malloc1", 00:06:58.225 "aliases": [ 00:06:58.225 "474df6c9-61e6-4d57-a81c-8d05a279aff0" 00:06:58.225 ], 00:06:58.225 "product_name": "Malloc disk", 00:06:58.225 "block_size": 4096, 00:06:58.225 "num_blocks": 256, 00:06:58.225 "uuid": "474df6c9-61e6-4d57-a81c-8d05a279aff0", 00:06:58.225 "assigned_rate_limits": { 00:06:58.225 "rw_ios_per_sec": 0, 00:06:58.225 "rw_mbytes_per_sec": 0, 00:06:58.225 "r_mbytes_per_sec": 0, 00:06:58.225 "w_mbytes_per_sec": 0 00:06:58.225 }, 00:06:58.225 "claimed": false, 00:06:58.225 "zoned": false, 00:06:58.225 "supported_io_types": { 00:06:58.225 "read": true, 00:06:58.225 "write": true, 00:06:58.225 "unmap": true, 00:06:58.225 "flush": true, 00:06:58.225 "reset": true, 00:06:58.225 "nvme_admin": false, 00:06:58.225 "nvme_io": false, 00:06:58.225 "nvme_io_md": false, 00:06:58.225 "write_zeroes": true, 00:06:58.225 "zcopy": true, 00:06:58.225 "get_zone_info": false, 00:06:58.225 "zone_management": false, 00:06:58.225 "zone_append": false, 00:06:58.225 "compare": false, 00:06:58.225 "compare_and_write": false, 00:06:58.225 "abort": true, 00:06:58.225 "seek_hole": false, 00:06:58.225 "seek_data": false, 00:06:58.225 "copy": true, 00:06:58.225 "nvme_iov_md": false 00:06:58.225 }, 00:06:58.225 "memory_domains": [ 00:06:58.225 { 
00:06:58.225 "dma_device_id": "system", 00:06:58.225 "dma_device_type": 1 00:06:58.225 }, 00:06:58.225 { 00:06:58.225 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:06:58.225 "dma_device_type": 2 00:06:58.225 } 00:06:58.225 ], 00:06:58.225 "driver_specific": {} 00:06:58.225 } 00:06:58.225 ]' 00:06:58.225 11:37:28 rpc.rpc_plugins -- rpc/rpc.sh@32 -- # jq length 00:06:58.225 11:37:28 rpc.rpc_plugins -- rpc/rpc.sh@32 -- # '[' 1 == 1 ']' 00:06:58.225 11:37:28 rpc.rpc_plugins -- rpc/rpc.sh@34 -- # rpc_cmd --plugin rpc_plugin delete_malloc Malloc1 00:06:58.225 11:37:28 rpc.rpc_plugins -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:58.225 11:37:28 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:06:58.225 11:37:28 rpc.rpc_plugins -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:58.225 11:37:28 rpc.rpc_plugins -- rpc/rpc.sh@35 -- # rpc_cmd bdev_get_bdevs 00:06:58.225 11:37:28 rpc.rpc_plugins -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:58.225 11:37:28 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:06:58.225 11:37:28 rpc.rpc_plugins -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:58.225 11:37:28 rpc.rpc_plugins -- rpc/rpc.sh@35 -- # bdevs='[]' 00:06:58.225 11:37:28 rpc.rpc_plugins -- rpc/rpc.sh@36 -- # jq length 00:06:58.484 ************************************ 00:06:58.484 END TEST rpc_plugins 00:06:58.484 ************************************ 00:06:58.484 11:37:28 rpc.rpc_plugins -- rpc/rpc.sh@36 -- # '[' 0 == 0 ']' 00:06:58.484 00:06:58.484 real 0m0.195s 00:06:58.484 user 0m0.130s 00:06:58.484 sys 0m0.024s 00:06:58.484 11:37:28 rpc.rpc_plugins -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:58.484 11:37:28 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:06:58.484 11:37:28 rpc -- rpc/rpc.sh@75 -- # run_test rpc_trace_cmd_test rpc_trace_cmd_test 00:06:58.484 11:37:28 rpc -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:06:58.484 11:37:28 rpc -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:58.484 11:37:28 rpc -- common/autotest_common.sh@10 -- # set +x 00:06:58.484 ************************************ 00:06:58.484 START TEST rpc_trace_cmd_test 00:06:58.484 ************************************ 00:06:58.484 11:37:28 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@1129 -- # rpc_trace_cmd_test 00:06:58.484 11:37:28 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@40 -- # local info 00:06:58.484 11:37:28 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@42 -- # rpc_cmd trace_get_info 00:06:58.484 11:37:28 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:58.484 11:37:28 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@10 -- # set +x 00:06:58.484 11:37:28 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:58.484 11:37:28 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@42 -- # info='{ 00:06:58.484 "tpoint_shm_path": "/dev/shm/spdk_tgt_trace.pid70846", 00:06:58.484 "tpoint_group_mask": "0x8", 00:06:58.484 "iscsi_conn": { 00:06:58.484 "mask": "0x2", 00:06:58.484 "tpoint_mask": "0x0" 00:06:58.484 }, 00:06:58.484 "scsi": { 00:06:58.484 "mask": "0x4", 00:06:58.484 "tpoint_mask": "0x0" 00:06:58.484 }, 00:06:58.484 "bdev": { 00:06:58.484 "mask": "0x8", 00:06:58.484 "tpoint_mask": "0xffffffffffffffff" 00:06:58.484 }, 00:06:58.484 "nvmf_rdma": { 00:06:58.484 "mask": "0x10", 00:06:58.484 "tpoint_mask": "0x0" 00:06:58.484 }, 00:06:58.484 "nvmf_tcp": { 00:06:58.484 "mask": "0x20", 00:06:58.484 "tpoint_mask": "0x0" 00:06:58.484 }, 00:06:58.484 "ftl": { 00:06:58.484 
"mask": "0x40", 00:06:58.484 "tpoint_mask": "0x0" 00:06:58.484 }, 00:06:58.484 "blobfs": { 00:06:58.484 "mask": "0x80", 00:06:58.484 "tpoint_mask": "0x0" 00:06:58.484 }, 00:06:58.484 "dsa": { 00:06:58.484 "mask": "0x200", 00:06:58.484 "tpoint_mask": "0x0" 00:06:58.484 }, 00:06:58.484 "thread": { 00:06:58.484 "mask": "0x400", 00:06:58.484 "tpoint_mask": "0x0" 00:06:58.484 }, 00:06:58.484 "nvme_pcie": { 00:06:58.484 "mask": "0x800", 00:06:58.484 "tpoint_mask": "0x0" 00:06:58.484 }, 00:06:58.484 "iaa": { 00:06:58.484 "mask": "0x1000", 00:06:58.484 "tpoint_mask": "0x0" 00:06:58.484 }, 00:06:58.484 "nvme_tcp": { 00:06:58.484 "mask": "0x2000", 00:06:58.484 "tpoint_mask": "0x0" 00:06:58.484 }, 00:06:58.484 "bdev_nvme": { 00:06:58.484 "mask": "0x4000", 00:06:58.484 "tpoint_mask": "0x0" 00:06:58.484 }, 00:06:58.484 "sock": { 00:06:58.484 "mask": "0x8000", 00:06:58.484 "tpoint_mask": "0x0" 00:06:58.484 }, 00:06:58.484 "blob": { 00:06:58.484 "mask": "0x10000", 00:06:58.484 "tpoint_mask": "0x0" 00:06:58.484 }, 00:06:58.484 "bdev_raid": { 00:06:58.484 "mask": "0x20000", 00:06:58.484 "tpoint_mask": "0x0" 00:06:58.484 }, 00:06:58.484 "scheduler": { 00:06:58.484 "mask": "0x40000", 00:06:58.484 "tpoint_mask": "0x0" 00:06:58.484 } 00:06:58.484 }' 00:06:58.484 11:37:28 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@43 -- # jq length 00:06:58.484 11:37:28 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@43 -- # '[' 19 -gt 2 ']' 00:06:58.484 11:37:28 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@44 -- # jq 'has("tpoint_group_mask")' 00:06:58.484 11:37:28 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@44 -- # '[' true = true ']' 00:06:58.484 11:37:28 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@45 -- # jq 'has("tpoint_shm_path")' 00:06:58.744 11:37:28 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@45 -- # '[' true = true ']' 00:06:58.744 11:37:28 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@46 -- # jq 'has("bdev")' 00:06:58.744 11:37:28 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@46 -- # '[' true = true ']' 00:06:58.744 11:37:28 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@47 -- # jq -r .bdev.tpoint_mask 00:06:58.744 ************************************ 00:06:58.744 END TEST rpc_trace_cmd_test 00:06:58.744 ************************************ 00:06:58.744 11:37:28 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@47 -- # '[' 0xffffffffffffffff '!=' 0x0 ']' 00:06:58.744 00:06:58.744 real 0m0.284s 00:06:58.744 user 0m0.241s 00:06:58.744 sys 0m0.033s 00:06:58.744 11:37:28 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:58.744 11:37:28 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@10 -- # set +x 00:06:58.744 11:37:28 rpc -- rpc/rpc.sh@76 -- # [[ 0 -eq 1 ]] 00:06:58.744 11:37:28 rpc -- rpc/rpc.sh@80 -- # rpc=rpc_cmd 00:06:58.744 11:37:28 rpc -- rpc/rpc.sh@81 -- # run_test rpc_daemon_integrity rpc_integrity 00:06:58.744 11:37:28 rpc -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:06:58.744 11:37:28 rpc -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:58.744 11:37:28 rpc -- common/autotest_common.sh@10 -- # set +x 00:06:58.744 ************************************ 00:06:58.744 START TEST rpc_daemon_integrity 00:06:58.744 ************************************ 00:06:58.744 11:37:28 rpc.rpc_daemon_integrity -- common/autotest_common.sh@1129 -- # rpc_integrity 00:06:58.744 11:37:28 rpc.rpc_daemon_integrity -- rpc/rpc.sh@12 -- # rpc_cmd bdev_get_bdevs 00:06:58.744 11:37:28 rpc.rpc_daemon_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:58.744 11:37:28 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:06:58.744 
11:37:28 rpc.rpc_daemon_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:58.744 11:37:28 rpc.rpc_daemon_integrity -- rpc/rpc.sh@12 -- # bdevs='[]' 00:06:58.744 11:37:28 rpc.rpc_daemon_integrity -- rpc/rpc.sh@13 -- # jq length 00:06:58.744 11:37:28 rpc.rpc_daemon_integrity -- rpc/rpc.sh@13 -- # '[' 0 == 0 ']' 00:06:58.744 11:37:28 rpc.rpc_daemon_integrity -- rpc/rpc.sh@15 -- # rpc_cmd bdev_malloc_create 8 512 00:06:58.744 11:37:28 rpc.rpc_daemon_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:58.744 11:37:28 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:06:58.744 11:37:28 rpc.rpc_daemon_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:58.744 11:37:28 rpc.rpc_daemon_integrity -- rpc/rpc.sh@15 -- # malloc=Malloc2 00:06:58.744 11:37:28 rpc.rpc_daemon_integrity -- rpc/rpc.sh@16 -- # rpc_cmd bdev_get_bdevs 00:06:58.744 11:37:28 rpc.rpc_daemon_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:58.744 11:37:28 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:06:58.744 11:37:28 rpc.rpc_daemon_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:58.744 11:37:28 rpc.rpc_daemon_integrity -- rpc/rpc.sh@16 -- # bdevs='[ 00:06:58.744 { 00:06:58.744 "name": "Malloc2", 00:06:58.744 "aliases": [ 00:06:58.744 "7045a1d6-77c9-40ec-a67f-aa59e0a9e728" 00:06:58.744 ], 00:06:58.744 "product_name": "Malloc disk", 00:06:58.744 "block_size": 512, 00:06:58.744 "num_blocks": 16384, 00:06:58.744 "uuid": "7045a1d6-77c9-40ec-a67f-aa59e0a9e728", 00:06:58.744 "assigned_rate_limits": { 00:06:58.744 "rw_ios_per_sec": 0, 00:06:58.744 "rw_mbytes_per_sec": 0, 00:06:58.744 "r_mbytes_per_sec": 0, 00:06:58.744 "w_mbytes_per_sec": 0 00:06:58.744 }, 00:06:58.744 "claimed": false, 00:06:58.744 "zoned": false, 00:06:58.744 "supported_io_types": { 00:06:58.744 "read": true, 00:06:58.744 "write": true, 00:06:58.744 "unmap": true, 00:06:58.744 "flush": true, 00:06:58.744 "reset": true, 00:06:58.744 "nvme_admin": false, 00:06:58.744 "nvme_io": false, 00:06:58.744 "nvme_io_md": false, 00:06:58.744 "write_zeroes": true, 00:06:58.744 "zcopy": true, 00:06:58.744 "get_zone_info": false, 00:06:58.744 "zone_management": false, 00:06:58.744 "zone_append": false, 00:06:58.744 "compare": false, 00:06:58.744 "compare_and_write": false, 00:06:58.744 "abort": true, 00:06:58.744 "seek_hole": false, 00:06:58.744 "seek_data": false, 00:06:58.744 "copy": true, 00:06:58.744 "nvme_iov_md": false 00:06:58.744 }, 00:06:58.744 "memory_domains": [ 00:06:58.744 { 00:06:58.744 "dma_device_id": "system", 00:06:58.744 "dma_device_type": 1 00:06:58.744 }, 00:06:58.744 { 00:06:58.744 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:06:58.744 "dma_device_type": 2 00:06:58.744 } 00:06:58.744 ], 00:06:58.744 "driver_specific": {} 00:06:58.744 } 00:06:58.744 ]' 00:06:58.744 11:37:28 rpc.rpc_daemon_integrity -- rpc/rpc.sh@17 -- # jq length 00:06:59.002 11:37:28 rpc.rpc_daemon_integrity -- rpc/rpc.sh@17 -- # '[' 1 == 1 ']' 00:06:59.002 11:37:28 rpc.rpc_daemon_integrity -- rpc/rpc.sh@19 -- # rpc_cmd bdev_passthru_create -b Malloc2 -p Passthru0 00:06:59.002 11:37:28 rpc.rpc_daemon_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:59.002 11:37:28 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:06:59.002 [2024-11-28 11:37:28.915234] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on Malloc2 00:06:59.002 [2024-11-28 11:37:28.915337] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: 
base bdev opened 00:06:59.002 [2024-11-28 11:37:28.915360] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0xf02c20 00:06:59.002 [2024-11-28 11:37:28.915370] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:06:59.002 [2024-11-28 11:37:28.916715] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:06:59.002 [2024-11-28 11:37:28.916749] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: Passthru0 00:06:59.002 Passthru0 00:06:59.002 11:37:28 rpc.rpc_daemon_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:59.002 11:37:28 rpc.rpc_daemon_integrity -- rpc/rpc.sh@20 -- # rpc_cmd bdev_get_bdevs 00:06:59.002 11:37:28 rpc.rpc_daemon_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:59.002 11:37:28 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:06:59.002 11:37:28 rpc.rpc_daemon_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:59.002 11:37:28 rpc.rpc_daemon_integrity -- rpc/rpc.sh@20 -- # bdevs='[ 00:06:59.002 { 00:06:59.002 "name": "Malloc2", 00:06:59.002 "aliases": [ 00:06:59.002 "7045a1d6-77c9-40ec-a67f-aa59e0a9e728" 00:06:59.002 ], 00:06:59.002 "product_name": "Malloc disk", 00:06:59.002 "block_size": 512, 00:06:59.002 "num_blocks": 16384, 00:06:59.002 "uuid": "7045a1d6-77c9-40ec-a67f-aa59e0a9e728", 00:06:59.002 "assigned_rate_limits": { 00:06:59.002 "rw_ios_per_sec": 0, 00:06:59.002 "rw_mbytes_per_sec": 0, 00:06:59.002 "r_mbytes_per_sec": 0, 00:06:59.002 "w_mbytes_per_sec": 0 00:06:59.002 }, 00:06:59.002 "claimed": true, 00:06:59.002 "claim_type": "exclusive_write", 00:06:59.002 "zoned": false, 00:06:59.002 "supported_io_types": { 00:06:59.002 "read": true, 00:06:59.002 "write": true, 00:06:59.002 "unmap": true, 00:06:59.002 "flush": true, 00:06:59.002 "reset": true, 00:06:59.002 "nvme_admin": false, 00:06:59.002 "nvme_io": false, 00:06:59.002 "nvme_io_md": false, 00:06:59.002 "write_zeroes": true, 00:06:59.002 "zcopy": true, 00:06:59.002 "get_zone_info": false, 00:06:59.002 "zone_management": false, 00:06:59.002 "zone_append": false, 00:06:59.002 "compare": false, 00:06:59.002 "compare_and_write": false, 00:06:59.002 "abort": true, 00:06:59.002 "seek_hole": false, 00:06:59.002 "seek_data": false, 00:06:59.002 "copy": true, 00:06:59.002 "nvme_iov_md": false 00:06:59.002 }, 00:06:59.002 "memory_domains": [ 00:06:59.002 { 00:06:59.002 "dma_device_id": "system", 00:06:59.002 "dma_device_type": 1 00:06:59.002 }, 00:06:59.002 { 00:06:59.002 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:06:59.002 "dma_device_type": 2 00:06:59.002 } 00:06:59.002 ], 00:06:59.002 "driver_specific": {} 00:06:59.002 }, 00:06:59.002 { 00:06:59.002 "name": "Passthru0", 00:06:59.002 "aliases": [ 00:06:59.002 "e0ccb3ab-84f9-5d65-98d6-9ebff5d6feee" 00:06:59.002 ], 00:06:59.002 "product_name": "passthru", 00:06:59.002 "block_size": 512, 00:06:59.002 "num_blocks": 16384, 00:06:59.002 "uuid": "e0ccb3ab-84f9-5d65-98d6-9ebff5d6feee", 00:06:59.002 "assigned_rate_limits": { 00:06:59.002 "rw_ios_per_sec": 0, 00:06:59.002 "rw_mbytes_per_sec": 0, 00:06:59.002 "r_mbytes_per_sec": 0, 00:06:59.002 "w_mbytes_per_sec": 0 00:06:59.002 }, 00:06:59.002 "claimed": false, 00:06:59.002 "zoned": false, 00:06:59.002 "supported_io_types": { 00:06:59.002 "read": true, 00:06:59.002 "write": true, 00:06:59.002 "unmap": true, 00:06:59.002 "flush": true, 00:06:59.002 "reset": true, 00:06:59.002 "nvme_admin": false, 00:06:59.002 "nvme_io": false, 00:06:59.002 "nvme_io_md": 
false, 00:06:59.002 "write_zeroes": true, 00:06:59.002 "zcopy": true, 00:06:59.002 "get_zone_info": false, 00:06:59.002 "zone_management": false, 00:06:59.002 "zone_append": false, 00:06:59.002 "compare": false, 00:06:59.002 "compare_and_write": false, 00:06:59.002 "abort": true, 00:06:59.002 "seek_hole": false, 00:06:59.002 "seek_data": false, 00:06:59.002 "copy": true, 00:06:59.002 "nvme_iov_md": false 00:06:59.002 }, 00:06:59.002 "memory_domains": [ 00:06:59.002 { 00:06:59.002 "dma_device_id": "system", 00:06:59.002 "dma_device_type": 1 00:06:59.002 }, 00:06:59.002 { 00:06:59.002 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:06:59.002 "dma_device_type": 2 00:06:59.002 } 00:06:59.002 ], 00:06:59.002 "driver_specific": { 00:06:59.002 "passthru": { 00:06:59.002 "name": "Passthru0", 00:06:59.002 "base_bdev_name": "Malloc2" 00:06:59.002 } 00:06:59.002 } 00:06:59.002 } 00:06:59.002 ]' 00:06:59.002 11:37:28 rpc.rpc_daemon_integrity -- rpc/rpc.sh@21 -- # jq length 00:06:59.002 11:37:28 rpc.rpc_daemon_integrity -- rpc/rpc.sh@21 -- # '[' 2 == 2 ']' 00:06:59.002 11:37:28 rpc.rpc_daemon_integrity -- rpc/rpc.sh@23 -- # rpc_cmd bdev_passthru_delete Passthru0 00:06:59.002 11:37:28 rpc.rpc_daemon_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:59.002 11:37:28 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:06:59.002 11:37:29 rpc.rpc_daemon_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:59.002 11:37:29 rpc.rpc_daemon_integrity -- rpc/rpc.sh@24 -- # rpc_cmd bdev_malloc_delete Malloc2 00:06:59.002 11:37:29 rpc.rpc_daemon_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:59.002 11:37:29 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:06:59.002 11:37:29 rpc.rpc_daemon_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:59.002 11:37:29 rpc.rpc_daemon_integrity -- rpc/rpc.sh@25 -- # rpc_cmd bdev_get_bdevs 00:06:59.002 11:37:29 rpc.rpc_daemon_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:59.002 11:37:29 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:06:59.002 11:37:29 rpc.rpc_daemon_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:59.002 11:37:29 rpc.rpc_daemon_integrity -- rpc/rpc.sh@25 -- # bdevs='[]' 00:06:59.002 11:37:29 rpc.rpc_daemon_integrity -- rpc/rpc.sh@26 -- # jq length 00:06:59.002 ************************************ 00:06:59.002 END TEST rpc_daemon_integrity 00:06:59.002 ************************************ 00:06:59.002 11:37:29 rpc.rpc_daemon_integrity -- rpc/rpc.sh@26 -- # '[' 0 == 0 ']' 00:06:59.002 00:06:59.002 real 0m0.321s 00:06:59.002 user 0m0.210s 00:06:59.002 sys 0m0.043s 00:06:59.002 11:37:29 rpc.rpc_daemon_integrity -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:59.002 11:37:29 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:06:59.002 11:37:29 rpc -- rpc/rpc.sh@83 -- # trap - SIGINT SIGTERM EXIT 00:06:59.002 11:37:29 rpc -- rpc/rpc.sh@84 -- # killprocess 70846 00:06:59.002 11:37:29 rpc -- common/autotest_common.sh@954 -- # '[' -z 70846 ']' 00:06:59.002 11:37:29 rpc -- common/autotest_common.sh@958 -- # kill -0 70846 00:06:59.002 11:37:29 rpc -- common/autotest_common.sh@959 -- # uname 00:06:59.261 11:37:29 rpc -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:06:59.261 11:37:29 rpc -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 70846 00:06:59.261 killing process with pid 70846 00:06:59.261 11:37:29 rpc -- common/autotest_common.sh@960 
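The rpc_daemon_integrity run that finishes above drives the same sequence as rpc_integrity, only through the daemon-mode rpc_cmd wrapper: create an 8 MiB malloc bdev with 512-byte blocks, stack a passthru bdev on it (which claims the base exclusively, as the claimed/claim_type fields above show), confirm both devices via bdev_get_bdevs, then delete them in reverse order. A rough manual equivalent using the stock scripts/rpc.py client; the default socket, the -b flag to pin the bdev name, and the jq check are assumptions, while the RPC names are exactly the ones invoked above:
# Assumes a running spdk_tgt on the default /var/tmp/spdk.sock.
scripts/rpc.py bdev_malloc_create -b Malloc2 8 512              # 8 MiB, 512 B blocks -> 16384 blocks as listed above
scripts/rpc.py bdev_passthru_create -b Malloc2 -p Passthru0     # Passthru0 claims Malloc2 (exclusive_write)
scripts/rpc.py bdev_get_bdevs | jq 'length'                     # expect 2: the base bdev plus the passthru
scripts/rpc.py bdev_passthru_delete Passthru0                   # drop the claim
scripts/rpc.py bdev_malloc_delete Malloc2                       # remove the base; bdev_get_bdevs returns [] again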
-- # process_name=reactor_0 00:06:59.261 11:37:29 rpc -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:06:59.261 11:37:29 rpc -- common/autotest_common.sh@972 -- # echo 'killing process with pid 70846' 00:06:59.261 11:37:29 rpc -- common/autotest_common.sh@973 -- # kill 70846 00:06:59.261 11:37:29 rpc -- common/autotest_common.sh@978 -- # wait 70846 00:06:59.520 00:06:59.520 real 0m2.525s 00:06:59.520 user 0m3.207s 00:06:59.520 sys 0m0.701s 00:06:59.520 11:37:29 rpc -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:59.520 11:37:29 rpc -- common/autotest_common.sh@10 -- # set +x 00:06:59.520 ************************************ 00:06:59.520 END TEST rpc 00:06:59.520 ************************************ 00:06:59.520 11:37:29 -- spdk/autotest.sh@157 -- # run_test skip_rpc /home/vagrant/spdk_repo/spdk/test/rpc/skip_rpc.sh 00:06:59.520 11:37:29 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:06:59.520 11:37:29 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:59.520 11:37:29 -- common/autotest_common.sh@10 -- # set +x 00:06:59.520 ************************************ 00:06:59.520 START TEST skip_rpc 00:06:59.520 ************************************ 00:06:59.520 11:37:29 skip_rpc -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/rpc/skip_rpc.sh 00:06:59.780 * Looking for test storage... 00:06:59.780 * Found test storage at /home/vagrant/spdk_repo/spdk/test/rpc 00:06:59.780 11:37:29 skip_rpc -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:06:59.780 11:37:29 skip_rpc -- common/autotest_common.sh@1693 -- # lcov --version 00:06:59.780 11:37:29 skip_rpc -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:06:59.780 11:37:29 skip_rpc -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:06:59.780 11:37:29 skip_rpc -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:06:59.780 11:37:29 skip_rpc -- scripts/common.sh@333 -- # local ver1 ver1_l 00:06:59.780 11:37:29 skip_rpc -- scripts/common.sh@334 -- # local ver2 ver2_l 00:06:59.780 11:37:29 skip_rpc -- scripts/common.sh@336 -- # IFS=.-: 00:06:59.780 11:37:29 skip_rpc -- scripts/common.sh@336 -- # read -ra ver1 00:06:59.780 11:37:29 skip_rpc -- scripts/common.sh@337 -- # IFS=.-: 00:06:59.780 11:37:29 skip_rpc -- scripts/common.sh@337 -- # read -ra ver2 00:06:59.780 11:37:29 skip_rpc -- scripts/common.sh@338 -- # local 'op=<' 00:06:59.780 11:37:29 skip_rpc -- scripts/common.sh@340 -- # ver1_l=2 00:06:59.780 11:37:29 skip_rpc -- scripts/common.sh@341 -- # ver2_l=1 00:06:59.780 11:37:29 skip_rpc -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:06:59.780 11:37:29 skip_rpc -- scripts/common.sh@344 -- # case "$op" in 00:06:59.780 11:37:29 skip_rpc -- scripts/common.sh@345 -- # : 1 00:06:59.780 11:37:29 skip_rpc -- scripts/common.sh@364 -- # (( v = 0 )) 00:06:59.780 11:37:29 skip_rpc -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:06:59.780 11:37:29 skip_rpc -- scripts/common.sh@365 -- # decimal 1 00:06:59.780 11:37:29 skip_rpc -- scripts/common.sh@353 -- # local d=1 00:06:59.780 11:37:29 skip_rpc -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:06:59.780 11:37:29 skip_rpc -- scripts/common.sh@355 -- # echo 1 00:06:59.780 11:37:29 skip_rpc -- scripts/common.sh@365 -- # ver1[v]=1 00:06:59.780 11:37:29 skip_rpc -- scripts/common.sh@366 -- # decimal 2 00:06:59.780 11:37:29 skip_rpc -- scripts/common.sh@353 -- # local d=2 00:06:59.780 11:37:29 skip_rpc -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:06:59.780 11:37:29 skip_rpc -- scripts/common.sh@355 -- # echo 2 00:06:59.780 11:37:29 skip_rpc -- scripts/common.sh@366 -- # ver2[v]=2 00:06:59.780 11:37:29 skip_rpc -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:06:59.780 11:37:29 skip_rpc -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:06:59.780 11:37:29 skip_rpc -- scripts/common.sh@368 -- # return 0 00:06:59.780 11:37:29 skip_rpc -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:06:59.780 11:37:29 skip_rpc -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:06:59.780 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:59.780 --rc genhtml_branch_coverage=1 00:06:59.780 --rc genhtml_function_coverage=1 00:06:59.780 --rc genhtml_legend=1 00:06:59.780 --rc geninfo_all_blocks=1 00:06:59.780 --rc geninfo_unexecuted_blocks=1 00:06:59.780 00:06:59.780 ' 00:06:59.780 11:37:29 skip_rpc -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:06:59.780 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:59.780 --rc genhtml_branch_coverage=1 00:06:59.780 --rc genhtml_function_coverage=1 00:06:59.780 --rc genhtml_legend=1 00:06:59.780 --rc geninfo_all_blocks=1 00:06:59.780 --rc geninfo_unexecuted_blocks=1 00:06:59.780 00:06:59.780 ' 00:06:59.780 11:37:29 skip_rpc -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:06:59.780 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:59.780 --rc genhtml_branch_coverage=1 00:06:59.780 --rc genhtml_function_coverage=1 00:06:59.780 --rc genhtml_legend=1 00:06:59.780 --rc geninfo_all_blocks=1 00:06:59.780 --rc geninfo_unexecuted_blocks=1 00:06:59.780 00:06:59.780 ' 00:06:59.780 11:37:29 skip_rpc -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:06:59.780 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:59.780 --rc genhtml_branch_coverage=1 00:06:59.780 --rc genhtml_function_coverage=1 00:06:59.780 --rc genhtml_legend=1 00:06:59.780 --rc geninfo_all_blocks=1 00:06:59.780 --rc geninfo_unexecuted_blocks=1 00:06:59.780 00:06:59.780 ' 00:06:59.780 11:37:29 skip_rpc -- rpc/skip_rpc.sh@11 -- # CONFIG_PATH=/home/vagrant/spdk_repo/spdk/test/rpc/config.json 00:06:59.780 11:37:29 skip_rpc -- rpc/skip_rpc.sh@12 -- # LOG_PATH=/home/vagrant/spdk_repo/spdk/test/rpc/log.txt 00:06:59.780 11:37:29 skip_rpc -- rpc/skip_rpc.sh@73 -- # run_test skip_rpc test_skip_rpc 00:06:59.780 11:37:29 skip_rpc -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:06:59.780 11:37:29 skip_rpc -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:59.780 11:37:29 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:59.780 ************************************ 00:06:59.780 START TEST skip_rpc 00:06:59.780 ************************************ 00:06:59.780 11:37:29 skip_rpc.skip_rpc -- common/autotest_common.sh@1129 -- # test_skip_rpc 00:06:59.780 11:37:29 skip_rpc.skip_rpc -- 
rpc/skip_rpc.sh@16 -- # local spdk_pid=71045 00:06:59.781 11:37:29 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@15 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 00:06:59.781 11:37:29 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@18 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:06:59.781 11:37:29 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@19 -- # sleep 5 00:06:59.781 [2024-11-28 11:37:29.862517] Starting SPDK v25.01-pre git sha1 35cd3e84d / DPDK 24.11.0-rc4 initialization... 00:06:59.781 [2024-11-28 11:37:29.862622] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid71045 ] 00:07:00.039 [2024-11-28 11:37:29.989677] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.11.0-rc4 is used. There is no support for it in SPDK. Enabled only for validation. 00:07:00.039 [2024-11-28 11:37:30.016860] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:00.039 [2024-11-28 11:37:30.052719] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:00.039 [2024-11-28 11:37:30.116875] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:07:05.314 11:37:34 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@21 -- # NOT rpc_cmd spdk_get_version 00:07:05.314 11:37:34 skip_rpc.skip_rpc -- common/autotest_common.sh@652 -- # local es=0 00:07:05.314 11:37:34 skip_rpc.skip_rpc -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd spdk_get_version 00:07:05.314 11:37:34 skip_rpc.skip_rpc -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:07:05.314 11:37:34 skip_rpc.skip_rpc -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:07:05.314 11:37:34 skip_rpc.skip_rpc -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:07:05.314 11:37:34 skip_rpc.skip_rpc -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:07:05.314 11:37:34 skip_rpc.skip_rpc -- common/autotest_common.sh@655 -- # rpc_cmd spdk_get_version 00:07:05.314 11:37:34 skip_rpc.skip_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:05.314 11:37:34 skip_rpc.skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:05.314 11:37:34 skip_rpc.skip_rpc -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:07:05.314 11:37:34 skip_rpc.skip_rpc -- common/autotest_common.sh@655 -- # es=1 00:07:05.314 11:37:34 skip_rpc.skip_rpc -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:07:05.314 11:37:34 skip_rpc.skip_rpc -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:07:05.314 11:37:34 skip_rpc.skip_rpc -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:07:05.314 11:37:34 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@22 -- # trap - SIGINT SIGTERM EXIT 00:07:05.314 11:37:34 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@23 -- # killprocess 71045 00:07:05.314 11:37:34 skip_rpc.skip_rpc -- common/autotest_common.sh@954 -- # '[' -z 71045 ']' 00:07:05.314 11:37:34 skip_rpc.skip_rpc -- common/autotest_common.sh@958 -- # kill -0 71045 00:07:05.314 11:37:34 skip_rpc.skip_rpc -- common/autotest_common.sh@959 -- # uname 00:07:05.314 11:37:34 skip_rpc.skip_rpc -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:07:05.314 11:37:34 skip_rpc.skip_rpc -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 71045 00:07:05.314 11:37:34 skip_rpc.skip_rpc -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:07:05.314 
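The skip_rpc case above boots the target with --no-rpc-server, sleeps five seconds, and then requires the spdk_get_version RPC to fail; the NOT wrapper turns the refused call (es=1) into a pass before the target is killed. A bare sketch of the same negative check outside the harness; the rpc.py path and the echo messages are assumptions, the target flags match the invocation above:
# Target comes up without an RPC listener, so any RPC attempt must fail.
/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 &
sleep 5
/home/vagrant/spdk_repo/spdk/scripts/rpc.py spdk_get_version \
    && echo "unexpected: RPC answered" \
    || echo "RPC refused, as skip_rpc expects"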
11:37:34 skip_rpc.skip_rpc -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:07:05.314 11:37:34 skip_rpc.skip_rpc -- common/autotest_common.sh@972 -- # echo 'killing process with pid 71045' 00:07:05.314 killing process with pid 71045 00:07:05.314 11:37:34 skip_rpc.skip_rpc -- common/autotest_common.sh@973 -- # kill 71045 00:07:05.314 11:37:34 skip_rpc.skip_rpc -- common/autotest_common.sh@978 -- # wait 71045 00:07:05.314 00:07:05.314 real 0m5.422s 00:07:05.314 user 0m5.045s 00:07:05.314 sys 0m0.290s 00:07:05.314 ************************************ 00:07:05.314 END TEST skip_rpc 00:07:05.314 ************************************ 00:07:05.314 11:37:35 skip_rpc.skip_rpc -- common/autotest_common.sh@1130 -- # xtrace_disable 00:07:05.314 11:37:35 skip_rpc.skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:05.314 11:37:35 skip_rpc -- rpc/skip_rpc.sh@74 -- # run_test skip_rpc_with_json test_skip_rpc_with_json 00:07:05.314 11:37:35 skip_rpc -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:07:05.314 11:37:35 skip_rpc -- common/autotest_common.sh@1111 -- # xtrace_disable 00:07:05.314 11:37:35 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:05.314 ************************************ 00:07:05.314 START TEST skip_rpc_with_json 00:07:05.314 ************************************ 00:07:05.314 11:37:35 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@1129 -- # test_skip_rpc_with_json 00:07:05.314 11:37:35 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@44 -- # gen_json_config 00:07:05.314 11:37:35 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@28 -- # local spdk_pid=71131 00:07:05.314 11:37:35 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@30 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:07:05.314 11:37:35 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@31 -- # waitforlisten 71131 00:07:05.314 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:05.314 11:37:35 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@835 -- # '[' -z 71131 ']' 00:07:05.314 11:37:35 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@27 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 00:07:05.314 11:37:35 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:05.314 11:37:35 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@840 -- # local max_retries=100 00:07:05.314 11:37:35 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:05.314 11:37:35 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@844 -- # xtrace_disable 00:07:05.314 11:37:35 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:07:05.314 [2024-11-28 11:37:35.332834] Starting SPDK v25.01-pre git sha1 35cd3e84d / DPDK 24.11.0-rc4 initialization... 00:07:05.314 [2024-11-28 11:37:35.332931] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid71131 ] 00:07:05.573 [2024-11-28 11:37:35.459043] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.11.0-rc4 is used. There is no support for it in SPDK. Enabled only for validation. 
00:07:05.573 [2024-11-28 11:37:35.485329] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:05.573 [2024-11-28 11:37:35.528173] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:05.573 [2024-11-28 11:37:35.596503] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:07:05.831 11:37:35 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:07:05.831 11:37:35 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@868 -- # return 0 00:07:05.831 11:37:35 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@34 -- # rpc_cmd nvmf_get_transports --trtype tcp 00:07:05.831 11:37:35 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:05.831 11:37:35 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:07:05.831 [2024-11-28 11:37:35.789915] nvmf_rpc.c:2706:rpc_nvmf_get_transports: *ERROR*: transport 'tcp' does not exist 00:07:05.831 request: 00:07:05.831 { 00:07:05.831 "trtype": "tcp", 00:07:05.831 "method": "nvmf_get_transports", 00:07:05.831 "req_id": 1 00:07:05.831 } 00:07:05.831 Got JSON-RPC error response 00:07:05.831 response: 00:07:05.831 { 00:07:05.831 "code": -19, 00:07:05.831 "message": "No such device" 00:07:05.831 } 00:07:05.831 11:37:35 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:07:05.831 11:37:35 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@34 -- # rpc_cmd nvmf_create_transport -t tcp 00:07:05.831 11:37:35 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:05.831 11:37:35 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:07:05.831 [2024-11-28 11:37:35.802051] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:07:05.831 11:37:35 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:05.831 11:37:35 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@36 -- # rpc_cmd save_config 00:07:05.831 11:37:35 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:05.831 11:37:35 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:07:06.091 11:37:35 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:06.091 11:37:35 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@37 -- # cat /home/vagrant/spdk_repo/spdk/test/rpc/config.json 00:07:06.091 { 00:07:06.091 "subsystems": [ 00:07:06.091 { 00:07:06.091 "subsystem": "fsdev", 00:07:06.091 "config": [ 00:07:06.091 { 00:07:06.091 "method": "fsdev_set_opts", 00:07:06.091 "params": { 00:07:06.091 "fsdev_io_pool_size": 65535, 00:07:06.091 "fsdev_io_cache_size": 256 00:07:06.091 } 00:07:06.091 } 00:07:06.091 ] 00:07:06.091 }, 00:07:06.091 { 00:07:06.091 "subsystem": "keyring", 00:07:06.091 "config": [] 00:07:06.091 }, 00:07:06.091 { 00:07:06.091 "subsystem": "iobuf", 00:07:06.091 "config": [ 00:07:06.091 { 00:07:06.091 "method": "iobuf_set_options", 00:07:06.091 "params": { 00:07:06.091 "small_pool_count": 8192, 00:07:06.091 "large_pool_count": 1024, 00:07:06.091 "small_bufsize": 8192, 00:07:06.091 "large_bufsize": 135168, 00:07:06.091 "enable_numa": false 00:07:06.091 } 00:07:06.091 } 00:07:06.091 ] 00:07:06.091 }, 00:07:06.091 { 00:07:06.091 "subsystem": "sock", 00:07:06.091 "config": [ 00:07:06.091 { 00:07:06.091 "method": "sock_set_default_impl", 00:07:06.091 "params": { 00:07:06.091 "impl_name": "uring" 00:07:06.091 } 00:07:06.091 }, 00:07:06.091 { 00:07:06.091 "method": 
"sock_impl_set_options", 00:07:06.091 "params": { 00:07:06.091 "impl_name": "ssl", 00:07:06.091 "recv_buf_size": 4096, 00:07:06.091 "send_buf_size": 4096, 00:07:06.091 "enable_recv_pipe": true, 00:07:06.091 "enable_quickack": false, 00:07:06.091 "enable_placement_id": 0, 00:07:06.091 "enable_zerocopy_send_server": true, 00:07:06.091 "enable_zerocopy_send_client": false, 00:07:06.091 "zerocopy_threshold": 0, 00:07:06.091 "tls_version": 0, 00:07:06.091 "enable_ktls": false 00:07:06.091 } 00:07:06.091 }, 00:07:06.091 { 00:07:06.091 "method": "sock_impl_set_options", 00:07:06.091 "params": { 00:07:06.091 "impl_name": "posix", 00:07:06.091 "recv_buf_size": 2097152, 00:07:06.091 "send_buf_size": 2097152, 00:07:06.091 "enable_recv_pipe": true, 00:07:06.091 "enable_quickack": false, 00:07:06.091 "enable_placement_id": 0, 00:07:06.091 "enable_zerocopy_send_server": true, 00:07:06.091 "enable_zerocopy_send_client": false, 00:07:06.091 "zerocopy_threshold": 0, 00:07:06.091 "tls_version": 0, 00:07:06.091 "enable_ktls": false 00:07:06.091 } 00:07:06.091 }, 00:07:06.091 { 00:07:06.091 "method": "sock_impl_set_options", 00:07:06.091 "params": { 00:07:06.091 "impl_name": "uring", 00:07:06.091 "recv_buf_size": 2097152, 00:07:06.091 "send_buf_size": 2097152, 00:07:06.091 "enable_recv_pipe": true, 00:07:06.091 "enable_quickack": false, 00:07:06.091 "enable_placement_id": 0, 00:07:06.091 "enable_zerocopy_send_server": false, 00:07:06.091 "enable_zerocopy_send_client": false, 00:07:06.091 "zerocopy_threshold": 0, 00:07:06.091 "tls_version": 0, 00:07:06.091 "enable_ktls": false 00:07:06.091 } 00:07:06.091 } 00:07:06.091 ] 00:07:06.091 }, 00:07:06.091 { 00:07:06.091 "subsystem": "vmd", 00:07:06.091 "config": [] 00:07:06.091 }, 00:07:06.091 { 00:07:06.091 "subsystem": "accel", 00:07:06.091 "config": [ 00:07:06.091 { 00:07:06.091 "method": "accel_set_options", 00:07:06.091 "params": { 00:07:06.091 "small_cache_size": 128, 00:07:06.091 "large_cache_size": 16, 00:07:06.091 "task_count": 2048, 00:07:06.091 "sequence_count": 2048, 00:07:06.091 "buf_count": 2048 00:07:06.091 } 00:07:06.091 } 00:07:06.091 ] 00:07:06.091 }, 00:07:06.091 { 00:07:06.091 "subsystem": "bdev", 00:07:06.091 "config": [ 00:07:06.091 { 00:07:06.091 "method": "bdev_set_options", 00:07:06.091 "params": { 00:07:06.091 "bdev_io_pool_size": 65535, 00:07:06.091 "bdev_io_cache_size": 256, 00:07:06.091 "bdev_auto_examine": true, 00:07:06.091 "iobuf_small_cache_size": 128, 00:07:06.091 "iobuf_large_cache_size": 16 00:07:06.091 } 00:07:06.091 }, 00:07:06.091 { 00:07:06.091 "method": "bdev_raid_set_options", 00:07:06.091 "params": { 00:07:06.091 "process_window_size_kb": 1024, 00:07:06.091 "process_max_bandwidth_mb_sec": 0 00:07:06.091 } 00:07:06.091 }, 00:07:06.091 { 00:07:06.091 "method": "bdev_iscsi_set_options", 00:07:06.091 "params": { 00:07:06.091 "timeout_sec": 30 00:07:06.091 } 00:07:06.091 }, 00:07:06.091 { 00:07:06.091 "method": "bdev_nvme_set_options", 00:07:06.091 "params": { 00:07:06.091 "action_on_timeout": "none", 00:07:06.091 "timeout_us": 0, 00:07:06.091 "timeout_admin_us": 0, 00:07:06.091 "keep_alive_timeout_ms": 10000, 00:07:06.091 "arbitration_burst": 0, 00:07:06.091 "low_priority_weight": 0, 00:07:06.091 "medium_priority_weight": 0, 00:07:06.091 "high_priority_weight": 0, 00:07:06.091 "nvme_adminq_poll_period_us": 10000, 00:07:06.091 "nvme_ioq_poll_period_us": 0, 00:07:06.091 "io_queue_requests": 0, 00:07:06.092 "delay_cmd_submit": true, 00:07:06.092 "transport_retry_count": 4, 00:07:06.092 "bdev_retry_count": 3, 00:07:06.092 
"transport_ack_timeout": 0, 00:07:06.092 "ctrlr_loss_timeout_sec": 0, 00:07:06.092 "reconnect_delay_sec": 0, 00:07:06.092 "fast_io_fail_timeout_sec": 0, 00:07:06.092 "disable_auto_failback": false, 00:07:06.092 "generate_uuids": false, 00:07:06.092 "transport_tos": 0, 00:07:06.092 "nvme_error_stat": false, 00:07:06.092 "rdma_srq_size": 0, 00:07:06.092 "io_path_stat": false, 00:07:06.092 "allow_accel_sequence": false, 00:07:06.092 "rdma_max_cq_size": 0, 00:07:06.092 "rdma_cm_event_timeout_ms": 0, 00:07:06.092 "dhchap_digests": [ 00:07:06.092 "sha256", 00:07:06.092 "sha384", 00:07:06.092 "sha512" 00:07:06.092 ], 00:07:06.092 "dhchap_dhgroups": [ 00:07:06.092 "null", 00:07:06.092 "ffdhe2048", 00:07:06.092 "ffdhe3072", 00:07:06.092 "ffdhe4096", 00:07:06.092 "ffdhe6144", 00:07:06.092 "ffdhe8192" 00:07:06.092 ] 00:07:06.092 } 00:07:06.092 }, 00:07:06.092 { 00:07:06.092 "method": "bdev_nvme_set_hotplug", 00:07:06.092 "params": { 00:07:06.092 "period_us": 100000, 00:07:06.092 "enable": false 00:07:06.092 } 00:07:06.092 }, 00:07:06.092 { 00:07:06.092 "method": "bdev_wait_for_examine" 00:07:06.092 } 00:07:06.092 ] 00:07:06.092 }, 00:07:06.092 { 00:07:06.092 "subsystem": "scsi", 00:07:06.092 "config": null 00:07:06.092 }, 00:07:06.092 { 00:07:06.092 "subsystem": "scheduler", 00:07:06.092 "config": [ 00:07:06.092 { 00:07:06.092 "method": "framework_set_scheduler", 00:07:06.092 "params": { 00:07:06.092 "name": "static" 00:07:06.092 } 00:07:06.092 } 00:07:06.092 ] 00:07:06.092 }, 00:07:06.092 { 00:07:06.092 "subsystem": "vhost_scsi", 00:07:06.092 "config": [] 00:07:06.092 }, 00:07:06.092 { 00:07:06.092 "subsystem": "vhost_blk", 00:07:06.092 "config": [] 00:07:06.092 }, 00:07:06.092 { 00:07:06.092 "subsystem": "ublk", 00:07:06.092 "config": [] 00:07:06.092 }, 00:07:06.092 { 00:07:06.092 "subsystem": "nbd", 00:07:06.092 "config": [] 00:07:06.092 }, 00:07:06.092 { 00:07:06.092 "subsystem": "nvmf", 00:07:06.092 "config": [ 00:07:06.092 { 00:07:06.092 "method": "nvmf_set_config", 00:07:06.092 "params": { 00:07:06.092 "discovery_filter": "match_any", 00:07:06.092 "admin_cmd_passthru": { 00:07:06.092 "identify_ctrlr": false 00:07:06.092 }, 00:07:06.092 "dhchap_digests": [ 00:07:06.092 "sha256", 00:07:06.092 "sha384", 00:07:06.092 "sha512" 00:07:06.092 ], 00:07:06.092 "dhchap_dhgroups": [ 00:07:06.092 "null", 00:07:06.092 "ffdhe2048", 00:07:06.092 "ffdhe3072", 00:07:06.092 "ffdhe4096", 00:07:06.092 "ffdhe6144", 00:07:06.092 "ffdhe8192" 00:07:06.092 ] 00:07:06.092 } 00:07:06.092 }, 00:07:06.092 { 00:07:06.092 "method": "nvmf_set_max_subsystems", 00:07:06.092 "params": { 00:07:06.092 "max_subsystems": 1024 00:07:06.092 } 00:07:06.092 }, 00:07:06.092 { 00:07:06.092 "method": "nvmf_set_crdt", 00:07:06.092 "params": { 00:07:06.092 "crdt1": 0, 00:07:06.092 "crdt2": 0, 00:07:06.092 "crdt3": 0 00:07:06.092 } 00:07:06.092 }, 00:07:06.092 { 00:07:06.092 "method": "nvmf_create_transport", 00:07:06.092 "params": { 00:07:06.092 "trtype": "TCP", 00:07:06.092 "max_queue_depth": 128, 00:07:06.092 "max_io_qpairs_per_ctrlr": 127, 00:07:06.092 "in_capsule_data_size": 4096, 00:07:06.092 "max_io_size": 131072, 00:07:06.092 "io_unit_size": 131072, 00:07:06.092 "max_aq_depth": 128, 00:07:06.092 "num_shared_buffers": 511, 00:07:06.092 "buf_cache_size": 4294967295, 00:07:06.092 "dif_insert_or_strip": false, 00:07:06.092 "zcopy": false, 00:07:06.092 "c2h_success": true, 00:07:06.092 "sock_priority": 0, 00:07:06.092 "abort_timeout_sec": 1, 00:07:06.092 "ack_timeout": 0, 00:07:06.092 "data_wr_pool_size": 0 00:07:06.092 } 00:07:06.092 } 
00:07:06.092 ] 00:07:06.092 }, 00:07:06.092 { 00:07:06.092 "subsystem": "iscsi", 00:07:06.092 "config": [ 00:07:06.092 { 00:07:06.092 "method": "iscsi_set_options", 00:07:06.092 "params": { 00:07:06.092 "node_base": "iqn.2016-06.io.spdk", 00:07:06.092 "max_sessions": 128, 00:07:06.092 "max_connections_per_session": 2, 00:07:06.092 "max_queue_depth": 64, 00:07:06.092 "default_time2wait": 2, 00:07:06.092 "default_time2retain": 20, 00:07:06.092 "first_burst_length": 8192, 00:07:06.092 "immediate_data": true, 00:07:06.092 "allow_duplicated_isid": false, 00:07:06.092 "error_recovery_level": 0, 00:07:06.092 "nop_timeout": 60, 00:07:06.092 "nop_in_interval": 30, 00:07:06.092 "disable_chap": false, 00:07:06.092 "require_chap": false, 00:07:06.092 "mutual_chap": false, 00:07:06.092 "chap_group": 0, 00:07:06.092 "max_large_datain_per_connection": 64, 00:07:06.092 "max_r2t_per_connection": 4, 00:07:06.092 "pdu_pool_size": 36864, 00:07:06.092 "immediate_data_pool_size": 16384, 00:07:06.092 "data_out_pool_size": 2048 00:07:06.092 } 00:07:06.092 } 00:07:06.092 ] 00:07:06.092 } 00:07:06.092 ] 00:07:06.092 } 00:07:06.092 11:37:35 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@39 -- # trap - SIGINT SIGTERM EXIT 00:07:06.092 11:37:35 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@40 -- # killprocess 71131 00:07:06.092 11:37:35 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@954 -- # '[' -z 71131 ']' 00:07:06.092 11:37:35 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@958 -- # kill -0 71131 00:07:06.092 11:37:35 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@959 -- # uname 00:07:06.092 11:37:35 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:07:06.092 11:37:35 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 71131 00:07:06.092 killing process with pid 71131 00:07:06.092 11:37:36 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:07:06.092 11:37:36 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:07:06.092 11:37:36 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@972 -- # echo 'killing process with pid 71131' 00:07:06.092 11:37:36 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@973 -- # kill 71131 00:07:06.092 11:37:36 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@978 -- # wait 71131 00:07:06.352 11:37:36 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@46 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --json /home/vagrant/spdk_repo/spdk/test/rpc/config.json 00:07:06.352 11:37:36 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@47 -- # local spdk_pid=71151 00:07:06.352 11:37:36 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@48 -- # sleep 5 00:07:11.626 11:37:41 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@50 -- # killprocess 71151 00:07:11.626 11:37:41 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@954 -- # '[' -z 71151 ']' 00:07:11.626 11:37:41 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@958 -- # kill -0 71151 00:07:11.626 11:37:41 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@959 -- # uname 00:07:11.626 11:37:41 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:07:11.626 11:37:41 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 71151 00:07:11.626 killing process with pid 71151 00:07:11.626 11:37:41 skip_rpc.skip_rpc_with_json -- 
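The configuration dump that ends above is what skip_rpc.sh's gen_json_config writes to /home/vagrant/spdk_repo/spdk/test/rpc/config.json: the test first shows that nvmf_get_transports fails with "No such device", creates the TCP transport, calls save_config, and then kills the first target so a second one can be started purely from the saved JSON (the --json relaunch and the grep for "TCP Transport Init" follow below). A condensed sketch of that round-trip, with an abbreviated output path as the only assumption:
# Condensed save/reload flow, mirroring gen_json_config above.
scripts/rpc.py nvmf_create_transport -t tcp          # after this, "*** TCP Transport Init ***" appears in the target log
scripts/rpc.py save_config > /tmp/config.json        # same subsystems array as printed above
# stop the first target, then bring one up straight from the saved configuration:
build/bin/spdk_tgt --no-rpc-server -m 0x1 --json /tmp/config.json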
common/autotest_common.sh@960 -- # process_name=reactor_0 00:07:11.626 11:37:41 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:07:11.626 11:37:41 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@972 -- # echo 'killing process with pid 71151' 00:07:11.626 11:37:41 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@973 -- # kill 71151 00:07:11.626 11:37:41 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@978 -- # wait 71151 00:07:11.893 11:37:41 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@51 -- # grep -q 'TCP Transport Init' /home/vagrant/spdk_repo/spdk/test/rpc/log.txt 00:07:11.893 11:37:41 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@52 -- # rm /home/vagrant/spdk_repo/spdk/test/rpc/log.txt 00:07:11.893 00:07:11.893 real 0m6.549s 00:07:11.893 user 0m6.085s 00:07:11.893 sys 0m0.660s 00:07:11.893 11:37:41 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@1130 -- # xtrace_disable 00:07:11.893 11:37:41 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:07:11.893 ************************************ 00:07:11.893 END TEST skip_rpc_with_json 00:07:11.893 ************************************ 00:07:11.893 11:37:41 skip_rpc -- rpc/skip_rpc.sh@75 -- # run_test skip_rpc_with_delay test_skip_rpc_with_delay 00:07:11.893 11:37:41 skip_rpc -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:07:11.893 11:37:41 skip_rpc -- common/autotest_common.sh@1111 -- # xtrace_disable 00:07:11.893 11:37:41 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:11.893 ************************************ 00:07:11.893 START TEST skip_rpc_with_delay 00:07:11.893 ************************************ 00:07:11.893 11:37:41 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@1129 -- # test_skip_rpc_with_delay 00:07:11.893 11:37:41 skip_rpc.skip_rpc_with_delay -- rpc/skip_rpc.sh@57 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --wait-for-rpc 00:07:11.893 11:37:41 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@652 -- # local es=0 00:07:11.893 11:37:41 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@654 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --wait-for-rpc 00:07:11.893 11:37:41 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@640 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:07:11.893 11:37:41 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:07:11.893 11:37:41 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@644 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:07:11.893 11:37:41 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:07:11.893 11:37:41 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@646 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:07:11.893 11:37:41 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:07:11.893 11:37:41 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@646 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:07:11.893 11:37:41 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@646 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt ]] 00:07:11.893 11:37:41 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@655 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --wait-for-rpc 00:07:11.893 [2024-11-28 
11:37:41.928066] app.c: 842:spdk_app_start: *ERROR*: Cannot use '--wait-for-rpc' if no RPC server is going to be started. 00:07:11.893 11:37:41 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@655 -- # es=1 00:07:11.893 11:37:41 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:07:11.893 11:37:41 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:07:11.893 11:37:41 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:07:11.893 00:07:11.893 real 0m0.080s 00:07:11.893 user 0m0.048s 00:07:11.893 sys 0m0.031s 00:07:11.893 11:37:41 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@1130 -- # xtrace_disable 00:07:11.893 11:37:41 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@10 -- # set +x 00:07:11.893 ************************************ 00:07:11.893 END TEST skip_rpc_with_delay 00:07:11.893 ************************************ 00:07:11.893 11:37:41 skip_rpc -- rpc/skip_rpc.sh@77 -- # uname 00:07:11.893 11:37:41 skip_rpc -- rpc/skip_rpc.sh@77 -- # '[' Linux '!=' FreeBSD ']' 00:07:11.893 11:37:41 skip_rpc -- rpc/skip_rpc.sh@78 -- # run_test exit_on_failed_rpc_init test_exit_on_failed_rpc_init 00:07:11.893 11:37:41 skip_rpc -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:07:11.893 11:37:41 skip_rpc -- common/autotest_common.sh@1111 -- # xtrace_disable 00:07:11.893 11:37:41 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:11.893 ************************************ 00:07:11.893 START TEST exit_on_failed_rpc_init 00:07:11.893 ************************************ 00:07:11.893 11:37:42 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@1129 -- # test_exit_on_failed_rpc_init 00:07:11.893 11:37:42 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@62 -- # local spdk_pid=71261 00:07:11.893 11:37:42 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@61 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 00:07:11.893 11:37:42 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@63 -- # waitforlisten 71261 00:07:11.893 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:11.893 11:37:42 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@835 -- # '[' -z 71261 ']' 00:07:11.893 11:37:42 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:11.893 11:37:42 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@840 -- # local max_retries=100 00:07:11.893 11:37:42 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:11.893 11:37:42 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@844 -- # xtrace_disable 00:07:11.893 11:37:42 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@10 -- # set +x 00:07:12.162 [2024-11-28 11:37:42.066394] Starting SPDK v25.01-pre git sha1 35cd3e84d / DPDK 24.11.0-rc4 initialization... 00:07:12.162 [2024-11-28 11:37:42.066491] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid71261 ] 00:07:12.162 [2024-11-28 11:37:42.187155] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.11.0-rc4 is used. There is no support for it in SPDK. 
Enabled only for validation. 00:07:12.162 [2024-11-28 11:37:42.216230] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:12.162 [2024-11-28 11:37:42.271169] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:12.421 [2024-11-28 11:37:42.344839] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:07:12.681 11:37:42 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:07:12.681 11:37:42 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@868 -- # return 0 00:07:12.681 11:37:42 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@65 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:07:12.681 11:37:42 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@67 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x2 00:07:12.681 11:37:42 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@652 -- # local es=0 00:07:12.681 11:37:42 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@654 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x2 00:07:12.681 11:37:42 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@640 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:07:12.681 11:37:42 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:07:12.681 11:37:42 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@644 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:07:12.681 11:37:42 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:07:12.681 11:37:42 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@646 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:07:12.681 11:37:42 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:07:12.681 11:37:42 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@646 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:07:12.681 11:37:42 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@646 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt ]] 00:07:12.681 11:37:42 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@655 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x2 00:07:12.681 [2024-11-28 11:37:42.631276] Starting SPDK v25.01-pre git sha1 35cd3e84d / DPDK 24.11.0-rc4 initialization... 00:07:12.681 [2024-11-28 11:37:42.631393] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid71271 ] 00:07:12.681 [2024-11-28 11:37:42.757661] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.11.0-rc4 is used. There is no support for it in SPDK. Enabled only for validation. 00:07:12.681 [2024-11-28 11:37:42.782077] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:12.941 [2024-11-28 11:37:42.821483] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:07:12.941 [2024-11-28 11:37:42.821811] rpc.c: 180:_spdk_rpc_listen: *ERROR*: RPC Unix domain socket path /var/tmp/spdk.sock in use. Specify another. 
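exit_on_failed_rpc_init provokes exactly the failure logged above: a second spdk_tgt (core mask 0x2) is launched while the first one (mask 0x1, pid 71261) already owns /var/tmp/spdk.sock, so rpc.c refuses to listen, the app stops with a non-zero status, and the NOT wrapper records that as the expected outcome. The collision reproduces by hand with just the two launches the test performs:
# Both instances default to /var/tmp/spdk.sock, so the second must fail.
/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 &     # first instance owns the RPC socket
sleep 1
/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x2       # "RPC Unix domain socket path ... in use", exits non-zero
echo "second instance exited with $?"                        # the test requires a non-zero status here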
00:07:12.941 [2024-11-28 11:37:42.821831] rpc.c: 166:spdk_rpc_initialize: *ERROR*: Unable to start RPC service at /var/tmp/spdk.sock 00:07:12.941 [2024-11-28 11:37:42.821840] app.c:1064:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:07:12.941 11:37:42 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@655 -- # es=234 00:07:12.941 11:37:42 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:07:12.941 11:37:42 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@664 -- # es=106 00:07:12.941 11:37:42 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@665 -- # case "$es" in 00:07:12.941 11:37:42 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@672 -- # es=1 00:07:12.941 11:37:42 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:07:12.941 11:37:42 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@69 -- # trap - SIGINT SIGTERM EXIT 00:07:12.941 11:37:42 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@70 -- # killprocess 71261 00:07:12.941 11:37:42 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@954 -- # '[' -z 71261 ']' 00:07:12.941 11:37:42 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@958 -- # kill -0 71261 00:07:12.941 11:37:42 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@959 -- # uname 00:07:12.941 11:37:42 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:07:12.941 11:37:42 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 71261 00:07:12.941 killing process with pid 71261 00:07:12.941 11:37:42 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:07:12.941 11:37:42 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:07:12.941 11:37:42 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@972 -- # echo 'killing process with pid 71261' 00:07:12.941 11:37:42 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@973 -- # kill 71261 00:07:12.941 11:37:42 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@978 -- # wait 71261 00:07:13.200 00:07:13.200 real 0m1.281s 00:07:13.200 user 0m1.340s 00:07:13.200 sys 0m0.396s 00:07:13.200 11:37:43 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@1130 -- # xtrace_disable 00:07:13.200 11:37:43 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@10 -- # set +x 00:07:13.200 ************************************ 00:07:13.200 END TEST exit_on_failed_rpc_init 00:07:13.200 ************************************ 00:07:13.459 11:37:43 skip_rpc -- rpc/skip_rpc.sh@81 -- # rm /home/vagrant/spdk_repo/spdk/test/rpc/config.json 00:07:13.459 00:07:13.459 real 0m13.747s 00:07:13.459 user 0m12.703s 00:07:13.459 sys 0m1.591s 00:07:13.459 ************************************ 00:07:13.459 END TEST skip_rpc 00:07:13.459 ************************************ 00:07:13.459 11:37:43 skip_rpc -- common/autotest_common.sh@1130 -- # xtrace_disable 00:07:13.459 11:37:43 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:13.459 11:37:43 -- spdk/autotest.sh@158 -- # run_test rpc_client /home/vagrant/spdk_repo/spdk/test/rpc_client/rpc_client.sh 00:07:13.459 11:37:43 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:07:13.459 11:37:43 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:07:13.459 11:37:43 -- common/autotest_common.sh@10 -- # set +x 00:07:13.459 
************************************ 00:07:13.459 START TEST rpc_client 00:07:13.459 ************************************ 00:07:13.459 11:37:43 rpc_client -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/rpc_client/rpc_client.sh 00:07:13.459 * Looking for test storage... 00:07:13.459 * Found test storage at /home/vagrant/spdk_repo/spdk/test/rpc_client 00:07:13.459 11:37:43 rpc_client -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:07:13.459 11:37:43 rpc_client -- common/autotest_common.sh@1693 -- # lcov --version 00:07:13.459 11:37:43 rpc_client -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:07:13.459 11:37:43 rpc_client -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:07:13.459 11:37:43 rpc_client -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:07:13.459 11:37:43 rpc_client -- scripts/common.sh@333 -- # local ver1 ver1_l 00:07:13.459 11:37:43 rpc_client -- scripts/common.sh@334 -- # local ver2 ver2_l 00:07:13.459 11:37:43 rpc_client -- scripts/common.sh@336 -- # IFS=.-: 00:07:13.459 11:37:43 rpc_client -- scripts/common.sh@336 -- # read -ra ver1 00:07:13.459 11:37:43 rpc_client -- scripts/common.sh@337 -- # IFS=.-: 00:07:13.459 11:37:43 rpc_client -- scripts/common.sh@337 -- # read -ra ver2 00:07:13.459 11:37:43 rpc_client -- scripts/common.sh@338 -- # local 'op=<' 00:07:13.459 11:37:43 rpc_client -- scripts/common.sh@340 -- # ver1_l=2 00:07:13.459 11:37:43 rpc_client -- scripts/common.sh@341 -- # ver2_l=1 00:07:13.459 11:37:43 rpc_client -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:07:13.459 11:37:43 rpc_client -- scripts/common.sh@344 -- # case "$op" in 00:07:13.459 11:37:43 rpc_client -- scripts/common.sh@345 -- # : 1 00:07:13.459 11:37:43 rpc_client -- scripts/common.sh@364 -- # (( v = 0 )) 00:07:13.459 11:37:43 rpc_client -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:07:13.459 11:37:43 rpc_client -- scripts/common.sh@365 -- # decimal 1 00:07:13.459 11:37:43 rpc_client -- scripts/common.sh@353 -- # local d=1 00:07:13.459 11:37:43 rpc_client -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:07:13.459 11:37:43 rpc_client -- scripts/common.sh@355 -- # echo 1 00:07:13.459 11:37:43 rpc_client -- scripts/common.sh@365 -- # ver1[v]=1 00:07:13.459 11:37:43 rpc_client -- scripts/common.sh@366 -- # decimal 2 00:07:13.459 11:37:43 rpc_client -- scripts/common.sh@353 -- # local d=2 00:07:13.459 11:37:43 rpc_client -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:07:13.459 11:37:43 rpc_client -- scripts/common.sh@355 -- # echo 2 00:07:13.459 11:37:43 rpc_client -- scripts/common.sh@366 -- # ver2[v]=2 00:07:13.459 11:37:43 rpc_client -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:07:13.459 11:37:43 rpc_client -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:07:13.459 11:37:43 rpc_client -- scripts/common.sh@368 -- # return 0 00:07:13.459 11:37:43 rpc_client -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:07:13.459 11:37:43 rpc_client -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:07:13.459 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:13.459 --rc genhtml_branch_coverage=1 00:07:13.459 --rc genhtml_function_coverage=1 00:07:13.459 --rc genhtml_legend=1 00:07:13.459 --rc geninfo_all_blocks=1 00:07:13.459 --rc geninfo_unexecuted_blocks=1 00:07:13.459 00:07:13.459 ' 00:07:13.459 11:37:43 rpc_client -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:07:13.459 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:13.459 --rc genhtml_branch_coverage=1 00:07:13.459 --rc genhtml_function_coverage=1 00:07:13.459 --rc genhtml_legend=1 00:07:13.459 --rc geninfo_all_blocks=1 00:07:13.459 --rc geninfo_unexecuted_blocks=1 00:07:13.459 00:07:13.459 ' 00:07:13.459 11:37:43 rpc_client -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:07:13.459 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:13.459 --rc genhtml_branch_coverage=1 00:07:13.459 --rc genhtml_function_coverage=1 00:07:13.460 --rc genhtml_legend=1 00:07:13.460 --rc geninfo_all_blocks=1 00:07:13.460 --rc geninfo_unexecuted_blocks=1 00:07:13.460 00:07:13.460 ' 00:07:13.460 11:37:43 rpc_client -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:07:13.460 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:13.460 --rc genhtml_branch_coverage=1 00:07:13.460 --rc genhtml_function_coverage=1 00:07:13.460 --rc genhtml_legend=1 00:07:13.460 --rc geninfo_all_blocks=1 00:07:13.460 --rc geninfo_unexecuted_blocks=1 00:07:13.460 00:07:13.460 ' 00:07:13.460 11:37:43 rpc_client -- rpc_client/rpc_client.sh@10 -- # /home/vagrant/spdk_repo/spdk/test/rpc_client/rpc_client_test 00:07:13.718 OK 00:07:13.718 11:37:43 rpc_client -- rpc_client/rpc_client.sh@12 -- # trap - SIGINT SIGTERM EXIT 00:07:13.718 00:07:13.718 real 0m0.225s 00:07:13.718 user 0m0.142s 00:07:13.718 sys 0m0.091s 00:07:13.718 11:37:43 rpc_client -- common/autotest_common.sh@1130 -- # xtrace_disable 00:07:13.718 11:37:43 rpc_client -- common/autotest_common.sh@10 -- # set +x 00:07:13.718 ************************************ 00:07:13.718 END TEST rpc_client 00:07:13.718 ************************************ 00:07:13.718 11:37:43 -- spdk/autotest.sh@159 -- # run_test json_config /home/vagrant/spdk_repo/spdk/test/json_config/json_config.sh 00:07:13.718 11:37:43 -- 
common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:07:13.718 11:37:43 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:07:13.718 11:37:43 -- common/autotest_common.sh@10 -- # set +x 00:07:13.718 ************************************ 00:07:13.718 START TEST json_config 00:07:13.718 ************************************ 00:07:13.718 11:37:43 json_config -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/json_config/json_config.sh 00:07:13.718 11:37:43 json_config -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:07:13.718 11:37:43 json_config -- common/autotest_common.sh@1693 -- # lcov --version 00:07:13.718 11:37:43 json_config -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:07:13.718 11:37:43 json_config -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:07:13.718 11:37:43 json_config -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:07:13.718 11:37:43 json_config -- scripts/common.sh@333 -- # local ver1 ver1_l 00:07:13.718 11:37:43 json_config -- scripts/common.sh@334 -- # local ver2 ver2_l 00:07:13.718 11:37:43 json_config -- scripts/common.sh@336 -- # IFS=.-: 00:07:13.718 11:37:43 json_config -- scripts/common.sh@336 -- # read -ra ver1 00:07:13.718 11:37:43 json_config -- scripts/common.sh@337 -- # IFS=.-: 00:07:13.718 11:37:43 json_config -- scripts/common.sh@337 -- # read -ra ver2 00:07:13.718 11:37:43 json_config -- scripts/common.sh@338 -- # local 'op=<' 00:07:13.718 11:37:43 json_config -- scripts/common.sh@340 -- # ver1_l=2 00:07:13.718 11:37:43 json_config -- scripts/common.sh@341 -- # ver2_l=1 00:07:13.718 11:37:43 json_config -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:07:13.718 11:37:43 json_config -- scripts/common.sh@344 -- # case "$op" in 00:07:13.718 11:37:43 json_config -- scripts/common.sh@345 -- # : 1 00:07:13.718 11:37:43 json_config -- scripts/common.sh@364 -- # (( v = 0 )) 00:07:13.718 11:37:43 json_config -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:07:13.718 11:37:43 json_config -- scripts/common.sh@365 -- # decimal 1 00:07:13.718 11:37:43 json_config -- scripts/common.sh@353 -- # local d=1 00:07:13.718 11:37:43 json_config -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:07:13.718 11:37:43 json_config -- scripts/common.sh@355 -- # echo 1 00:07:13.718 11:37:43 json_config -- scripts/common.sh@365 -- # ver1[v]=1 00:07:13.718 11:37:43 json_config -- scripts/common.sh@366 -- # decimal 2 00:07:13.718 11:37:43 json_config -- scripts/common.sh@353 -- # local d=2 00:07:13.718 11:37:43 json_config -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:07:13.718 11:37:43 json_config -- scripts/common.sh@355 -- # echo 2 00:07:13.718 11:37:43 json_config -- scripts/common.sh@366 -- # ver2[v]=2 00:07:13.718 11:37:43 json_config -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:07:13.718 11:37:43 json_config -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:07:13.718 11:37:43 json_config -- scripts/common.sh@368 -- # return 0 00:07:13.719 11:37:43 json_config -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:07:13.719 11:37:43 json_config -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:07:13.719 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:13.719 --rc genhtml_branch_coverage=1 00:07:13.719 --rc genhtml_function_coverage=1 00:07:13.719 --rc genhtml_legend=1 00:07:13.719 --rc geninfo_all_blocks=1 00:07:13.719 --rc geninfo_unexecuted_blocks=1 00:07:13.719 00:07:13.719 ' 00:07:13.719 11:37:43 json_config -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:07:13.719 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:13.719 --rc genhtml_branch_coverage=1 00:07:13.719 --rc genhtml_function_coverage=1 00:07:13.719 --rc genhtml_legend=1 00:07:13.719 --rc geninfo_all_blocks=1 00:07:13.719 --rc geninfo_unexecuted_blocks=1 00:07:13.719 00:07:13.719 ' 00:07:13.719 11:37:43 json_config -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:07:13.719 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:13.719 --rc genhtml_branch_coverage=1 00:07:13.719 --rc genhtml_function_coverage=1 00:07:13.719 --rc genhtml_legend=1 00:07:13.719 --rc geninfo_all_blocks=1 00:07:13.719 --rc geninfo_unexecuted_blocks=1 00:07:13.719 00:07:13.719 ' 00:07:13.719 11:37:43 json_config -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:07:13.719 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:13.719 --rc genhtml_branch_coverage=1 00:07:13.719 --rc genhtml_function_coverage=1 00:07:13.719 --rc genhtml_legend=1 00:07:13.719 --rc geninfo_all_blocks=1 00:07:13.719 --rc geninfo_unexecuted_blocks=1 00:07:13.719 00:07:13.719 ' 00:07:13.719 11:37:43 json_config -- json_config/json_config.sh@8 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:07:13.719 11:37:43 json_config -- nvmf/common.sh@7 -- # uname -s 00:07:13.719 11:37:43 json_config -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:07:13.719 11:37:43 json_config -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:07:13.719 11:37:43 json_config -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:07:13.719 11:37:43 json_config -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:07:13.719 11:37:43 json_config -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:07:13.719 11:37:43 json_config -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:07:13.719 11:37:43 json_config -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:07:13.719 11:37:43 
json_config -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:07:13.719 11:37:43 json_config -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:07:13.719 11:37:43 json_config -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:07:13.978 11:37:43 json_config -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:f820f793-c892-4aa4-a8a4-5ed3fda41d6c 00:07:13.978 11:37:43 json_config -- nvmf/common.sh@18 -- # NVME_HOSTID=f820f793-c892-4aa4-a8a4-5ed3fda41d6c 00:07:13.978 11:37:43 json_config -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:07:13.978 11:37:43 json_config -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:07:13.978 11:37:43 json_config -- nvmf/common.sh@21 -- # NET_TYPE=phy-fallback 00:07:13.978 11:37:43 json_config -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:07:13.978 11:37:43 json_config -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:07:13.978 11:37:43 json_config -- scripts/common.sh@15 -- # shopt -s extglob 00:07:13.978 11:37:43 json_config -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:07:13.978 11:37:43 json_config -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:07:13.978 11:37:43 json_config -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:07:13.978 11:37:43 json_config -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:13.978 11:37:43 json_config -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:13.978 11:37:43 json_config -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:13.978 11:37:43 json_config -- paths/export.sh@5 -- # export PATH 00:07:13.978 11:37:43 json_config -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:13.978 11:37:43 json_config -- nvmf/common.sh@51 -- # : 0 00:07:13.978 11:37:43 json_config -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:07:13.978 11:37:43 json_config -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:07:13.978 11:37:43 json_config -- 
nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:07:13.978 11:37:43 json_config -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:07:13.978 11:37:43 json_config -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:07:13.978 11:37:43 json_config -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:07:13.978 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:07:13.978 11:37:43 json_config -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:07:13.978 11:37:43 json_config -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:07:13.978 11:37:43 json_config -- nvmf/common.sh@55 -- # have_pci_nics=0 00:07:13.978 11:37:43 json_config -- json_config/json_config.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/json_config/common.sh 00:07:13.978 11:37:43 json_config -- json_config/json_config.sh@11 -- # [[ 0 -eq 1 ]] 00:07:13.978 11:37:43 json_config -- json_config/json_config.sh@15 -- # [[ 0 -ne 1 ]] 00:07:13.978 11:37:43 json_config -- json_config/json_config.sh@15 -- # [[ 0 -eq 1 ]] 00:07:13.978 11:37:43 json_config -- json_config/json_config.sh@26 -- # (( SPDK_TEST_BLOCKDEV + SPDK_TEST_ISCSI + SPDK_TEST_NVMF + SPDK_TEST_VHOST + SPDK_TEST_VHOST_INIT + SPDK_TEST_RBD == 0 )) 00:07:13.978 11:37:43 json_config -- json_config/json_config.sh@31 -- # app_pid=(['target']='' ['initiator']='') 00:07:13.978 11:37:43 json_config -- json_config/json_config.sh@31 -- # declare -A app_pid 00:07:13.978 11:37:43 json_config -- json_config/json_config.sh@32 -- # app_socket=(['target']='/var/tmp/spdk_tgt.sock' ['initiator']='/var/tmp/spdk_initiator.sock') 00:07:13.978 11:37:43 json_config -- json_config/json_config.sh@32 -- # declare -A app_socket 00:07:13.978 11:37:43 json_config -- json_config/json_config.sh@33 -- # app_params=(['target']='-m 0x1 -s 1024' ['initiator']='-m 0x2 -g -u -s 1024') 00:07:13.978 11:37:43 json_config -- json_config/json_config.sh@33 -- # declare -A app_params 00:07:13.978 11:37:43 json_config -- json_config/json_config.sh@34 -- # configs_path=(['target']='/home/vagrant/spdk_repo/spdk/spdk_tgt_config.json' ['initiator']='/home/vagrant/spdk_repo/spdk/spdk_initiator_config.json') 00:07:13.978 11:37:43 json_config -- json_config/json_config.sh@34 -- # declare -A configs_path 00:07:13.978 11:37:43 json_config -- json_config/json_config.sh@40 -- # last_event_id=0 00:07:13.979 11:37:43 json_config -- json_config/json_config.sh@362 -- # trap 'on_error_exit "${FUNCNAME}" "${LINENO}"' ERR 00:07:13.979 11:37:43 json_config -- json_config/json_config.sh@363 -- # echo 'INFO: JSON configuration test init' 00:07:13.979 INFO: JSON configuration test init 00:07:13.979 11:37:43 json_config -- json_config/json_config.sh@364 -- # json_config_test_init 00:07:13.979 11:37:43 json_config -- json_config/json_config.sh@269 -- # timing_enter json_config_test_init 00:07:13.979 11:37:43 json_config -- common/autotest_common.sh@726 -- # xtrace_disable 00:07:13.979 11:37:43 json_config -- common/autotest_common.sh@10 -- # set +x 00:07:13.979 11:37:43 json_config -- json_config/json_config.sh@270 -- # timing_enter json_config_setup_target 00:07:13.979 11:37:43 json_config -- common/autotest_common.sh@726 -- # xtrace_disable 00:07:13.979 11:37:43 json_config -- common/autotest_common.sh@10 -- # set +x 00:07:13.979 11:37:43 json_config -- json_config/json_config.sh@272 -- # json_config_test_start_app target --wait-for-rpc 00:07:13.979 11:37:43 json_config -- json_config/common.sh@9 -- # local app=target 00:07:13.979 11:37:43 json_config -- json_config/common.sh@10 -- # shift 
00:07:13.979 11:37:43 json_config -- json_config/common.sh@12 -- # [[ -n 22 ]] 00:07:13.979 11:37:43 json_config -- json_config/common.sh@13 -- # [[ -z '' ]] 00:07:13.979 11:37:43 json_config -- json_config/common.sh@15 -- # local app_extra_params= 00:07:13.979 11:37:43 json_config -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:07:13.979 11:37:43 json_config -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:07:13.979 11:37:43 json_config -- json_config/common.sh@22 -- # app_pid["$app"]=71411 00:07:13.979 11:37:43 json_config -- json_config/common.sh@24 -- # echo 'Waiting for target to run...' 00:07:13.979 Waiting for target to run... 00:07:13.979 11:37:43 json_config -- json_config/common.sh@21 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 -s 1024 -r /var/tmp/spdk_tgt.sock --wait-for-rpc 00:07:13.979 11:37:43 json_config -- json_config/common.sh@25 -- # waitforlisten 71411 /var/tmp/spdk_tgt.sock 00:07:13.979 11:37:43 json_config -- common/autotest_common.sh@835 -- # '[' -z 71411 ']' 00:07:13.979 11:37:43 json_config -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk_tgt.sock 00:07:13.979 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock... 00:07:13.979 11:37:43 json_config -- common/autotest_common.sh@840 -- # local max_retries=100 00:07:13.979 11:37:43 json_config -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock...' 00:07:13.979 11:37:43 json_config -- common/autotest_common.sh@844 -- # xtrace_disable 00:07:13.979 11:37:43 json_config -- common/autotest_common.sh@10 -- # set +x 00:07:13.979 [2024-11-28 11:37:43.952016] Starting SPDK v25.01-pre git sha1 35cd3e84d / DPDK 24.11.0-rc4 initialization... 00:07:13.979 [2024-11-28 11:37:43.952137] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 -m 1024 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid71411 ] 00:07:14.547 [2024-11-28 11:37:44.381770] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.11.0-rc4 is used. There is no support for it in SPDK. Enabled only for validation. 
00:07:14.547 [2024-11-28 11:37:44.409089] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:14.547 [2024-11-28 11:37:44.440763] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:15.117 00:07:15.117 11:37:44 json_config -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:07:15.117 11:37:44 json_config -- common/autotest_common.sh@868 -- # return 0 00:07:15.117 11:37:44 json_config -- json_config/common.sh@26 -- # echo '' 00:07:15.117 11:37:44 json_config -- json_config/json_config.sh@276 -- # create_accel_config 00:07:15.117 11:37:44 json_config -- json_config/json_config.sh@100 -- # timing_enter create_accel_config 00:07:15.117 11:37:44 json_config -- common/autotest_common.sh@726 -- # xtrace_disable 00:07:15.117 11:37:44 json_config -- common/autotest_common.sh@10 -- # set +x 00:07:15.117 11:37:44 json_config -- json_config/json_config.sh@102 -- # [[ 0 -eq 1 ]] 00:07:15.117 11:37:44 json_config -- json_config/json_config.sh@108 -- # timing_exit create_accel_config 00:07:15.117 11:37:44 json_config -- common/autotest_common.sh@732 -- # xtrace_disable 00:07:15.117 11:37:44 json_config -- common/autotest_common.sh@10 -- # set +x 00:07:15.117 11:37:45 json_config -- json_config/json_config.sh@280 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh --json-with-subsystems 00:07:15.117 11:37:45 json_config -- json_config/json_config.sh@281 -- # tgt_rpc load_config 00:07:15.117 11:37:45 json_config -- json_config/common.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock load_config 00:07:15.377 [2024-11-28 11:37:45.342481] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:07:15.636 11:37:45 json_config -- json_config/json_config.sh@283 -- # tgt_check_notification_types 00:07:15.636 11:37:45 json_config -- json_config/json_config.sh@43 -- # timing_enter tgt_check_notification_types 00:07:15.636 11:37:45 json_config -- common/autotest_common.sh@726 -- # xtrace_disable 00:07:15.636 11:37:45 json_config -- common/autotest_common.sh@10 -- # set +x 00:07:15.636 11:37:45 json_config -- json_config/json_config.sh@45 -- # local ret=0 00:07:15.636 11:37:45 json_config -- json_config/json_config.sh@46 -- # enabled_types=('bdev_register' 'bdev_unregister') 00:07:15.636 11:37:45 json_config -- json_config/json_config.sh@46 -- # local enabled_types 00:07:15.636 11:37:45 json_config -- json_config/json_config.sh@47 -- # [[ y == y ]] 00:07:15.636 11:37:45 json_config -- json_config/json_config.sh@48 -- # enabled_types+=("fsdev_register" "fsdev_unregister") 00:07:15.636 11:37:45 json_config -- json_config/json_config.sh@51 -- # tgt_rpc notify_get_types 00:07:15.636 11:37:45 json_config -- json_config/common.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock notify_get_types 00:07:15.636 11:37:45 json_config -- json_config/json_config.sh@51 -- # jq -r '.[]' 00:07:15.896 11:37:45 json_config -- json_config/json_config.sh@51 -- # get_types=('fsdev_register' 'fsdev_unregister' 'bdev_register' 'bdev_unregister') 00:07:15.896 11:37:45 json_config -- json_config/json_config.sh@51 -- # local get_types 00:07:15.896 11:37:45 json_config -- json_config/json_config.sh@53 -- # local type_diff 00:07:15.896 11:37:45 json_config -- json_config/json_config.sh@54 -- # echo bdev_register bdev_unregister fsdev_register fsdev_unregister fsdev_register fsdev_unregister bdev_register bdev_unregister 00:07:15.896 11:37:45 json_config -- json_config/json_config.sh@54 -- # tr ' ' '\n' 00:07:15.896 
11:37:45 json_config -- json_config/json_config.sh@54 -- # sort 00:07:15.896 11:37:45 json_config -- json_config/json_config.sh@54 -- # uniq -u 00:07:15.896 11:37:45 json_config -- json_config/json_config.sh@54 -- # type_diff= 00:07:15.896 11:37:45 json_config -- json_config/json_config.sh@56 -- # [[ -n '' ]] 00:07:15.896 11:37:45 json_config -- json_config/json_config.sh@61 -- # timing_exit tgt_check_notification_types 00:07:15.896 11:37:45 json_config -- common/autotest_common.sh@732 -- # xtrace_disable 00:07:15.896 11:37:45 json_config -- common/autotest_common.sh@10 -- # set +x 00:07:15.896 11:37:45 json_config -- json_config/json_config.sh@62 -- # return 0 00:07:15.896 11:37:45 json_config -- json_config/json_config.sh@285 -- # [[ 0 -eq 1 ]] 00:07:15.896 11:37:45 json_config -- json_config/json_config.sh@289 -- # [[ 0 -eq 1 ]] 00:07:15.896 11:37:45 json_config -- json_config/json_config.sh@293 -- # [[ 0 -eq 1 ]] 00:07:15.896 11:37:45 json_config -- json_config/json_config.sh@297 -- # [[ 1 -eq 1 ]] 00:07:15.896 11:37:45 json_config -- json_config/json_config.sh@298 -- # create_nvmf_subsystem_config 00:07:15.896 11:37:45 json_config -- json_config/json_config.sh@237 -- # timing_enter create_nvmf_subsystem_config 00:07:15.896 11:37:45 json_config -- common/autotest_common.sh@726 -- # xtrace_disable 00:07:15.896 11:37:45 json_config -- common/autotest_common.sh@10 -- # set +x 00:07:15.896 11:37:45 json_config -- json_config/json_config.sh@239 -- # NVMF_FIRST_TARGET_IP=127.0.0.1 00:07:15.896 11:37:45 json_config -- json_config/json_config.sh@240 -- # [[ tcp == \r\d\m\a ]] 00:07:15.896 11:37:45 json_config -- json_config/json_config.sh@244 -- # [[ -z 127.0.0.1 ]] 00:07:15.896 11:37:45 json_config -- json_config/json_config.sh@249 -- # tgt_rpc bdev_malloc_create 8 512 --name MallocForNvmf0 00:07:15.896 11:37:45 json_config -- json_config/common.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_create 8 512 --name MallocForNvmf0 00:07:16.155 MallocForNvmf0 00:07:16.155 11:37:46 json_config -- json_config/json_config.sh@250 -- # tgt_rpc bdev_malloc_create 4 1024 --name MallocForNvmf1 00:07:16.155 11:37:46 json_config -- json_config/common.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_create 4 1024 --name MallocForNvmf1 00:07:16.415 MallocForNvmf1 00:07:16.415 11:37:46 json_config -- json_config/json_config.sh@252 -- # tgt_rpc nvmf_create_transport -t tcp -u 8192 -c 0 00:07:16.415 11:37:46 json_config -- json_config/common.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_create_transport -t tcp -u 8192 -c 0 00:07:16.674 [2024-11-28 11:37:46.661431] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:07:16.674 11:37:46 json_config -- json_config/json_config.sh@253 -- # tgt_rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:07:16.674 11:37:46 json_config -- json_config/common.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:07:16.934 11:37:46 json_config -- json_config/json_config.sh@254 -- # tgt_rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf0 00:07:16.934 11:37:46 json_config -- json_config/common.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf0 00:07:17.193 11:37:47 json_config -- 
json_config/json_config.sh@255 -- # tgt_rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf1 00:07:17.193 11:37:47 json_config -- json_config/common.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf1 00:07:17.451 11:37:47 json_config -- json_config/json_config.sh@256 -- # tgt_rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 127.0.0.1 -s 4420 00:07:17.451 11:37:47 json_config -- json_config/common.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 127.0.0.1 -s 4420 00:07:17.711 [2024-11-28 11:37:47.678123] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4420 *** 00:07:17.711 11:37:47 json_config -- json_config/json_config.sh@258 -- # timing_exit create_nvmf_subsystem_config 00:07:17.711 11:37:47 json_config -- common/autotest_common.sh@732 -- # xtrace_disable 00:07:17.711 11:37:47 json_config -- common/autotest_common.sh@10 -- # set +x 00:07:17.711 11:37:47 json_config -- json_config/json_config.sh@300 -- # timing_exit json_config_setup_target 00:07:17.711 11:37:47 json_config -- common/autotest_common.sh@732 -- # xtrace_disable 00:07:17.711 11:37:47 json_config -- common/autotest_common.sh@10 -- # set +x 00:07:17.711 11:37:47 json_config -- json_config/json_config.sh@302 -- # [[ 0 -eq 1 ]] 00:07:17.711 11:37:47 json_config -- json_config/json_config.sh@307 -- # tgt_rpc bdev_malloc_create 8 512 --name MallocBdevForConfigChangeCheck 00:07:17.711 11:37:47 json_config -- json_config/common.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_create 8 512 --name MallocBdevForConfigChangeCheck 00:07:17.970 MallocBdevForConfigChangeCheck 00:07:17.970 11:37:48 json_config -- json_config/json_config.sh@309 -- # timing_exit json_config_test_init 00:07:17.970 11:37:48 json_config -- common/autotest_common.sh@732 -- # xtrace_disable 00:07:17.970 11:37:48 json_config -- common/autotest_common.sh@10 -- # set +x 00:07:18.229 11:37:48 json_config -- json_config/json_config.sh@366 -- # tgt_rpc save_config 00:07:18.229 11:37:48 json_config -- json_config/common.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config 00:07:18.489 INFO: shutting down applications... 00:07:18.489 11:37:48 json_config -- json_config/json_config.sh@368 -- # echo 'INFO: shutting down applications...' 
00:07:18.489 11:37:48 json_config -- json_config/json_config.sh@369 -- # [[ 0 -eq 1 ]] 00:07:18.489 11:37:48 json_config -- json_config/json_config.sh@375 -- # json_config_clear target 00:07:18.489 11:37:48 json_config -- json_config/json_config.sh@339 -- # [[ -n 22 ]] 00:07:18.489 11:37:48 json_config -- json_config/json_config.sh@340 -- # /home/vagrant/spdk_repo/spdk/test/json_config/clear_config.py -s /var/tmp/spdk_tgt.sock clear_config 00:07:19.059 Calling clear_iscsi_subsystem 00:07:19.059 Calling clear_nvmf_subsystem 00:07:19.059 Calling clear_nbd_subsystem 00:07:19.059 Calling clear_ublk_subsystem 00:07:19.059 Calling clear_vhost_blk_subsystem 00:07:19.059 Calling clear_vhost_scsi_subsystem 00:07:19.059 Calling clear_bdev_subsystem 00:07:19.059 11:37:48 json_config -- json_config/json_config.sh@344 -- # local config_filter=/home/vagrant/spdk_repo/spdk/test/json_config/config_filter.py 00:07:19.059 11:37:48 json_config -- json_config/json_config.sh@350 -- # count=100 00:07:19.059 11:37:48 json_config -- json_config/json_config.sh@351 -- # '[' 100 -gt 0 ']' 00:07:19.059 11:37:48 json_config -- json_config/json_config.sh@352 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config 00:07:19.059 11:37:48 json_config -- json_config/json_config.sh@352 -- # /home/vagrant/spdk_repo/spdk/test/json_config/config_filter.py -method delete_global_parameters 00:07:19.059 11:37:48 json_config -- json_config/json_config.sh@352 -- # /home/vagrant/spdk_repo/spdk/test/json_config/config_filter.py -method check_empty 00:07:19.319 11:37:49 json_config -- json_config/json_config.sh@352 -- # break 00:07:19.319 11:37:49 json_config -- json_config/json_config.sh@357 -- # '[' 100 -eq 0 ']' 00:07:19.319 11:37:49 json_config -- json_config/json_config.sh@376 -- # json_config_test_shutdown_app target 00:07:19.319 11:37:49 json_config -- json_config/common.sh@31 -- # local app=target 00:07:19.319 11:37:49 json_config -- json_config/common.sh@34 -- # [[ -n 22 ]] 00:07:19.319 11:37:49 json_config -- json_config/common.sh@35 -- # [[ -n 71411 ]] 00:07:19.319 11:37:49 json_config -- json_config/common.sh@38 -- # kill -SIGINT 71411 00:07:19.319 11:37:49 json_config -- json_config/common.sh@40 -- # (( i = 0 )) 00:07:19.319 11:37:49 json_config -- json_config/common.sh@40 -- # (( i < 30 )) 00:07:19.319 11:37:49 json_config -- json_config/common.sh@41 -- # kill -0 71411 00:07:19.319 11:37:49 json_config -- json_config/common.sh@45 -- # sleep 0.5 00:07:19.889 11:37:49 json_config -- json_config/common.sh@40 -- # (( i++ )) 00:07:19.889 11:37:49 json_config -- json_config/common.sh@40 -- # (( i < 30 )) 00:07:19.889 11:37:49 json_config -- json_config/common.sh@41 -- # kill -0 71411 00:07:19.889 11:37:49 json_config -- json_config/common.sh@42 -- # app_pid["$app"]= 00:07:19.889 11:37:49 json_config -- json_config/common.sh@43 -- # break 00:07:19.889 11:37:49 json_config -- json_config/common.sh@48 -- # [[ -n '' ]] 00:07:19.889 SPDK target shutdown done 00:07:19.889 11:37:49 json_config -- json_config/common.sh@53 -- # echo 'SPDK target shutdown done' 00:07:19.889 INFO: relaunching applications... 00:07:19.889 11:37:49 json_config -- json_config/json_config.sh@378 -- # echo 'INFO: relaunching applications...' 
00:07:19.889 11:37:49 json_config -- json_config/json_config.sh@379 -- # json_config_test_start_app target --json /home/vagrant/spdk_repo/spdk/spdk_tgt_config.json 00:07:19.889 11:37:49 json_config -- json_config/common.sh@9 -- # local app=target 00:07:19.889 11:37:49 json_config -- json_config/common.sh@10 -- # shift 00:07:19.889 11:37:49 json_config -- json_config/common.sh@12 -- # [[ -n 22 ]] 00:07:19.889 11:37:49 json_config -- json_config/common.sh@13 -- # [[ -z '' ]] 00:07:19.889 11:37:49 json_config -- json_config/common.sh@15 -- # local app_extra_params= 00:07:19.889 11:37:49 json_config -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:07:19.889 11:37:49 json_config -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:07:19.889 11:37:49 json_config -- json_config/common.sh@22 -- # app_pid["$app"]=71606 00:07:19.889 11:37:49 json_config -- json_config/common.sh@21 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 -s 1024 -r /var/tmp/spdk_tgt.sock --json /home/vagrant/spdk_repo/spdk/spdk_tgt_config.json 00:07:19.889 Waiting for target to run... 00:07:19.889 11:37:49 json_config -- json_config/common.sh@24 -- # echo 'Waiting for target to run...' 00:07:19.889 11:37:49 json_config -- json_config/common.sh@25 -- # waitforlisten 71606 /var/tmp/spdk_tgt.sock 00:07:19.890 11:37:49 json_config -- common/autotest_common.sh@835 -- # '[' -z 71606 ']' 00:07:19.890 11:37:49 json_config -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk_tgt.sock 00:07:19.890 11:37:49 json_config -- common/autotest_common.sh@840 -- # local max_retries=100 00:07:19.890 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock... 00:07:19.890 11:37:49 json_config -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock...' 00:07:19.890 11:37:49 json_config -- common/autotest_common.sh@844 -- # xtrace_disable 00:07:19.890 11:37:49 json_config -- common/autotest_common.sh@10 -- # set +x 00:07:19.890 [2024-11-28 11:37:49.940053] Starting SPDK v25.01-pre git sha1 35cd3e84d / DPDK 24.11.0-rc4 initialization... 00:07:19.890 [2024-11-28 11:37:49.940158] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 -m 1024 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid71606 ] 00:07:20.459 [2024-11-28 11:37:50.343150] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.11.0-rc4 is used. There is no support for it in SPDK. Enabled only for validation. 00:07:20.459 [2024-11-28 11:37:50.373828] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:20.459 [2024-11-28 11:37:50.415376] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:20.459 [2024-11-28 11:37:50.552169] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:07:20.717 [2024-11-28 11:37:50.765496] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:07:20.717 [2024-11-28 11:37:50.797593] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4420 *** 00:07:20.977 00:07:20.977 INFO: Checking if target configuration is the same... 
00:07:20.977 11:37:50 json_config -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:07:20.977 11:37:50 json_config -- common/autotest_common.sh@868 -- # return 0 00:07:20.977 11:37:50 json_config -- json_config/common.sh@26 -- # echo '' 00:07:20.977 11:37:50 json_config -- json_config/json_config.sh@380 -- # [[ 0 -eq 1 ]] 00:07:20.977 11:37:50 json_config -- json_config/json_config.sh@384 -- # echo 'INFO: Checking if target configuration is the same...' 00:07:20.977 11:37:50 json_config -- json_config/json_config.sh@385 -- # /home/vagrant/spdk_repo/spdk/test/json_config/json_diff.sh /dev/fd/62 /home/vagrant/spdk_repo/spdk/spdk_tgt_config.json 00:07:20.977 11:37:50 json_config -- json_config/json_config.sh@385 -- # tgt_rpc save_config 00:07:20.977 11:37:50 json_config -- json_config/common.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config 00:07:20.977 + '[' 2 -ne 2 ']' 00:07:20.977 +++ dirname /home/vagrant/spdk_repo/spdk/test/json_config/json_diff.sh 00:07:20.977 ++ readlink -f /home/vagrant/spdk_repo/spdk/test/json_config/../.. 00:07:20.977 + rootdir=/home/vagrant/spdk_repo/spdk 00:07:20.977 +++ basename /dev/fd/62 00:07:20.977 ++ mktemp /tmp/62.XXX 00:07:20.977 + tmp_file_1=/tmp/62.7KH 00:07:20.977 +++ basename /home/vagrant/spdk_repo/spdk/spdk_tgt_config.json 00:07:20.977 ++ mktemp /tmp/spdk_tgt_config.json.XXX 00:07:20.977 + tmp_file_2=/tmp/spdk_tgt_config.json.31s 00:07:20.977 + ret=0 00:07:20.977 + /home/vagrant/spdk_repo/spdk/test/json_config/config_filter.py -method sort 00:07:21.544 + /home/vagrant/spdk_repo/spdk/test/json_config/config_filter.py -method sort 00:07:21.544 + diff -u /tmp/62.7KH /tmp/spdk_tgt_config.json.31s 00:07:21.544 INFO: JSON config files are the same 00:07:21.544 + echo 'INFO: JSON config files are the same' 00:07:21.544 + rm /tmp/62.7KH /tmp/spdk_tgt_config.json.31s 00:07:21.544 + exit 0 00:07:21.544 INFO: changing configuration and checking if this can be detected... 00:07:21.544 11:37:51 json_config -- json_config/json_config.sh@386 -- # [[ 0 -eq 1 ]] 00:07:21.544 11:37:51 json_config -- json_config/json_config.sh@391 -- # echo 'INFO: changing configuration and checking if this can be detected...' 00:07:21.544 11:37:51 json_config -- json_config/json_config.sh@393 -- # tgt_rpc bdev_malloc_delete MallocBdevForConfigChangeCheck 00:07:21.544 11:37:51 json_config -- json_config/common.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_delete MallocBdevForConfigChangeCheck 00:07:21.803 11:37:51 json_config -- json_config/json_config.sh@394 -- # /home/vagrant/spdk_repo/spdk/test/json_config/json_diff.sh /dev/fd/62 /home/vagrant/spdk_repo/spdk/spdk_tgt_config.json 00:07:21.803 11:37:51 json_config -- json_config/json_config.sh@394 -- # tgt_rpc save_config 00:07:21.803 11:37:51 json_config -- json_config/common.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config 00:07:21.803 + '[' 2 -ne 2 ']' 00:07:21.803 +++ dirname /home/vagrant/spdk_repo/spdk/test/json_config/json_diff.sh 00:07:21.803 ++ readlink -f /home/vagrant/spdk_repo/spdk/test/json_config/../.. 
00:07:21.803 + rootdir=/home/vagrant/spdk_repo/spdk 00:07:21.803 +++ basename /dev/fd/62 00:07:21.803 ++ mktemp /tmp/62.XXX 00:07:21.803 + tmp_file_1=/tmp/62.k0J 00:07:21.804 +++ basename /home/vagrant/spdk_repo/spdk/spdk_tgt_config.json 00:07:21.804 ++ mktemp /tmp/spdk_tgt_config.json.XXX 00:07:21.804 + tmp_file_2=/tmp/spdk_tgt_config.json.J5T 00:07:21.804 + ret=0 00:07:21.804 + /home/vagrant/spdk_repo/spdk/test/json_config/config_filter.py -method sort 00:07:22.063 + /home/vagrant/spdk_repo/spdk/test/json_config/config_filter.py -method sort 00:07:22.323 + diff -u /tmp/62.k0J /tmp/spdk_tgt_config.json.J5T 00:07:22.323 + ret=1 00:07:22.323 + echo '=== Start of file: /tmp/62.k0J ===' 00:07:22.323 + cat /tmp/62.k0J 00:07:22.323 + echo '=== End of file: /tmp/62.k0J ===' 00:07:22.323 + echo '' 00:07:22.323 + echo '=== Start of file: /tmp/spdk_tgt_config.json.J5T ===' 00:07:22.323 + cat /tmp/spdk_tgt_config.json.J5T 00:07:22.323 + echo '=== End of file: /tmp/spdk_tgt_config.json.J5T ===' 00:07:22.323 + echo '' 00:07:22.323 + rm /tmp/62.k0J /tmp/spdk_tgt_config.json.J5T 00:07:22.323 + exit 1 00:07:22.323 INFO: configuration change detected. 00:07:22.323 11:37:52 json_config -- json_config/json_config.sh@398 -- # echo 'INFO: configuration change detected.' 00:07:22.323 11:37:52 json_config -- json_config/json_config.sh@401 -- # json_config_test_fini 00:07:22.323 11:37:52 json_config -- json_config/json_config.sh@313 -- # timing_enter json_config_test_fini 00:07:22.323 11:37:52 json_config -- common/autotest_common.sh@726 -- # xtrace_disable 00:07:22.323 11:37:52 json_config -- common/autotest_common.sh@10 -- # set +x 00:07:22.323 11:37:52 json_config -- json_config/json_config.sh@314 -- # local ret=0 00:07:22.323 11:37:52 json_config -- json_config/json_config.sh@316 -- # [[ -n '' ]] 00:07:22.323 11:37:52 json_config -- json_config/json_config.sh@324 -- # [[ -n 71606 ]] 00:07:22.323 11:37:52 json_config -- json_config/json_config.sh@327 -- # cleanup_bdev_subsystem_config 00:07:22.323 11:37:52 json_config -- json_config/json_config.sh@191 -- # timing_enter cleanup_bdev_subsystem_config 00:07:22.323 11:37:52 json_config -- common/autotest_common.sh@726 -- # xtrace_disable 00:07:22.323 11:37:52 json_config -- common/autotest_common.sh@10 -- # set +x 00:07:22.323 11:37:52 json_config -- json_config/json_config.sh@193 -- # [[ 0 -eq 1 ]] 00:07:22.323 11:37:52 json_config -- json_config/json_config.sh@200 -- # uname -s 00:07:22.323 11:37:52 json_config -- json_config/json_config.sh@200 -- # [[ Linux = Linux ]] 00:07:22.323 11:37:52 json_config -- json_config/json_config.sh@201 -- # rm -f /sample_aio 00:07:22.323 11:37:52 json_config -- json_config/json_config.sh@204 -- # [[ 0 -eq 1 ]] 00:07:22.323 11:37:52 json_config -- json_config/json_config.sh@208 -- # timing_exit cleanup_bdev_subsystem_config 00:07:22.323 11:37:52 json_config -- common/autotest_common.sh@732 -- # xtrace_disable 00:07:22.323 11:37:52 json_config -- common/autotest_common.sh@10 -- # set +x 00:07:22.323 11:37:52 json_config -- json_config/json_config.sh@330 -- # killprocess 71606 00:07:22.323 11:37:52 json_config -- common/autotest_common.sh@954 -- # '[' -z 71606 ']' 00:07:22.323 11:37:52 json_config -- common/autotest_common.sh@958 -- # kill -0 71606 00:07:22.323 11:37:52 json_config -- common/autotest_common.sh@959 -- # uname 00:07:22.323 11:37:52 json_config -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:07:22.323 11:37:52 json_config -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 71606 00:07:22.323 
killing process with pid 71606 00:07:22.323 11:37:52 json_config -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:07:22.323 11:37:52 json_config -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:07:22.323 11:37:52 json_config -- common/autotest_common.sh@972 -- # echo 'killing process with pid 71606' 00:07:22.323 11:37:52 json_config -- common/autotest_common.sh@973 -- # kill 71606 00:07:22.323 11:37:52 json_config -- common/autotest_common.sh@978 -- # wait 71606 00:07:22.583 11:37:52 json_config -- json_config/json_config.sh@333 -- # rm -f /home/vagrant/spdk_repo/spdk/spdk_initiator_config.json /home/vagrant/spdk_repo/spdk/spdk_tgt_config.json 00:07:22.583 11:37:52 json_config -- json_config/json_config.sh@334 -- # timing_exit json_config_test_fini 00:07:22.583 11:37:52 json_config -- common/autotest_common.sh@732 -- # xtrace_disable 00:07:22.583 11:37:52 json_config -- common/autotest_common.sh@10 -- # set +x 00:07:22.583 INFO: Success 00:07:22.583 11:37:52 json_config -- json_config/json_config.sh@335 -- # return 0 00:07:22.583 11:37:52 json_config -- json_config/json_config.sh@403 -- # echo 'INFO: Success' 00:07:22.583 ************************************ 00:07:22.583 END TEST json_config 00:07:22.583 ************************************ 00:07:22.583 00:07:22.583 real 0m8.938s 00:07:22.583 user 0m12.767s 00:07:22.583 sys 0m1.911s 00:07:22.583 11:37:52 json_config -- common/autotest_common.sh@1130 -- # xtrace_disable 00:07:22.583 11:37:52 json_config -- common/autotest_common.sh@10 -- # set +x 00:07:22.583 11:37:52 -- spdk/autotest.sh@160 -- # run_test json_config_extra_key /home/vagrant/spdk_repo/spdk/test/json_config/json_config_extra_key.sh 00:07:22.583 11:37:52 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:07:22.583 11:37:52 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:07:22.583 11:37:52 -- common/autotest_common.sh@10 -- # set +x 00:07:22.583 ************************************ 00:07:22.583 START TEST json_config_extra_key 00:07:22.583 ************************************ 00:07:22.583 11:37:52 json_config_extra_key -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/json_config/json_config_extra_key.sh 00:07:22.583 11:37:52 json_config_extra_key -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:07:22.845 11:37:52 json_config_extra_key -- common/autotest_common.sh@1693 -- # lcov --version 00:07:22.845 11:37:52 json_config_extra_key -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:07:22.845 11:37:52 json_config_extra_key -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:07:22.845 11:37:52 json_config_extra_key -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:07:22.845 11:37:52 json_config_extra_key -- scripts/common.sh@333 -- # local ver1 ver1_l 00:07:22.845 11:37:52 json_config_extra_key -- scripts/common.sh@334 -- # local ver2 ver2_l 00:07:22.845 11:37:52 json_config_extra_key -- scripts/common.sh@336 -- # IFS=.-: 00:07:22.845 11:37:52 json_config_extra_key -- scripts/common.sh@336 -- # read -ra ver1 00:07:22.845 11:37:52 json_config_extra_key -- scripts/common.sh@337 -- # IFS=.-: 00:07:22.845 11:37:52 json_config_extra_key -- scripts/common.sh@337 -- # read -ra ver2 00:07:22.845 11:37:52 json_config_extra_key -- scripts/common.sh@338 -- # local 'op=<' 00:07:22.845 11:37:52 json_config_extra_key -- scripts/common.sh@340 -- # ver1_l=2 00:07:22.845 11:37:52 json_config_extra_key -- scripts/common.sh@341 -- # ver2_l=1 00:07:22.845 11:37:52 json_config_extra_key -- 
scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:07:22.845 11:37:52 json_config_extra_key -- scripts/common.sh@344 -- # case "$op" in 00:07:22.845 11:37:52 json_config_extra_key -- scripts/common.sh@345 -- # : 1 00:07:22.845 11:37:52 json_config_extra_key -- scripts/common.sh@364 -- # (( v = 0 )) 00:07:22.845 11:37:52 json_config_extra_key -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:07:22.845 11:37:52 json_config_extra_key -- scripts/common.sh@365 -- # decimal 1 00:07:22.845 11:37:52 json_config_extra_key -- scripts/common.sh@353 -- # local d=1 00:07:22.845 11:37:52 json_config_extra_key -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:07:22.845 11:37:52 json_config_extra_key -- scripts/common.sh@355 -- # echo 1 00:07:22.845 11:37:52 json_config_extra_key -- scripts/common.sh@365 -- # ver1[v]=1 00:07:22.845 11:37:52 json_config_extra_key -- scripts/common.sh@366 -- # decimal 2 00:07:22.845 11:37:52 json_config_extra_key -- scripts/common.sh@353 -- # local d=2 00:07:22.845 11:37:52 json_config_extra_key -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:07:22.845 11:37:52 json_config_extra_key -- scripts/common.sh@355 -- # echo 2 00:07:22.845 11:37:52 json_config_extra_key -- scripts/common.sh@366 -- # ver2[v]=2 00:07:22.845 11:37:52 json_config_extra_key -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:07:22.845 11:37:52 json_config_extra_key -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:07:22.845 11:37:52 json_config_extra_key -- scripts/common.sh@368 -- # return 0 00:07:22.845 11:37:52 json_config_extra_key -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:07:22.845 11:37:52 json_config_extra_key -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:07:22.845 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:22.845 --rc genhtml_branch_coverage=1 00:07:22.845 --rc genhtml_function_coverage=1 00:07:22.845 --rc genhtml_legend=1 00:07:22.845 --rc geninfo_all_blocks=1 00:07:22.845 --rc geninfo_unexecuted_blocks=1 00:07:22.845 00:07:22.845 ' 00:07:22.845 11:37:52 json_config_extra_key -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:07:22.845 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:22.845 --rc genhtml_branch_coverage=1 00:07:22.845 --rc genhtml_function_coverage=1 00:07:22.845 --rc genhtml_legend=1 00:07:22.845 --rc geninfo_all_blocks=1 00:07:22.845 --rc geninfo_unexecuted_blocks=1 00:07:22.845 00:07:22.845 ' 00:07:22.845 11:37:52 json_config_extra_key -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:07:22.845 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:22.845 --rc genhtml_branch_coverage=1 00:07:22.845 --rc genhtml_function_coverage=1 00:07:22.845 --rc genhtml_legend=1 00:07:22.845 --rc geninfo_all_blocks=1 00:07:22.845 --rc geninfo_unexecuted_blocks=1 00:07:22.845 00:07:22.845 ' 00:07:22.845 11:37:52 json_config_extra_key -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:07:22.845 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:22.845 --rc genhtml_branch_coverage=1 00:07:22.845 --rc genhtml_function_coverage=1 00:07:22.845 --rc genhtml_legend=1 00:07:22.845 --rc geninfo_all_blocks=1 00:07:22.845 --rc geninfo_unexecuted_blocks=1 00:07:22.845 00:07:22.845 ' 00:07:22.845 11:37:52 json_config_extra_key -- json_config/json_config_extra_key.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:07:22.845 11:37:52 json_config_extra_key -- nvmf/common.sh@7 -- # 
uname -s 00:07:22.845 11:37:52 json_config_extra_key -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:07:22.845 11:37:52 json_config_extra_key -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:07:22.845 11:37:52 json_config_extra_key -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:07:22.845 11:37:52 json_config_extra_key -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:07:22.845 11:37:52 json_config_extra_key -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:07:22.845 11:37:52 json_config_extra_key -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:07:22.845 11:37:52 json_config_extra_key -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:07:22.845 11:37:52 json_config_extra_key -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:07:22.845 11:37:52 json_config_extra_key -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:07:22.845 11:37:52 json_config_extra_key -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:07:22.845 11:37:52 json_config_extra_key -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:f820f793-c892-4aa4-a8a4-5ed3fda41d6c 00:07:22.845 11:37:52 json_config_extra_key -- nvmf/common.sh@18 -- # NVME_HOSTID=f820f793-c892-4aa4-a8a4-5ed3fda41d6c 00:07:22.845 11:37:52 json_config_extra_key -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:07:22.845 11:37:52 json_config_extra_key -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:07:22.845 11:37:52 json_config_extra_key -- nvmf/common.sh@21 -- # NET_TYPE=phy-fallback 00:07:22.845 11:37:52 json_config_extra_key -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:07:22.845 11:37:52 json_config_extra_key -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:07:22.845 11:37:52 json_config_extra_key -- scripts/common.sh@15 -- # shopt -s extglob 00:07:22.845 11:37:52 json_config_extra_key -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:07:22.845 11:37:52 json_config_extra_key -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:07:22.845 11:37:52 json_config_extra_key -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:07:22.845 11:37:52 json_config_extra_key -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:22.845 11:37:52 json_config_extra_key -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:22.845 11:37:52 json_config_extra_key -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:22.845 11:37:52 json_config_extra_key -- paths/export.sh@5 -- # export PATH 00:07:22.846 11:37:52 json_config_extra_key -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:22.846 11:37:52 json_config_extra_key -- nvmf/common.sh@51 -- # : 0 00:07:22.846 11:37:52 json_config_extra_key -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:07:22.846 11:37:52 json_config_extra_key -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:07:22.846 11:37:52 json_config_extra_key -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:07:22.846 11:37:52 json_config_extra_key -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:07:22.846 11:37:52 json_config_extra_key -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:07:22.846 11:37:52 json_config_extra_key -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:07:22.846 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:07:22.846 11:37:52 json_config_extra_key -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:07:22.846 11:37:52 json_config_extra_key -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:07:22.846 11:37:52 json_config_extra_key -- nvmf/common.sh@55 -- # have_pci_nics=0 00:07:22.846 11:37:52 json_config_extra_key -- json_config/json_config_extra_key.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/json_config/common.sh 00:07:22.846 11:37:52 json_config_extra_key -- json_config/json_config_extra_key.sh@17 -- # app_pid=(['target']='') 00:07:22.846 11:37:52 json_config_extra_key -- json_config/json_config_extra_key.sh@17 -- # declare -A app_pid 00:07:22.846 11:37:52 json_config_extra_key -- json_config/json_config_extra_key.sh@18 -- # app_socket=(['target']='/var/tmp/spdk_tgt.sock') 00:07:22.846 11:37:52 json_config_extra_key -- json_config/json_config_extra_key.sh@18 -- # declare -A app_socket 00:07:22.846 11:37:52 json_config_extra_key -- json_config/json_config_extra_key.sh@19 -- # app_params=(['target']='-m 0x1 -s 1024') 00:07:22.846 11:37:52 json_config_extra_key -- json_config/json_config_extra_key.sh@19 -- # declare -A app_params 00:07:22.846 11:37:52 json_config_extra_key -- json_config/json_config_extra_key.sh@20 -- # configs_path=(['target']='/home/vagrant/spdk_repo/spdk/test/json_config/extra_key.json') 00:07:22.846 11:37:52 json_config_extra_key -- json_config/json_config_extra_key.sh@20 -- # declare -A configs_path 00:07:22.846 11:37:52 json_config_extra_key -- json_config/json_config_extra_key.sh@22 -- # trap 'on_error_exit "${FUNCNAME}" "${LINENO}"' ERR 00:07:22.846 11:37:52 json_config_extra_key -- json_config/json_config_extra_key.sh@24 -- # echo 'INFO: launching applications...' 00:07:22.846 INFO: launching applications... 
00:07:22.846 11:37:52 json_config_extra_key -- json_config/json_config_extra_key.sh@25 -- # json_config_test_start_app target --json /home/vagrant/spdk_repo/spdk/test/json_config/extra_key.json 00:07:22.846 11:37:52 json_config_extra_key -- json_config/common.sh@9 -- # local app=target 00:07:22.846 11:37:52 json_config_extra_key -- json_config/common.sh@10 -- # shift 00:07:22.846 11:37:52 json_config_extra_key -- json_config/common.sh@12 -- # [[ -n 22 ]] 00:07:22.846 11:37:52 json_config_extra_key -- json_config/common.sh@13 -- # [[ -z '' ]] 00:07:22.846 11:37:52 json_config_extra_key -- json_config/common.sh@15 -- # local app_extra_params= 00:07:22.846 11:37:52 json_config_extra_key -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:07:22.846 11:37:52 json_config_extra_key -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:07:22.846 11:37:52 json_config_extra_key -- json_config/common.sh@22 -- # app_pid["$app"]=71760 00:07:22.846 11:37:52 json_config_extra_key -- json_config/common.sh@24 -- # echo 'Waiting for target to run...' 00:07:22.846 Waiting for target to run... 00:07:22.846 11:37:52 json_config_extra_key -- json_config/common.sh@21 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 -s 1024 -r /var/tmp/spdk_tgt.sock --json /home/vagrant/spdk_repo/spdk/test/json_config/extra_key.json 00:07:22.846 11:37:52 json_config_extra_key -- json_config/common.sh@25 -- # waitforlisten 71760 /var/tmp/spdk_tgt.sock 00:07:22.846 11:37:52 json_config_extra_key -- common/autotest_common.sh@835 -- # '[' -z 71760 ']' 00:07:22.846 11:37:52 json_config_extra_key -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk_tgt.sock 00:07:22.846 11:37:52 json_config_extra_key -- common/autotest_common.sh@840 -- # local max_retries=100 00:07:22.846 11:37:52 json_config_extra_key -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock...' 00:07:22.846 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock... 00:07:22.846 11:37:52 json_config_extra_key -- common/autotest_common.sh@844 -- # xtrace_disable 00:07:22.846 11:37:52 json_config_extra_key -- common/autotest_common.sh@10 -- # set +x 00:07:22.846 [2024-11-28 11:37:52.924755] Starting SPDK v25.01-pre git sha1 35cd3e84d / DPDK 24.11.0-rc4 initialization... 00:07:22.846 [2024-11-28 11:37:52.925098] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 -m 1024 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid71760 ] 00:07:23.415 [2024-11-28 11:37:53.347823] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.11.0-rc4 is used. There is no support for it in SPDK. Enabled only for validation. 00:07:23.415 [2024-11-28 11:37:53.376158] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:23.416 [2024-11-28 11:37:53.412813] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:23.416 [2024-11-28 11:37:53.444007] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:07:23.983 00:07:23.983 INFO: shutting down applications... 
00:07:23.983 11:37:53 json_config_extra_key -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:07:23.983 11:37:53 json_config_extra_key -- common/autotest_common.sh@868 -- # return 0 00:07:23.983 11:37:53 json_config_extra_key -- json_config/common.sh@26 -- # echo '' 00:07:23.983 11:37:53 json_config_extra_key -- json_config/json_config_extra_key.sh@27 -- # echo 'INFO: shutting down applications...' 00:07:23.983 11:37:53 json_config_extra_key -- json_config/json_config_extra_key.sh@28 -- # json_config_test_shutdown_app target 00:07:23.983 11:37:53 json_config_extra_key -- json_config/common.sh@31 -- # local app=target 00:07:23.983 11:37:53 json_config_extra_key -- json_config/common.sh@34 -- # [[ -n 22 ]] 00:07:23.983 11:37:53 json_config_extra_key -- json_config/common.sh@35 -- # [[ -n 71760 ]] 00:07:23.983 11:37:53 json_config_extra_key -- json_config/common.sh@38 -- # kill -SIGINT 71760 00:07:23.983 11:37:53 json_config_extra_key -- json_config/common.sh@40 -- # (( i = 0 )) 00:07:23.983 11:37:53 json_config_extra_key -- json_config/common.sh@40 -- # (( i < 30 )) 00:07:23.983 11:37:53 json_config_extra_key -- json_config/common.sh@41 -- # kill -0 71760 00:07:23.983 11:37:53 json_config_extra_key -- json_config/common.sh@45 -- # sleep 0.5 00:07:24.551 11:37:54 json_config_extra_key -- json_config/common.sh@40 -- # (( i++ )) 00:07:24.551 11:37:54 json_config_extra_key -- json_config/common.sh@40 -- # (( i < 30 )) 00:07:24.552 11:37:54 json_config_extra_key -- json_config/common.sh@41 -- # kill -0 71760 00:07:24.552 11:37:54 json_config_extra_key -- json_config/common.sh@42 -- # app_pid["$app"]= 00:07:24.552 11:37:54 json_config_extra_key -- json_config/common.sh@43 -- # break 00:07:24.552 11:37:54 json_config_extra_key -- json_config/common.sh@48 -- # [[ -n '' ]] 00:07:24.552 11:37:54 json_config_extra_key -- json_config/common.sh@53 -- # echo 'SPDK target shutdown done' 00:07:24.552 SPDK target shutdown done 00:07:24.552 11:37:54 json_config_extra_key -- json_config/json_config_extra_key.sh@30 -- # echo Success 00:07:24.552 Success 00:07:24.552 00:07:24.552 real 0m1.851s 00:07:24.552 user 0m1.784s 00:07:24.552 sys 0m0.466s 00:07:24.552 11:37:54 json_config_extra_key -- common/autotest_common.sh@1130 -- # xtrace_disable 00:07:24.552 ************************************ 00:07:24.552 END TEST json_config_extra_key 00:07:24.552 ************************************ 00:07:24.552 11:37:54 json_config_extra_key -- common/autotest_common.sh@10 -- # set +x 00:07:24.552 11:37:54 -- spdk/autotest.sh@161 -- # run_test alias_rpc /home/vagrant/spdk_repo/spdk/test/json_config/alias_rpc/alias_rpc.sh 00:07:24.552 11:37:54 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:07:24.552 11:37:54 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:07:24.552 11:37:54 -- common/autotest_common.sh@10 -- # set +x 00:07:24.552 ************************************ 00:07:24.552 START TEST alias_rpc 00:07:24.552 ************************************ 00:07:24.552 11:37:54 alias_rpc -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/json_config/alias_rpc/alias_rpc.sh 00:07:24.552 * Looking for test storage... 
00:07:24.552 * Found test storage at /home/vagrant/spdk_repo/spdk/test/json_config/alias_rpc 00:07:24.552 11:37:54 alias_rpc -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:07:24.552 11:37:54 alias_rpc -- common/autotest_common.sh@1693 -- # lcov --version 00:07:24.552 11:37:54 alias_rpc -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:07:24.811 11:37:54 alias_rpc -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:07:24.811 11:37:54 alias_rpc -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:07:24.811 11:37:54 alias_rpc -- scripts/common.sh@333 -- # local ver1 ver1_l 00:07:24.811 11:37:54 alias_rpc -- scripts/common.sh@334 -- # local ver2 ver2_l 00:07:24.811 11:37:54 alias_rpc -- scripts/common.sh@336 -- # IFS=.-: 00:07:24.811 11:37:54 alias_rpc -- scripts/common.sh@336 -- # read -ra ver1 00:07:24.811 11:37:54 alias_rpc -- scripts/common.sh@337 -- # IFS=.-: 00:07:24.811 11:37:54 alias_rpc -- scripts/common.sh@337 -- # read -ra ver2 00:07:24.811 11:37:54 alias_rpc -- scripts/common.sh@338 -- # local 'op=<' 00:07:24.811 11:37:54 alias_rpc -- scripts/common.sh@340 -- # ver1_l=2 00:07:24.811 11:37:54 alias_rpc -- scripts/common.sh@341 -- # ver2_l=1 00:07:24.811 11:37:54 alias_rpc -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:07:24.811 11:37:54 alias_rpc -- scripts/common.sh@344 -- # case "$op" in 00:07:24.811 11:37:54 alias_rpc -- scripts/common.sh@345 -- # : 1 00:07:24.811 11:37:54 alias_rpc -- scripts/common.sh@364 -- # (( v = 0 )) 00:07:24.811 11:37:54 alias_rpc -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:07:24.811 11:37:54 alias_rpc -- scripts/common.sh@365 -- # decimal 1 00:07:24.811 11:37:54 alias_rpc -- scripts/common.sh@353 -- # local d=1 00:07:24.811 11:37:54 alias_rpc -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:07:24.811 11:37:54 alias_rpc -- scripts/common.sh@355 -- # echo 1 00:07:24.811 11:37:54 alias_rpc -- scripts/common.sh@365 -- # ver1[v]=1 00:07:24.811 11:37:54 alias_rpc -- scripts/common.sh@366 -- # decimal 2 00:07:24.811 11:37:54 alias_rpc -- scripts/common.sh@353 -- # local d=2 00:07:24.811 11:37:54 alias_rpc -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:07:24.811 11:37:54 alias_rpc -- scripts/common.sh@355 -- # echo 2 00:07:24.811 11:37:54 alias_rpc -- scripts/common.sh@366 -- # ver2[v]=2 00:07:24.811 11:37:54 alias_rpc -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:07:24.811 11:37:54 alias_rpc -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:07:24.811 11:37:54 alias_rpc -- scripts/common.sh@368 -- # return 0 00:07:24.811 11:37:54 alias_rpc -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:07:24.811 11:37:54 alias_rpc -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:07:24.811 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:24.811 --rc genhtml_branch_coverage=1 00:07:24.811 --rc genhtml_function_coverage=1 00:07:24.811 --rc genhtml_legend=1 00:07:24.811 --rc geninfo_all_blocks=1 00:07:24.811 --rc geninfo_unexecuted_blocks=1 00:07:24.811 00:07:24.811 ' 00:07:24.811 11:37:54 alias_rpc -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:07:24.811 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:24.811 --rc genhtml_branch_coverage=1 00:07:24.811 --rc genhtml_function_coverage=1 00:07:24.811 --rc genhtml_legend=1 00:07:24.811 --rc geninfo_all_blocks=1 00:07:24.811 --rc geninfo_unexecuted_blocks=1 00:07:24.811 00:07:24.811 ' 00:07:24.811 11:37:54 alias_rpc -- 
common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:07:24.811 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:24.811 --rc genhtml_branch_coverage=1 00:07:24.811 --rc genhtml_function_coverage=1 00:07:24.811 --rc genhtml_legend=1 00:07:24.811 --rc geninfo_all_blocks=1 00:07:24.811 --rc geninfo_unexecuted_blocks=1 00:07:24.811 00:07:24.811 ' 00:07:24.811 11:37:54 alias_rpc -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:07:24.811 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:24.811 --rc genhtml_branch_coverage=1 00:07:24.811 --rc genhtml_function_coverage=1 00:07:24.811 --rc genhtml_legend=1 00:07:24.811 --rc geninfo_all_blocks=1 00:07:24.811 --rc geninfo_unexecuted_blocks=1 00:07:24.811 00:07:24.811 ' 00:07:24.811 11:37:54 alias_rpc -- alias_rpc/alias_rpc.sh@10 -- # trap 'killprocess $spdk_tgt_pid; exit 1' ERR 00:07:24.811 11:37:54 alias_rpc -- alias_rpc/alias_rpc.sh@13 -- # spdk_tgt_pid=71838 00:07:24.811 11:37:54 alias_rpc -- alias_rpc/alias_rpc.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:07:24.811 11:37:54 alias_rpc -- alias_rpc/alias_rpc.sh@14 -- # waitforlisten 71838 00:07:24.811 11:37:54 alias_rpc -- common/autotest_common.sh@835 -- # '[' -z 71838 ']' 00:07:24.811 11:37:54 alias_rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:24.811 11:37:54 alias_rpc -- common/autotest_common.sh@840 -- # local max_retries=100 00:07:24.811 11:37:54 alias_rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:24.811 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:24.811 11:37:54 alias_rpc -- common/autotest_common.sh@844 -- # xtrace_disable 00:07:24.811 11:37:54 alias_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:24.811 [2024-11-28 11:37:54.862913] Starting SPDK v25.01-pre git sha1 35cd3e84d / DPDK 24.11.0-rc4 initialization... 00:07:24.811 [2024-11-28 11:37:54.863462] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid71838 ] 00:07:25.070 [2024-11-28 11:37:54.991837] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.11.0-rc4 is used. There is no support for it in SPDK. Enabled only for validation. 
00:07:25.070 [2024-11-28 11:37:55.025103] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:25.070 [2024-11-28 11:37:55.065944] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:25.070 [2024-11-28 11:37:55.133670] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:07:26.006 11:37:55 alias_rpc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:07:26.006 11:37:55 alias_rpc -- common/autotest_common.sh@868 -- # return 0 00:07:26.006 11:37:55 alias_rpc -- alias_rpc/alias_rpc.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py load_config -i 00:07:26.267 11:37:56 alias_rpc -- alias_rpc/alias_rpc.sh@19 -- # killprocess 71838 00:07:26.267 11:37:56 alias_rpc -- common/autotest_common.sh@954 -- # '[' -z 71838 ']' 00:07:26.267 11:37:56 alias_rpc -- common/autotest_common.sh@958 -- # kill -0 71838 00:07:26.267 11:37:56 alias_rpc -- common/autotest_common.sh@959 -- # uname 00:07:26.267 11:37:56 alias_rpc -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:07:26.267 11:37:56 alias_rpc -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 71838 00:07:26.267 killing process with pid 71838 00:07:26.267 11:37:56 alias_rpc -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:07:26.267 11:37:56 alias_rpc -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:07:26.267 11:37:56 alias_rpc -- common/autotest_common.sh@972 -- # echo 'killing process with pid 71838' 00:07:26.267 11:37:56 alias_rpc -- common/autotest_common.sh@973 -- # kill 71838 00:07:26.267 11:37:56 alias_rpc -- common/autotest_common.sh@978 -- # wait 71838 00:07:26.525 ************************************ 00:07:26.525 END TEST alias_rpc 00:07:26.525 ************************************ 00:07:26.525 00:07:26.525 real 0m2.054s 00:07:26.525 user 0m2.362s 00:07:26.525 sys 0m0.501s 00:07:26.525 11:37:56 alias_rpc -- common/autotest_common.sh@1130 -- # xtrace_disable 00:07:26.525 11:37:56 alias_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:26.789 11:37:56 -- spdk/autotest.sh@163 -- # [[ 0 -eq 0 ]] 00:07:26.789 11:37:56 -- spdk/autotest.sh@164 -- # run_test spdkcli_tcp /home/vagrant/spdk_repo/spdk/test/spdkcli/tcp.sh 00:07:26.789 11:37:56 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:07:26.789 11:37:56 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:07:26.789 11:37:56 -- common/autotest_common.sh@10 -- # set +x 00:07:26.789 ************************************ 00:07:26.789 START TEST spdkcli_tcp 00:07:26.789 ************************************ 00:07:26.789 11:37:56 spdkcli_tcp -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/spdkcli/tcp.sh 00:07:26.789 * Looking for test storage... 
00:07:26.789 * Found test storage at /home/vagrant/spdk_repo/spdk/test/spdkcli 00:07:26.789 11:37:56 spdkcli_tcp -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:07:26.789 11:37:56 spdkcli_tcp -- common/autotest_common.sh@1693 -- # lcov --version 00:07:26.789 11:37:56 spdkcli_tcp -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:07:26.789 11:37:56 spdkcli_tcp -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:07:26.789 11:37:56 spdkcli_tcp -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:07:26.789 11:37:56 spdkcli_tcp -- scripts/common.sh@333 -- # local ver1 ver1_l 00:07:26.789 11:37:56 spdkcli_tcp -- scripts/common.sh@334 -- # local ver2 ver2_l 00:07:26.789 11:37:56 spdkcli_tcp -- scripts/common.sh@336 -- # IFS=.-: 00:07:26.789 11:37:56 spdkcli_tcp -- scripts/common.sh@336 -- # read -ra ver1 00:07:26.789 11:37:56 spdkcli_tcp -- scripts/common.sh@337 -- # IFS=.-: 00:07:26.789 11:37:56 spdkcli_tcp -- scripts/common.sh@337 -- # read -ra ver2 00:07:26.789 11:37:56 spdkcli_tcp -- scripts/common.sh@338 -- # local 'op=<' 00:07:26.789 11:37:56 spdkcli_tcp -- scripts/common.sh@340 -- # ver1_l=2 00:07:26.789 11:37:56 spdkcli_tcp -- scripts/common.sh@341 -- # ver2_l=1 00:07:26.789 11:37:56 spdkcli_tcp -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:07:26.789 11:37:56 spdkcli_tcp -- scripts/common.sh@344 -- # case "$op" in 00:07:26.789 11:37:56 spdkcli_tcp -- scripts/common.sh@345 -- # : 1 00:07:26.789 11:37:56 spdkcli_tcp -- scripts/common.sh@364 -- # (( v = 0 )) 00:07:26.789 11:37:56 spdkcli_tcp -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:07:26.789 11:37:56 spdkcli_tcp -- scripts/common.sh@365 -- # decimal 1 00:07:26.789 11:37:56 spdkcli_tcp -- scripts/common.sh@353 -- # local d=1 00:07:26.789 11:37:56 spdkcli_tcp -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:07:26.789 11:37:56 spdkcli_tcp -- scripts/common.sh@355 -- # echo 1 00:07:26.789 11:37:56 spdkcli_tcp -- scripts/common.sh@365 -- # ver1[v]=1 00:07:26.789 11:37:56 spdkcli_tcp -- scripts/common.sh@366 -- # decimal 2 00:07:26.789 11:37:56 spdkcli_tcp -- scripts/common.sh@353 -- # local d=2 00:07:26.789 11:37:56 spdkcli_tcp -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:07:26.789 11:37:56 spdkcli_tcp -- scripts/common.sh@355 -- # echo 2 00:07:26.789 11:37:56 spdkcli_tcp -- scripts/common.sh@366 -- # ver2[v]=2 00:07:26.789 11:37:56 spdkcli_tcp -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:07:26.789 11:37:56 spdkcli_tcp -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:07:26.789 11:37:56 spdkcli_tcp -- scripts/common.sh@368 -- # return 0 00:07:26.790 11:37:56 spdkcli_tcp -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:07:26.790 11:37:56 spdkcli_tcp -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:07:26.790 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:26.790 --rc genhtml_branch_coverage=1 00:07:26.790 --rc genhtml_function_coverage=1 00:07:26.790 --rc genhtml_legend=1 00:07:26.790 --rc geninfo_all_blocks=1 00:07:26.790 --rc geninfo_unexecuted_blocks=1 00:07:26.790 00:07:26.790 ' 00:07:26.790 11:37:56 spdkcli_tcp -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:07:26.790 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:26.790 --rc genhtml_branch_coverage=1 00:07:26.790 --rc genhtml_function_coverage=1 00:07:26.790 --rc genhtml_legend=1 00:07:26.790 --rc geninfo_all_blocks=1 00:07:26.790 --rc geninfo_unexecuted_blocks=1 00:07:26.790 
00:07:26.790 ' 00:07:26.790 11:37:56 spdkcli_tcp -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:07:26.790 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:26.790 --rc genhtml_branch_coverage=1 00:07:26.790 --rc genhtml_function_coverage=1 00:07:26.790 --rc genhtml_legend=1 00:07:26.790 --rc geninfo_all_blocks=1 00:07:26.790 --rc geninfo_unexecuted_blocks=1 00:07:26.790 00:07:26.790 ' 00:07:26.790 11:37:56 spdkcli_tcp -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:07:26.790 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:26.790 --rc genhtml_branch_coverage=1 00:07:26.790 --rc genhtml_function_coverage=1 00:07:26.790 --rc genhtml_legend=1 00:07:26.790 --rc geninfo_all_blocks=1 00:07:26.790 --rc geninfo_unexecuted_blocks=1 00:07:26.790 00:07:26.790 ' 00:07:26.790 11:37:56 spdkcli_tcp -- spdkcli/tcp.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/spdkcli/common.sh 00:07:26.790 11:37:56 spdkcli_tcp -- spdkcli/common.sh@6 -- # spdkcli_job=/home/vagrant/spdk_repo/spdk/test/spdkcli/spdkcli_job.py 00:07:26.790 11:37:56 spdkcli_tcp -- spdkcli/common.sh@7 -- # spdk_clear_config_py=/home/vagrant/spdk_repo/spdk/test/json_config/clear_config.py 00:07:26.790 11:37:56 spdkcli_tcp -- spdkcli/tcp.sh@18 -- # IP_ADDRESS=127.0.0.1 00:07:26.790 11:37:56 spdkcli_tcp -- spdkcli/tcp.sh@19 -- # PORT=9998 00:07:26.790 11:37:56 spdkcli_tcp -- spdkcli/tcp.sh@21 -- # trap 'err_cleanup; exit 1' SIGINT SIGTERM EXIT 00:07:26.790 11:37:56 spdkcli_tcp -- spdkcli/tcp.sh@23 -- # timing_enter run_spdk_tgt_tcp 00:07:26.790 11:37:56 spdkcli_tcp -- common/autotest_common.sh@726 -- # xtrace_disable 00:07:26.790 11:37:56 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:07:26.790 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:26.790 11:37:56 spdkcli_tcp -- spdkcli/tcp.sh@25 -- # spdk_tgt_pid=71922 00:07:26.790 11:37:56 spdkcli_tcp -- spdkcli/tcp.sh@27 -- # waitforlisten 71922 00:07:26.790 11:37:56 spdkcli_tcp -- spdkcli/tcp.sh@24 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x3 -p 0 00:07:26.790 11:37:56 spdkcli_tcp -- common/autotest_common.sh@835 -- # '[' -z 71922 ']' 00:07:26.790 11:37:56 spdkcli_tcp -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:26.790 11:37:56 spdkcli_tcp -- common/autotest_common.sh@840 -- # local max_retries=100 00:07:26.790 11:37:56 spdkcli_tcp -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:26.790 11:37:56 spdkcli_tcp -- common/autotest_common.sh@844 -- # xtrace_disable 00:07:26.790 11:37:56 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:07:27.049 [2024-11-28 11:37:56.931229] Starting SPDK v25.01-pre git sha1 35cd3e84d / DPDK 24.11.0-rc4 initialization... 00:07:27.049 [2024-11-28 11:37:56.931566] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid71922 ] 00:07:27.049 [2024-11-28 11:37:57.053741] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.11.0-rc4 is used. There is no support for it in SPDK. Enabled only for validation. 
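Right above, tcp.sh launches spdk_tgt with -m 0x3 -p 0 and then sits in waitforlisten until the target answers on /var/tmp/spdk.sock (the traced local max_retries=100 bounds that wait). A rough polling loop with the same shape — an illustrative sketch only, with an invented name wait_for_rpc_socket, not the actual waitforlisten body from autotest_common.sh — might be:

# Illustrative only: poll the UNIX-domain RPC socket until spdk_tgt responds.
wait_for_rpc_socket() {
    local sock=${1:-/var/tmp/spdk.sock} retries=${2:-100}
    while (( retries-- > 0 )); do
        # rpc_get_methods is a cheap request the target answers as soon as it is up.
        if /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s "$sock" rpc_get_methods >/dev/null 2>&1; then
            return 0
        fi
        sleep 0.1
    done
    echo "spdk_tgt never started listening on $sock" >&2
    return 1
}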
00:07:27.049 [2024-11-28 11:37:57.079557] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:07:27.049 [2024-11-28 11:37:57.130016] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:07:27.049 [2024-11-28 11:37:57.130023] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:27.308 [2024-11-28 11:37:57.200475] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:07:27.308 11:37:57 spdkcli_tcp -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:07:27.308 11:37:57 spdkcli_tcp -- common/autotest_common.sh@868 -- # return 0 00:07:27.308 11:37:57 spdkcli_tcp -- spdkcli/tcp.sh@31 -- # socat_pid=71932 00:07:27.308 11:37:57 spdkcli_tcp -- spdkcli/tcp.sh@30 -- # socat TCP-LISTEN:9998 UNIX-CONNECT:/var/tmp/spdk.sock 00:07:27.308 11:37:57 spdkcli_tcp -- spdkcli/tcp.sh@33 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -r 100 -t 2 -s 127.0.0.1 -p 9998 rpc_get_methods 00:07:27.567 [ 00:07:27.567 "bdev_malloc_delete", 00:07:27.567 "bdev_malloc_create", 00:07:27.567 "bdev_null_resize", 00:07:27.567 "bdev_null_delete", 00:07:27.567 "bdev_null_create", 00:07:27.567 "bdev_nvme_cuse_unregister", 00:07:27.567 "bdev_nvme_cuse_register", 00:07:27.567 "bdev_opal_new_user", 00:07:27.567 "bdev_opal_set_lock_state", 00:07:27.567 "bdev_opal_delete", 00:07:27.567 "bdev_opal_get_info", 00:07:27.567 "bdev_opal_create", 00:07:27.567 "bdev_nvme_opal_revert", 00:07:27.567 "bdev_nvme_opal_init", 00:07:27.567 "bdev_nvme_send_cmd", 00:07:27.567 "bdev_nvme_set_keys", 00:07:27.567 "bdev_nvme_get_path_iostat", 00:07:27.567 "bdev_nvme_get_mdns_discovery_info", 00:07:27.567 "bdev_nvme_stop_mdns_discovery", 00:07:27.567 "bdev_nvme_start_mdns_discovery", 00:07:27.567 "bdev_nvme_set_multipath_policy", 00:07:27.567 "bdev_nvme_set_preferred_path", 00:07:27.567 "bdev_nvme_get_io_paths", 00:07:27.567 "bdev_nvme_remove_error_injection", 00:07:27.567 "bdev_nvme_add_error_injection", 00:07:27.567 "bdev_nvme_get_discovery_info", 00:07:27.567 "bdev_nvme_stop_discovery", 00:07:27.567 "bdev_nvme_start_discovery", 00:07:27.567 "bdev_nvme_get_controller_health_info", 00:07:27.567 "bdev_nvme_disable_controller", 00:07:27.567 "bdev_nvme_enable_controller", 00:07:27.567 "bdev_nvme_reset_controller", 00:07:27.567 "bdev_nvme_get_transport_statistics", 00:07:27.567 "bdev_nvme_apply_firmware", 00:07:27.567 "bdev_nvme_detach_controller", 00:07:27.567 "bdev_nvme_get_controllers", 00:07:27.567 "bdev_nvme_attach_controller", 00:07:27.567 "bdev_nvme_set_hotplug", 00:07:27.567 "bdev_nvme_set_options", 00:07:27.567 "bdev_passthru_delete", 00:07:27.567 "bdev_passthru_create", 00:07:27.567 "bdev_lvol_set_parent_bdev", 00:07:27.567 "bdev_lvol_set_parent", 00:07:27.567 "bdev_lvol_check_shallow_copy", 00:07:27.567 "bdev_lvol_start_shallow_copy", 00:07:27.567 "bdev_lvol_grow_lvstore", 00:07:27.567 "bdev_lvol_get_lvols", 00:07:27.567 "bdev_lvol_get_lvstores", 00:07:27.567 "bdev_lvol_delete", 00:07:27.567 "bdev_lvol_set_read_only", 00:07:27.567 "bdev_lvol_resize", 00:07:27.567 "bdev_lvol_decouple_parent", 00:07:27.567 "bdev_lvol_inflate", 00:07:27.567 "bdev_lvol_rename", 00:07:27.567 "bdev_lvol_clone_bdev", 00:07:27.567 "bdev_lvol_clone", 00:07:27.567 "bdev_lvol_snapshot", 00:07:27.567 "bdev_lvol_create", 00:07:27.567 "bdev_lvol_delete_lvstore", 00:07:27.567 "bdev_lvol_rename_lvstore", 00:07:27.567 "bdev_lvol_create_lvstore", 00:07:27.567 "bdev_raid_set_options", 00:07:27.567 "bdev_raid_remove_base_bdev", 00:07:27.567 "bdev_raid_add_base_bdev", 00:07:27.567 "bdev_raid_delete", 
00:07:27.567 "bdev_raid_create", 00:07:27.567 "bdev_raid_get_bdevs", 00:07:27.567 "bdev_error_inject_error", 00:07:27.567 "bdev_error_delete", 00:07:27.567 "bdev_error_create", 00:07:27.567 "bdev_split_delete", 00:07:27.567 "bdev_split_create", 00:07:27.567 "bdev_delay_delete", 00:07:27.567 "bdev_delay_create", 00:07:27.567 "bdev_delay_update_latency", 00:07:27.567 "bdev_zone_block_delete", 00:07:27.567 "bdev_zone_block_create", 00:07:27.567 "blobfs_create", 00:07:27.567 "blobfs_detect", 00:07:27.567 "blobfs_set_cache_size", 00:07:27.567 "bdev_aio_delete", 00:07:27.567 "bdev_aio_rescan", 00:07:27.567 "bdev_aio_create", 00:07:27.567 "bdev_ftl_set_property", 00:07:27.567 "bdev_ftl_get_properties", 00:07:27.567 "bdev_ftl_get_stats", 00:07:27.567 "bdev_ftl_unmap", 00:07:27.567 "bdev_ftl_unload", 00:07:27.567 "bdev_ftl_delete", 00:07:27.567 "bdev_ftl_load", 00:07:27.567 "bdev_ftl_create", 00:07:27.567 "bdev_virtio_attach_controller", 00:07:27.567 "bdev_virtio_scsi_get_devices", 00:07:27.567 "bdev_virtio_detach_controller", 00:07:27.568 "bdev_virtio_blk_set_hotplug", 00:07:27.568 "bdev_iscsi_delete", 00:07:27.568 "bdev_iscsi_create", 00:07:27.568 "bdev_iscsi_set_options", 00:07:27.568 "bdev_uring_delete", 00:07:27.568 "bdev_uring_rescan", 00:07:27.568 "bdev_uring_create", 00:07:27.568 "accel_error_inject_error", 00:07:27.568 "ioat_scan_accel_module", 00:07:27.568 "dsa_scan_accel_module", 00:07:27.568 "iaa_scan_accel_module", 00:07:27.568 "keyring_file_remove_key", 00:07:27.568 "keyring_file_add_key", 00:07:27.568 "keyring_linux_set_options", 00:07:27.568 "fsdev_aio_delete", 00:07:27.568 "fsdev_aio_create", 00:07:27.568 "iscsi_get_histogram", 00:07:27.568 "iscsi_enable_histogram", 00:07:27.568 "iscsi_set_options", 00:07:27.568 "iscsi_get_auth_groups", 00:07:27.568 "iscsi_auth_group_remove_secret", 00:07:27.568 "iscsi_auth_group_add_secret", 00:07:27.568 "iscsi_delete_auth_group", 00:07:27.568 "iscsi_create_auth_group", 00:07:27.568 "iscsi_set_discovery_auth", 00:07:27.568 "iscsi_get_options", 00:07:27.568 "iscsi_target_node_request_logout", 00:07:27.568 "iscsi_target_node_set_redirect", 00:07:27.568 "iscsi_target_node_set_auth", 00:07:27.568 "iscsi_target_node_add_lun", 00:07:27.568 "iscsi_get_stats", 00:07:27.568 "iscsi_get_connections", 00:07:27.568 "iscsi_portal_group_set_auth", 00:07:27.568 "iscsi_start_portal_group", 00:07:27.568 "iscsi_delete_portal_group", 00:07:27.568 "iscsi_create_portal_group", 00:07:27.568 "iscsi_get_portal_groups", 00:07:27.568 "iscsi_delete_target_node", 00:07:27.568 "iscsi_target_node_remove_pg_ig_maps", 00:07:27.568 "iscsi_target_node_add_pg_ig_maps", 00:07:27.568 "iscsi_create_target_node", 00:07:27.568 "iscsi_get_target_nodes", 00:07:27.568 "iscsi_delete_initiator_group", 00:07:27.568 "iscsi_initiator_group_remove_initiators", 00:07:27.568 "iscsi_initiator_group_add_initiators", 00:07:27.568 "iscsi_create_initiator_group", 00:07:27.568 "iscsi_get_initiator_groups", 00:07:27.568 "nvmf_set_crdt", 00:07:27.568 "nvmf_set_config", 00:07:27.568 "nvmf_set_max_subsystems", 00:07:27.568 "nvmf_stop_mdns_prr", 00:07:27.568 "nvmf_publish_mdns_prr", 00:07:27.568 "nvmf_subsystem_get_listeners", 00:07:27.568 "nvmf_subsystem_get_qpairs", 00:07:27.568 "nvmf_subsystem_get_controllers", 00:07:27.568 "nvmf_get_stats", 00:07:27.568 "nvmf_get_transports", 00:07:27.568 "nvmf_create_transport", 00:07:27.568 "nvmf_get_targets", 00:07:27.568 "nvmf_delete_target", 00:07:27.568 "nvmf_create_target", 00:07:27.568 "nvmf_subsystem_allow_any_host", 00:07:27.568 "nvmf_subsystem_set_keys", 
00:07:27.568 "nvmf_subsystem_remove_host", 00:07:27.568 "nvmf_subsystem_add_host", 00:07:27.568 "nvmf_ns_remove_host", 00:07:27.568 "nvmf_ns_add_host", 00:07:27.568 "nvmf_subsystem_remove_ns", 00:07:27.568 "nvmf_subsystem_set_ns_ana_group", 00:07:27.568 "nvmf_subsystem_add_ns", 00:07:27.568 "nvmf_subsystem_listener_set_ana_state", 00:07:27.568 "nvmf_discovery_get_referrals", 00:07:27.568 "nvmf_discovery_remove_referral", 00:07:27.568 "nvmf_discovery_add_referral", 00:07:27.568 "nvmf_subsystem_remove_listener", 00:07:27.568 "nvmf_subsystem_add_listener", 00:07:27.568 "nvmf_delete_subsystem", 00:07:27.568 "nvmf_create_subsystem", 00:07:27.568 "nvmf_get_subsystems", 00:07:27.568 "env_dpdk_get_mem_stats", 00:07:27.568 "nbd_get_disks", 00:07:27.568 "nbd_stop_disk", 00:07:27.568 "nbd_start_disk", 00:07:27.568 "ublk_recover_disk", 00:07:27.568 "ublk_get_disks", 00:07:27.568 "ublk_stop_disk", 00:07:27.568 "ublk_start_disk", 00:07:27.568 "ublk_destroy_target", 00:07:27.568 "ublk_create_target", 00:07:27.568 "virtio_blk_create_transport", 00:07:27.568 "virtio_blk_get_transports", 00:07:27.568 "vhost_controller_set_coalescing", 00:07:27.568 "vhost_get_controllers", 00:07:27.568 "vhost_delete_controller", 00:07:27.568 "vhost_create_blk_controller", 00:07:27.568 "vhost_scsi_controller_remove_target", 00:07:27.568 "vhost_scsi_controller_add_target", 00:07:27.568 "vhost_start_scsi_controller", 00:07:27.568 "vhost_create_scsi_controller", 00:07:27.568 "thread_set_cpumask", 00:07:27.568 "scheduler_set_options", 00:07:27.568 "framework_get_governor", 00:07:27.568 "framework_get_scheduler", 00:07:27.568 "framework_set_scheduler", 00:07:27.568 "framework_get_reactors", 00:07:27.568 "thread_get_io_channels", 00:07:27.568 "thread_get_pollers", 00:07:27.568 "thread_get_stats", 00:07:27.568 "framework_monitor_context_switch", 00:07:27.568 "spdk_kill_instance", 00:07:27.568 "log_enable_timestamps", 00:07:27.568 "log_get_flags", 00:07:27.568 "log_clear_flag", 00:07:27.568 "log_set_flag", 00:07:27.568 "log_get_level", 00:07:27.568 "log_set_level", 00:07:27.568 "log_get_print_level", 00:07:27.568 "log_set_print_level", 00:07:27.568 "framework_enable_cpumask_locks", 00:07:27.568 "framework_disable_cpumask_locks", 00:07:27.568 "framework_wait_init", 00:07:27.568 "framework_start_init", 00:07:27.568 "scsi_get_devices", 00:07:27.568 "bdev_get_histogram", 00:07:27.568 "bdev_enable_histogram", 00:07:27.568 "bdev_set_qos_limit", 00:07:27.568 "bdev_set_qd_sampling_period", 00:07:27.568 "bdev_get_bdevs", 00:07:27.568 "bdev_reset_iostat", 00:07:27.568 "bdev_get_iostat", 00:07:27.568 "bdev_examine", 00:07:27.568 "bdev_wait_for_examine", 00:07:27.568 "bdev_set_options", 00:07:27.568 "accel_get_stats", 00:07:27.568 "accel_set_options", 00:07:27.568 "accel_set_driver", 00:07:27.568 "accel_crypto_key_destroy", 00:07:27.568 "accel_crypto_keys_get", 00:07:27.568 "accel_crypto_key_create", 00:07:27.568 "accel_assign_opc", 00:07:27.568 "accel_get_module_info", 00:07:27.568 "accel_get_opc_assignments", 00:07:27.568 "vmd_rescan", 00:07:27.568 "vmd_remove_device", 00:07:27.568 "vmd_enable", 00:07:27.568 "sock_get_default_impl", 00:07:27.568 "sock_set_default_impl", 00:07:27.568 "sock_impl_set_options", 00:07:27.568 "sock_impl_get_options", 00:07:27.568 "iobuf_get_stats", 00:07:27.568 "iobuf_set_options", 00:07:27.568 "keyring_get_keys", 00:07:27.568 "framework_get_pci_devices", 00:07:27.568 "framework_get_config", 00:07:27.568 "framework_get_subsystems", 00:07:27.568 "fsdev_set_opts", 00:07:27.568 "fsdev_get_opts", 00:07:27.568 
"trace_get_info", 00:07:27.568 "trace_get_tpoint_group_mask", 00:07:27.568 "trace_disable_tpoint_group", 00:07:27.568 "trace_enable_tpoint_group", 00:07:27.568 "trace_clear_tpoint_mask", 00:07:27.568 "trace_set_tpoint_mask", 00:07:27.568 "notify_get_notifications", 00:07:27.568 "notify_get_types", 00:07:27.568 "spdk_get_version", 00:07:27.568 "rpc_get_methods" 00:07:27.568 ] 00:07:27.568 11:37:57 spdkcli_tcp -- spdkcli/tcp.sh@35 -- # timing_exit run_spdk_tgt_tcp 00:07:27.568 11:37:57 spdkcli_tcp -- common/autotest_common.sh@732 -- # xtrace_disable 00:07:27.568 11:37:57 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:07:27.827 11:37:57 spdkcli_tcp -- spdkcli/tcp.sh@37 -- # trap - SIGINT SIGTERM EXIT 00:07:27.827 11:37:57 spdkcli_tcp -- spdkcli/tcp.sh@38 -- # killprocess 71922 00:07:27.827 11:37:57 spdkcli_tcp -- common/autotest_common.sh@954 -- # '[' -z 71922 ']' 00:07:27.827 11:37:57 spdkcli_tcp -- common/autotest_common.sh@958 -- # kill -0 71922 00:07:27.827 11:37:57 spdkcli_tcp -- common/autotest_common.sh@959 -- # uname 00:07:27.827 11:37:57 spdkcli_tcp -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:07:27.827 11:37:57 spdkcli_tcp -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 71922 00:07:27.827 killing process with pid 71922 00:07:27.827 11:37:57 spdkcli_tcp -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:07:27.827 11:37:57 spdkcli_tcp -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:07:27.827 11:37:57 spdkcli_tcp -- common/autotest_common.sh@972 -- # echo 'killing process with pid 71922' 00:07:27.827 11:37:57 spdkcli_tcp -- common/autotest_common.sh@973 -- # kill 71922 00:07:27.827 11:37:57 spdkcli_tcp -- common/autotest_common.sh@978 -- # wait 71922 00:07:28.085 ************************************ 00:07:28.085 END TEST spdkcli_tcp 00:07:28.085 ************************************ 00:07:28.085 00:07:28.085 real 0m1.457s 00:07:28.085 user 0m2.462s 00:07:28.085 sys 0m0.482s 00:07:28.085 11:37:58 spdkcli_tcp -- common/autotest_common.sh@1130 -- # xtrace_disable 00:07:28.085 11:37:58 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:07:28.085 11:37:58 -- spdk/autotest.sh@167 -- # run_test dpdk_mem_utility /home/vagrant/spdk_repo/spdk/test/dpdk_memory_utility/test_dpdk_mem_info.sh 00:07:28.085 11:37:58 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:07:28.085 11:37:58 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:07:28.085 11:37:58 -- common/autotest_common.sh@10 -- # set +x 00:07:28.085 ************************************ 00:07:28.085 START TEST dpdk_mem_utility 00:07:28.085 ************************************ 00:07:28.086 11:37:58 dpdk_mem_utility -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/dpdk_memory_utility/test_dpdk_mem_info.sh 00:07:28.344 * Looking for test storage... 
00:07:28.344 * Found test storage at /home/vagrant/spdk_repo/spdk/test/dpdk_memory_utility 00:07:28.344 11:37:58 dpdk_mem_utility -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:07:28.344 11:37:58 dpdk_mem_utility -- common/autotest_common.sh@1693 -- # lcov --version 00:07:28.344 11:37:58 dpdk_mem_utility -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:07:28.344 11:37:58 dpdk_mem_utility -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:07:28.344 11:37:58 dpdk_mem_utility -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:07:28.344 11:37:58 dpdk_mem_utility -- scripts/common.sh@333 -- # local ver1 ver1_l 00:07:28.344 11:37:58 dpdk_mem_utility -- scripts/common.sh@334 -- # local ver2 ver2_l 00:07:28.344 11:37:58 dpdk_mem_utility -- scripts/common.sh@336 -- # IFS=.-: 00:07:28.344 11:37:58 dpdk_mem_utility -- scripts/common.sh@336 -- # read -ra ver1 00:07:28.344 11:37:58 dpdk_mem_utility -- scripts/common.sh@337 -- # IFS=.-: 00:07:28.344 11:37:58 dpdk_mem_utility -- scripts/common.sh@337 -- # read -ra ver2 00:07:28.344 11:37:58 dpdk_mem_utility -- scripts/common.sh@338 -- # local 'op=<' 00:07:28.344 11:37:58 dpdk_mem_utility -- scripts/common.sh@340 -- # ver1_l=2 00:07:28.344 11:37:58 dpdk_mem_utility -- scripts/common.sh@341 -- # ver2_l=1 00:07:28.344 11:37:58 dpdk_mem_utility -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:07:28.344 11:37:58 dpdk_mem_utility -- scripts/common.sh@344 -- # case "$op" in 00:07:28.344 11:37:58 dpdk_mem_utility -- scripts/common.sh@345 -- # : 1 00:07:28.344 11:37:58 dpdk_mem_utility -- scripts/common.sh@364 -- # (( v = 0 )) 00:07:28.344 11:37:58 dpdk_mem_utility -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:07:28.344 11:37:58 dpdk_mem_utility -- scripts/common.sh@365 -- # decimal 1 00:07:28.344 11:37:58 dpdk_mem_utility -- scripts/common.sh@353 -- # local d=1 00:07:28.344 11:37:58 dpdk_mem_utility -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:07:28.344 11:37:58 dpdk_mem_utility -- scripts/common.sh@355 -- # echo 1 00:07:28.344 11:37:58 dpdk_mem_utility -- scripts/common.sh@365 -- # ver1[v]=1 00:07:28.345 11:37:58 dpdk_mem_utility -- scripts/common.sh@366 -- # decimal 2 00:07:28.345 11:37:58 dpdk_mem_utility -- scripts/common.sh@353 -- # local d=2 00:07:28.345 11:37:58 dpdk_mem_utility -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:07:28.345 11:37:58 dpdk_mem_utility -- scripts/common.sh@355 -- # echo 2 00:07:28.345 11:37:58 dpdk_mem_utility -- scripts/common.sh@366 -- # ver2[v]=2 00:07:28.345 11:37:58 dpdk_mem_utility -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:07:28.345 11:37:58 dpdk_mem_utility -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:07:28.345 11:37:58 dpdk_mem_utility -- scripts/common.sh@368 -- # return 0 00:07:28.345 11:37:58 dpdk_mem_utility -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:07:28.345 11:37:58 dpdk_mem_utility -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:07:28.345 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:28.345 --rc genhtml_branch_coverage=1 00:07:28.345 --rc genhtml_function_coverage=1 00:07:28.345 --rc genhtml_legend=1 00:07:28.345 --rc geninfo_all_blocks=1 00:07:28.345 --rc geninfo_unexecuted_blocks=1 00:07:28.345 00:07:28.345 ' 00:07:28.345 11:37:58 dpdk_mem_utility -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:07:28.345 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:28.345 --rc 
genhtml_branch_coverage=1 00:07:28.345 --rc genhtml_function_coverage=1 00:07:28.345 --rc genhtml_legend=1 00:07:28.345 --rc geninfo_all_blocks=1 00:07:28.345 --rc geninfo_unexecuted_blocks=1 00:07:28.345 00:07:28.345 ' 00:07:28.345 11:37:58 dpdk_mem_utility -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:07:28.345 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:28.345 --rc genhtml_branch_coverage=1 00:07:28.345 --rc genhtml_function_coverage=1 00:07:28.345 --rc genhtml_legend=1 00:07:28.345 --rc geninfo_all_blocks=1 00:07:28.345 --rc geninfo_unexecuted_blocks=1 00:07:28.345 00:07:28.345 ' 00:07:28.345 11:37:58 dpdk_mem_utility -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:07:28.345 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:28.345 --rc genhtml_branch_coverage=1 00:07:28.345 --rc genhtml_function_coverage=1 00:07:28.345 --rc genhtml_legend=1 00:07:28.345 --rc geninfo_all_blocks=1 00:07:28.345 --rc geninfo_unexecuted_blocks=1 00:07:28.345 00:07:28.345 ' 00:07:28.345 11:37:58 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@10 -- # MEM_SCRIPT=/home/vagrant/spdk_repo/spdk/scripts/dpdk_mem_info.py 00:07:28.345 11:37:58 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@13 -- # spdkpid=72014 00:07:28.345 11:37:58 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:07:28.345 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:28.345 11:37:58 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@15 -- # waitforlisten 72014 00:07:28.345 11:37:58 dpdk_mem_utility -- common/autotest_common.sh@835 -- # '[' -z 72014 ']' 00:07:28.345 11:37:58 dpdk_mem_utility -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:28.345 11:37:58 dpdk_mem_utility -- common/autotest_common.sh@840 -- # local max_retries=100 00:07:28.345 11:37:58 dpdk_mem_utility -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:28.345 11:37:58 dpdk_mem_utility -- common/autotest_common.sh@844 -- # xtrace_disable 00:07:28.345 11:37:58 dpdk_mem_utility -- common/autotest_common.sh@10 -- # set +x 00:07:28.345 [2024-11-28 11:37:58.402243] Starting SPDK v25.01-pre git sha1 35cd3e84d / DPDK 24.11.0-rc4 initialization... 00:07:28.345 [2024-11-28 11:37:58.402565] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid72014 ] 00:07:28.603 [2024-11-28 11:37:58.523878] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.11.0-rc4 is used. There is no support for it in SPDK. Enabled only for validation. 
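The dpdk_mem_utility setup above reduces to a three-step flow whose results fill the rest of this section: start spdk_tgt, ask it over RPC to dump its DPDK allocator state with env_dpdk_get_mem_stats (the JSON reply names /tmp/spdk_mem_dump.txt), and post-process that dump with scripts/dpdk_mem_info.py, once for the summary and once with -m 0 for the detailed per-heap listing. A condensed, illustrative rendering of that sequence — paths and flags taken from this log, the SPDK variable is just local shorthand, error handling and the wait-for-socket step omitted — is:

# Illustrative condensation of the traced flow; not test_dpdk_mem_info.sh itself.
SPDK=/home/vagrant/spdk_repo/spdk
"$SPDK/build/bin/spdk_tgt" &                       # step 1: start the target
spdkpid=$!
# ... wait until /var/tmp/spdk.sock answers (see the polling sketch earlier) ...
"$SPDK/scripts/rpc.py" env_dpdk_get_mem_stats      # step 2: dump goes to /tmp/spdk_mem_dump.txt
"$SPDK/scripts/dpdk_mem_info.py"                   # step 3a: heap/mempool/memzone summary
"$SPDK/scripts/dpdk_mem_info.py" -m 0              # step 3b: the detailed per-heap listing seen below
kill "$spdkpid"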
00:07:28.603 [2024-11-28 11:37:58.554027] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:28.603 [2024-11-28 11:37:58.604947] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:28.603 [2024-11-28 11:37:58.673352] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:07:28.862 11:37:58 dpdk_mem_utility -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:07:28.862 11:37:58 dpdk_mem_utility -- common/autotest_common.sh@868 -- # return 0 00:07:28.862 11:37:58 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@17 -- # trap 'killprocess $spdkpid' SIGINT SIGTERM EXIT 00:07:28.862 11:37:58 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@19 -- # rpc_cmd env_dpdk_get_mem_stats 00:07:28.862 11:37:58 dpdk_mem_utility -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:28.862 11:37:58 dpdk_mem_utility -- common/autotest_common.sh@10 -- # set +x 00:07:28.862 { 00:07:28.862 "filename": "/tmp/spdk_mem_dump.txt" 00:07:28.862 } 00:07:28.862 11:37:58 dpdk_mem_utility -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:28.862 11:37:58 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@21 -- # /home/vagrant/spdk_repo/spdk/scripts/dpdk_mem_info.py 00:07:28.862 DPDK memory size 818.000000 MiB in 1 heap(s) 00:07:28.862 1 heaps totaling size 818.000000 MiB 00:07:28.862 size: 818.000000 MiB heap id: 0 00:07:28.862 end heaps---------- 00:07:28.862 9 mempools totaling size 603.782043 MiB 00:07:28.862 size: 212.674988 MiB name: PDU_immediate_data_Pool 00:07:28.862 size: 158.602051 MiB name: PDU_data_out_Pool 00:07:28.862 size: 100.555481 MiB name: bdev_io_72014 00:07:28.862 size: 50.003479 MiB name: msgpool_72014 00:07:28.862 size: 36.509338 MiB name: fsdev_io_72014 00:07:28.862 size: 21.763794 MiB name: PDU_Pool 00:07:28.862 size: 19.513306 MiB name: SCSI_TASK_Pool 00:07:28.862 size: 4.133484 MiB name: evtpool_72014 00:07:28.862 size: 0.026123 MiB name: Session_Pool 00:07:28.862 end mempools------- 00:07:28.862 6 memzones totaling size 4.142822 MiB 00:07:28.862 size: 1.000366 MiB name: RG_ring_0_72014 00:07:28.862 size: 1.000366 MiB name: RG_ring_1_72014 00:07:28.863 size: 1.000366 MiB name: RG_ring_4_72014 00:07:28.863 size: 1.000366 MiB name: RG_ring_5_72014 00:07:28.863 size: 0.125366 MiB name: RG_ring_2_72014 00:07:28.863 size: 0.015991 MiB name: RG_ring_3_72014 00:07:28.863 end memzones------- 00:07:28.863 11:37:58 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@23 -- # /home/vagrant/spdk_repo/spdk/scripts/dpdk_mem_info.py -m 0 00:07:29.123 heap id: 0 total size: 818.000000 MiB number of busy elements: 332 number of free elements: 15 00:07:29.123 list of free elements. 
size: 10.940308 MiB 00:07:29.123 element at address: 0x200019200000 with size: 0.999878 MiB 00:07:29.123 element at address: 0x200019400000 with size: 0.999878 MiB 00:07:29.123 element at address: 0x200032000000 with size: 0.994446 MiB 00:07:29.123 element at address: 0x200000400000 with size: 0.993958 MiB 00:07:29.123 element at address: 0x200006400000 with size: 0.959839 MiB 00:07:29.124 element at address: 0x200012c00000 with size: 0.944275 MiB 00:07:29.124 element at address: 0x200019600000 with size: 0.936584 MiB 00:07:29.124 element at address: 0x200000200000 with size: 0.858093 MiB 00:07:29.124 element at address: 0x20001ae00000 with size: 0.564758 MiB 00:07:29.124 element at address: 0x20000a600000 with size: 0.488892 MiB 00:07:29.124 element at address: 0x200000c00000 with size: 0.486267 MiB 00:07:29.124 element at address: 0x200019800000 with size: 0.485657 MiB 00:07:29.124 element at address: 0x200003e00000 with size: 0.480286 MiB 00:07:29.124 element at address: 0x200028200000 with size: 0.395752 MiB 00:07:29.124 element at address: 0x200000800000 with size: 0.351746 MiB 00:07:29.124 list of standard malloc elements. size: 199.130798 MiB 00:07:29.124 element at address: 0x20000a7fff80 with size: 132.000122 MiB 00:07:29.124 element at address: 0x2000065fff80 with size: 64.000122 MiB 00:07:29.124 element at address: 0x2000192fff80 with size: 1.000122 MiB 00:07:29.124 element at address: 0x2000194fff80 with size: 1.000122 MiB 00:07:29.124 element at address: 0x2000196fff80 with size: 1.000122 MiB 00:07:29.124 element at address: 0x2000196eff00 with size: 0.062622 MiB 00:07:29.124 element at address: 0x2000003fdf80 with size: 0.007935 MiB 00:07:29.124 element at address: 0x2000196efdc0 with size: 0.000305 MiB 00:07:29.124 element at address: 0x2000002fbcc0 with size: 0.000183 MiB 00:07:29.124 element at address: 0x2000003fdec0 with size: 0.000183 MiB 00:07:29.124 element at address: 0x2000004fe740 with size: 0.000183 MiB 00:07:29.124 element at address: 0x2000004fe800 with size: 0.000183 MiB 00:07:29.124 element at address: 0x2000004fe8c0 with size: 0.000183 MiB 00:07:29.124 element at address: 0x2000004fe980 with size: 0.000183 MiB 00:07:29.124 element at address: 0x2000004fea40 with size: 0.000183 MiB 00:07:29.124 element at address: 0x2000004feb00 with size: 0.000183 MiB 00:07:29.124 element at address: 0x2000004febc0 with size: 0.000183 MiB 00:07:29.124 element at address: 0x2000004fec80 with size: 0.000183 MiB 00:07:29.124 element at address: 0x2000004fed40 with size: 0.000183 MiB 00:07:29.124 element at address: 0x2000004fee00 with size: 0.000183 MiB 00:07:29.124 element at address: 0x2000004feec0 with size: 0.000183 MiB 00:07:29.124 element at address: 0x2000004fef80 with size: 0.000183 MiB 00:07:29.124 element at address: 0x2000004ff040 with size: 0.000183 MiB 00:07:29.124 element at address: 0x2000004ff100 with size: 0.000183 MiB 00:07:29.124 element at address: 0x2000004ff1c0 with size: 0.000183 MiB 00:07:29.124 element at address: 0x2000004ff280 with size: 0.000183 MiB 00:07:29.124 element at address: 0x2000004ff340 with size: 0.000183 MiB 00:07:29.124 element at address: 0x2000004ff400 with size: 0.000183 MiB 00:07:29.124 element at address: 0x2000004ff4c0 with size: 0.000183 MiB 00:07:29.124 element at address: 0x2000004ff580 with size: 0.000183 MiB 00:07:29.124 element at address: 0x2000004ff640 with size: 0.000183 MiB 00:07:29.124 element at address: 0x2000004ff700 with size: 0.000183 MiB 00:07:29.124 element at address: 0x2000004ff7c0 with size: 0.000183 MiB 
00:07:29.124 element at address: 0x2000004ff880 with size: 0.000183 MiB 00:07:29.124 element at address: 0x2000004ff940 with size: 0.000183 MiB 00:07:29.124 element at address: 0x2000004ffa00 with size: 0.000183 MiB 00:07:29.124 element at address: 0x2000004ffac0 with size: 0.000183 MiB 00:07:29.124 element at address: 0x2000004ffcc0 with size: 0.000183 MiB 00:07:29.124 element at address: 0x2000004ffd80 with size: 0.000183 MiB 00:07:29.124 element at address: 0x2000004ffe40 with size: 0.000183 MiB 00:07:29.124 element at address: 0x20000085a0c0 with size: 0.000183 MiB 00:07:29.124 element at address: 0x20000085a2c0 with size: 0.000183 MiB 00:07:29.124 element at address: 0x20000085e580 with size: 0.000183 MiB 00:07:29.124 element at address: 0x20000087e840 with size: 0.000183 MiB 00:07:29.124 element at address: 0x20000087e900 with size: 0.000183 MiB 00:07:29.124 element at address: 0x20000087e9c0 with size: 0.000183 MiB 00:07:29.124 element at address: 0x20000087ea80 with size: 0.000183 MiB 00:07:29.124 element at address: 0x20000087eb40 with size: 0.000183 MiB 00:07:29.124 element at address: 0x20000087ec00 with size: 0.000183 MiB 00:07:29.124 element at address: 0x20000087ecc0 with size: 0.000183 MiB 00:07:29.124 element at address: 0x20000087ed80 with size: 0.000183 MiB 00:07:29.124 element at address: 0x20000087ee40 with size: 0.000183 MiB 00:07:29.124 element at address: 0x20000087ef00 with size: 0.000183 MiB 00:07:29.124 element at address: 0x20000087efc0 with size: 0.000183 MiB 00:07:29.124 element at address: 0x20000087f080 with size: 0.000183 MiB 00:07:29.124 element at address: 0x20000087f140 with size: 0.000183 MiB 00:07:29.124 element at address: 0x20000087f200 with size: 0.000183 MiB 00:07:29.124 element at address: 0x20000087f2c0 with size: 0.000183 MiB 00:07:29.124 element at address: 0x20000087f380 with size: 0.000183 MiB 00:07:29.124 element at address: 0x20000087f440 with size: 0.000183 MiB 00:07:29.124 element at address: 0x20000087f500 with size: 0.000183 MiB 00:07:29.124 element at address: 0x20000087f5c0 with size: 0.000183 MiB 00:07:29.124 element at address: 0x20000087f680 with size: 0.000183 MiB 00:07:29.124 element at address: 0x2000008ff940 with size: 0.000183 MiB 00:07:29.124 element at address: 0x2000008ffb40 with size: 0.000183 MiB 00:07:29.124 element at address: 0x200000c7c7c0 with size: 0.000183 MiB 00:07:29.124 element at address: 0x200000c7c880 with size: 0.000183 MiB 00:07:29.124 element at address: 0x200000c7c940 with size: 0.000183 MiB 00:07:29.124 element at address: 0x200000c7ca00 with size: 0.000183 MiB 00:07:29.124 element at address: 0x200000c7cac0 with size: 0.000183 MiB 00:07:29.124 element at address: 0x200000c7cb80 with size: 0.000183 MiB 00:07:29.124 element at address: 0x200000c7cc40 with size: 0.000183 MiB 00:07:29.124 element at address: 0x200000c7cd00 with size: 0.000183 MiB 00:07:29.124 element at address: 0x200000c7cdc0 with size: 0.000183 MiB 00:07:29.124 element at address: 0x200000c7ce80 with size: 0.000183 MiB 00:07:29.124 element at address: 0x200000c7cf40 with size: 0.000183 MiB 00:07:29.124 element at address: 0x200000c7d000 with size: 0.000183 MiB 00:07:29.124 element at address: 0x200000c7d0c0 with size: 0.000183 MiB 00:07:29.124 element at address: 0x200000c7d180 with size: 0.000183 MiB 00:07:29.124 element at address: 0x200000c7d240 with size: 0.000183 MiB 00:07:29.124 element at address: 0x200000c7d300 with size: 0.000183 MiB 00:07:29.124 element at address: 0x200000c7d3c0 with size: 0.000183 MiB 00:07:29.124 element at 
address: 0x200000c7d480 with size: 0.000183 MiB 00:07:29.124 element at address: 0x200000c7d540 with size: 0.000183 MiB 00:07:29.124 element at address: 0x200000c7d600 with size: 0.000183 MiB 00:07:29.124 element at address: 0x200000c7d6c0 with size: 0.000183 MiB 00:07:29.124 element at address: 0x200000c7d780 with size: 0.000183 MiB 00:07:29.124 element at address: 0x200000c7d840 with size: 0.000183 MiB 00:07:29.124 element at address: 0x200000c7d900 with size: 0.000183 MiB 00:07:29.124 element at address: 0x200000c7d9c0 with size: 0.000183 MiB 00:07:29.124 element at address: 0x200000c7da80 with size: 0.000183 MiB 00:07:29.124 element at address: 0x200000c7db40 with size: 0.000183 MiB 00:07:29.124 element at address: 0x200000c7dc00 with size: 0.000183 MiB 00:07:29.124 element at address: 0x200000c7dcc0 with size: 0.000183 MiB 00:07:29.124 element at address: 0x200000c7dd80 with size: 0.000183 MiB 00:07:29.124 element at address: 0x200000c7de40 with size: 0.000183 MiB 00:07:29.124 element at address: 0x200000c7df00 with size: 0.000183 MiB 00:07:29.124 element at address: 0x200000c7dfc0 with size: 0.000183 MiB 00:07:29.124 element at address: 0x200000c7e080 with size: 0.000183 MiB 00:07:29.124 element at address: 0x200000c7e140 with size: 0.000183 MiB 00:07:29.124 element at address: 0x200000c7e200 with size: 0.000183 MiB 00:07:29.124 element at address: 0x200000c7e2c0 with size: 0.000183 MiB 00:07:29.124 element at address: 0x200000c7e380 with size: 0.000183 MiB 00:07:29.125 element at address: 0x200000c7e440 with size: 0.000183 MiB 00:07:29.125 element at address: 0x200000c7e500 with size: 0.000183 MiB 00:07:29.125 element at address: 0x200000c7e5c0 with size: 0.000183 MiB 00:07:29.125 element at address: 0x200000c7e680 with size: 0.000183 MiB 00:07:29.125 element at address: 0x200000c7e740 with size: 0.000183 MiB 00:07:29.125 element at address: 0x200000c7e800 with size: 0.000183 MiB 00:07:29.125 element at address: 0x200000c7e8c0 with size: 0.000183 MiB 00:07:29.125 element at address: 0x200000c7e980 with size: 0.000183 MiB 00:07:29.125 element at address: 0x200000c7ea40 with size: 0.000183 MiB 00:07:29.125 element at address: 0x200000c7eb00 with size: 0.000183 MiB 00:07:29.125 element at address: 0x200000c7ebc0 with size: 0.000183 MiB 00:07:29.125 element at address: 0x200000c7ec80 with size: 0.000183 MiB 00:07:29.125 element at address: 0x200000c7ed40 with size: 0.000183 MiB 00:07:29.125 element at address: 0x200000cff000 with size: 0.000183 MiB 00:07:29.125 element at address: 0x200000cff0c0 with size: 0.000183 MiB 00:07:29.125 element at address: 0x200003e7af40 with size: 0.000183 MiB 00:07:29.125 element at address: 0x200003e7b000 with size: 0.000183 MiB 00:07:29.125 element at address: 0x200003e7b0c0 with size: 0.000183 MiB 00:07:29.125 element at address: 0x200003e7b180 with size: 0.000183 MiB 00:07:29.125 element at address: 0x200003e7b240 with size: 0.000183 MiB 00:07:29.125 element at address: 0x200003e7b300 with size: 0.000183 MiB 00:07:29.125 element at address: 0x200003e7b3c0 with size: 0.000183 MiB 00:07:29.125 element at address: 0x200003e7b480 with size: 0.000183 MiB 00:07:29.125 element at address: 0x200003e7b540 with size: 0.000183 MiB 00:07:29.125 element at address: 0x200003e7b600 with size: 0.000183 MiB 00:07:29.125 element at address: 0x200003e7b6c0 with size: 0.000183 MiB 00:07:29.125 element at address: 0x200003efb980 with size: 0.000183 MiB 00:07:29.125 element at address: 0x2000064fdd80 with size: 0.000183 MiB 00:07:29.125 element at address: 0x20000a67d280 
with size: 0.000183 MiB 00:07:29.125 element at address: 0x20000a67d340 with size: 0.000183 MiB 00:07:29.125 element at address: 0x20000a67d400 with size: 0.000183 MiB 00:07:29.125 element at address: 0x20000a67d4c0 with size: 0.000183 MiB 00:07:29.125 element at address: 0x20000a67d580 with size: 0.000183 MiB 00:07:29.125 element at address: 0x20000a67d640 with size: 0.000183 MiB 00:07:29.125 element at address: 0x20000a67d700 with size: 0.000183 MiB 00:07:29.125 element at address: 0x20000a67d7c0 with size: 0.000183 MiB 00:07:29.125 element at address: 0x20000a67d880 with size: 0.000183 MiB 00:07:29.125 element at address: 0x20000a67d940 with size: 0.000183 MiB 00:07:29.125 element at address: 0x20000a67da00 with size: 0.000183 MiB 00:07:29.125 element at address: 0x20000a67dac0 with size: 0.000183 MiB 00:07:29.125 element at address: 0x20000a6fdd80 with size: 0.000183 MiB 00:07:29.125 element at address: 0x200012cf1bc0 with size: 0.000183 MiB 00:07:29.125 element at address: 0x2000196efc40 with size: 0.000183 MiB 00:07:29.125 element at address: 0x2000196efd00 with size: 0.000183 MiB 00:07:29.125 element at address: 0x2000198bc740 with size: 0.000183 MiB 00:07:29.125 element at address: 0x20001ae90940 with size: 0.000183 MiB 00:07:29.125 element at address: 0x20001ae90a00 with size: 0.000183 MiB 00:07:29.125 element at address: 0x20001ae90ac0 with size: 0.000183 MiB 00:07:29.125 element at address: 0x20001ae90b80 with size: 0.000183 MiB 00:07:29.125 element at address: 0x20001ae90c40 with size: 0.000183 MiB 00:07:29.125 element at address: 0x20001ae90d00 with size: 0.000183 MiB 00:07:29.125 element at address: 0x20001ae90dc0 with size: 0.000183 MiB 00:07:29.125 element at address: 0x20001ae90e80 with size: 0.000183 MiB 00:07:29.125 element at address: 0x20001ae90f40 with size: 0.000183 MiB 00:07:29.125 element at address: 0x20001ae91000 with size: 0.000183 MiB 00:07:29.125 element at address: 0x20001ae910c0 with size: 0.000183 MiB 00:07:29.125 element at address: 0x20001ae91180 with size: 0.000183 MiB 00:07:29.125 element at address: 0x20001ae91240 with size: 0.000183 MiB 00:07:29.125 element at address: 0x20001ae91300 with size: 0.000183 MiB 00:07:29.125 element at address: 0x20001ae913c0 with size: 0.000183 MiB 00:07:29.125 element at address: 0x20001ae91480 with size: 0.000183 MiB 00:07:29.125 element at address: 0x20001ae91540 with size: 0.000183 MiB 00:07:29.125 element at address: 0x20001ae91600 with size: 0.000183 MiB 00:07:29.125 element at address: 0x20001ae916c0 with size: 0.000183 MiB 00:07:29.125 element at address: 0x20001ae91780 with size: 0.000183 MiB 00:07:29.125 element at address: 0x20001ae91840 with size: 0.000183 MiB 00:07:29.125 element at address: 0x20001ae91900 with size: 0.000183 MiB 00:07:29.125 element at address: 0x20001ae919c0 with size: 0.000183 MiB 00:07:29.125 element at address: 0x20001ae91a80 with size: 0.000183 MiB 00:07:29.125 element at address: 0x20001ae91b40 with size: 0.000183 MiB 00:07:29.125 element at address: 0x20001ae91c00 with size: 0.000183 MiB 00:07:29.125 element at address: 0x20001ae91cc0 with size: 0.000183 MiB 00:07:29.125 element at address: 0x20001ae91d80 with size: 0.000183 MiB 00:07:29.125 element at address: 0x20001ae91e40 with size: 0.000183 MiB 00:07:29.125 element at address: 0x20001ae91f00 with size: 0.000183 MiB 00:07:29.125 element at address: 0x20001ae91fc0 with size: 0.000183 MiB 00:07:29.125 element at address: 0x20001ae92080 with size: 0.000183 MiB 00:07:29.125 element at address: 0x20001ae92140 with size: 0.000183 MiB 
00:07:29.125 element at address: 0x20001ae92200 with size: 0.000183 MiB 00:07:29.125 element at address: 0x20001ae922c0 with size: 0.000183 MiB 00:07:29.125 element at address: 0x20001ae92380 with size: 0.000183 MiB 00:07:29.125 element at address: 0x20001ae92440 with size: 0.000183 MiB 00:07:29.125 element at address: 0x20001ae92500 with size: 0.000183 MiB 00:07:29.125 element at address: 0x20001ae925c0 with size: 0.000183 MiB 00:07:29.125 element at address: 0x20001ae92680 with size: 0.000183 MiB 00:07:29.125 element at address: 0x20001ae92740 with size: 0.000183 MiB 00:07:29.125 element at address: 0x20001ae92800 with size: 0.000183 MiB 00:07:29.125 element at address: 0x20001ae928c0 with size: 0.000183 MiB 00:07:29.125 element at address: 0x20001ae92980 with size: 0.000183 MiB 00:07:29.125 element at address: 0x20001ae92a40 with size: 0.000183 MiB 00:07:29.125 element at address: 0x20001ae92b00 with size: 0.000183 MiB 00:07:29.125 element at address: 0x20001ae92bc0 with size: 0.000183 MiB 00:07:29.125 element at address: 0x20001ae92c80 with size: 0.000183 MiB 00:07:29.125 element at address: 0x20001ae92d40 with size: 0.000183 MiB 00:07:29.125 element at address: 0x20001ae92e00 with size: 0.000183 MiB 00:07:29.125 element at address: 0x20001ae92ec0 with size: 0.000183 MiB 00:07:29.125 element at address: 0x20001ae92f80 with size: 0.000183 MiB 00:07:29.125 element at address: 0x20001ae93040 with size: 0.000183 MiB 00:07:29.125 element at address: 0x20001ae93100 with size: 0.000183 MiB 00:07:29.125 element at address: 0x20001ae931c0 with size: 0.000183 MiB 00:07:29.125 element at address: 0x20001ae93280 with size: 0.000183 MiB 00:07:29.125 element at address: 0x20001ae93340 with size: 0.000183 MiB 00:07:29.125 element at address: 0x20001ae93400 with size: 0.000183 MiB 00:07:29.125 element at address: 0x20001ae934c0 with size: 0.000183 MiB 00:07:29.125 element at address: 0x20001ae93580 with size: 0.000183 MiB 00:07:29.125 element at address: 0x20001ae93640 with size: 0.000183 MiB 00:07:29.125 element at address: 0x20001ae93700 with size: 0.000183 MiB 00:07:29.125 element at address: 0x20001ae937c0 with size: 0.000183 MiB 00:07:29.125 element at address: 0x20001ae93880 with size: 0.000183 MiB 00:07:29.125 element at address: 0x20001ae93940 with size: 0.000183 MiB 00:07:29.125 element at address: 0x20001ae93a00 with size: 0.000183 MiB 00:07:29.125 element at address: 0x20001ae93ac0 with size: 0.000183 MiB 00:07:29.125 element at address: 0x20001ae93b80 with size: 0.000183 MiB 00:07:29.125 element at address: 0x20001ae93c40 with size: 0.000183 MiB 00:07:29.126 element at address: 0x20001ae93d00 with size: 0.000183 MiB 00:07:29.126 element at address: 0x20001ae93dc0 with size: 0.000183 MiB 00:07:29.126 element at address: 0x20001ae93e80 with size: 0.000183 MiB 00:07:29.126 element at address: 0x20001ae93f40 with size: 0.000183 MiB 00:07:29.126 element at address: 0x20001ae94000 with size: 0.000183 MiB 00:07:29.126 element at address: 0x20001ae940c0 with size: 0.000183 MiB 00:07:29.126 element at address: 0x20001ae94180 with size: 0.000183 MiB 00:07:29.126 element at address: 0x20001ae94240 with size: 0.000183 MiB 00:07:29.126 element at address: 0x20001ae94300 with size: 0.000183 MiB 00:07:29.126 element at address: 0x20001ae943c0 with size: 0.000183 MiB 00:07:29.126 element at address: 0x20001ae94480 with size: 0.000183 MiB 00:07:29.126 element at address: 0x20001ae94540 with size: 0.000183 MiB 00:07:29.126 element at address: 0x20001ae94600 with size: 0.000183 MiB 00:07:29.126 element at 
address: 0x20001ae946c0 with size: 0.000183 MiB 00:07:29.126 element at address: 0x20001ae94780 with size: 0.000183 MiB 00:07:29.126 element at address: 0x20001ae94840 with size: 0.000183 MiB 00:07:29.126 element at address: 0x20001ae94900 with size: 0.000183 MiB 00:07:29.126 element at address: 0x20001ae949c0 with size: 0.000183 MiB 00:07:29.126 element at address: 0x20001ae94a80 with size: 0.000183 MiB 00:07:29.126 element at address: 0x20001ae94b40 with size: 0.000183 MiB 00:07:29.126 element at address: 0x20001ae94c00 with size: 0.000183 MiB 00:07:29.126 element at address: 0x20001ae94cc0 with size: 0.000183 MiB 00:07:29.126 element at address: 0x20001ae94d80 with size: 0.000183 MiB 00:07:29.126 element at address: 0x20001ae94e40 with size: 0.000183 MiB 00:07:29.126 element at address: 0x20001ae94f00 with size: 0.000183 MiB 00:07:29.126 element at address: 0x20001ae94fc0 with size: 0.000183 MiB 00:07:29.126 element at address: 0x20001ae95080 with size: 0.000183 MiB 00:07:29.126 element at address: 0x20001ae95140 with size: 0.000183 MiB 00:07:29.126 element at address: 0x20001ae95200 with size: 0.000183 MiB 00:07:29.126 element at address: 0x20001ae952c0 with size: 0.000183 MiB 00:07:29.126 element at address: 0x20001ae95380 with size: 0.000183 MiB 00:07:29.126 element at address: 0x20001ae95440 with size: 0.000183 MiB 00:07:29.126 element at address: 0x200028265500 with size: 0.000183 MiB 00:07:29.126 element at address: 0x2000282655c0 with size: 0.000183 MiB 00:07:29.126 element at address: 0x20002826c1c0 with size: 0.000183 MiB 00:07:29.126 element at address: 0x20002826c3c0 with size: 0.000183 MiB 00:07:29.126 element at address: 0x20002826c480 with size: 0.000183 MiB 00:07:29.126 element at address: 0x20002826c540 with size: 0.000183 MiB 00:07:29.126 element at address: 0x20002826c600 with size: 0.000183 MiB 00:07:29.126 element at address: 0x20002826c6c0 with size: 0.000183 MiB 00:07:29.126 element at address: 0x20002826c780 with size: 0.000183 MiB 00:07:29.126 element at address: 0x20002826c840 with size: 0.000183 MiB 00:07:29.126 element at address: 0x20002826c900 with size: 0.000183 MiB 00:07:29.126 element at address: 0x20002826c9c0 with size: 0.000183 MiB 00:07:29.126 element at address: 0x20002826ca80 with size: 0.000183 MiB 00:07:29.126 element at address: 0x20002826cb40 with size: 0.000183 MiB 00:07:29.126 element at address: 0x20002826cc00 with size: 0.000183 MiB 00:07:29.126 element at address: 0x20002826ccc0 with size: 0.000183 MiB 00:07:29.126 element at address: 0x20002826cd80 with size: 0.000183 MiB 00:07:29.126 element at address: 0x20002826ce40 with size: 0.000183 MiB 00:07:29.126 element at address: 0x20002826cf00 with size: 0.000183 MiB 00:07:29.126 element at address: 0x20002826cfc0 with size: 0.000183 MiB 00:07:29.126 element at address: 0x20002826d080 with size: 0.000183 MiB 00:07:29.126 element at address: 0x20002826d140 with size: 0.000183 MiB 00:07:29.126 element at address: 0x20002826d200 with size: 0.000183 MiB 00:07:29.126 element at address: 0x20002826d2c0 with size: 0.000183 MiB 00:07:29.126 element at address: 0x20002826d380 with size: 0.000183 MiB 00:07:29.126 element at address: 0x20002826d440 with size: 0.000183 MiB 00:07:29.126 element at address: 0x20002826d500 with size: 0.000183 MiB 00:07:29.126 element at address: 0x20002826d5c0 with size: 0.000183 MiB 00:07:29.126 element at address: 0x20002826d680 with size: 0.000183 MiB 00:07:29.126 element at address: 0x20002826d740 with size: 0.000183 MiB 00:07:29.126 element at address: 0x20002826d800 
with size: 0.000183 MiB 00:07:29.126 element at address: 0x20002826d8c0 with size: 0.000183 MiB 00:07:29.126 element at address: 0x20002826d980 with size: 0.000183 MiB 00:07:29.126 element at address: 0x20002826da40 with size: 0.000183 MiB 00:07:29.126 element at address: 0x20002826db00 with size: 0.000183 MiB 00:07:29.126 element at address: 0x20002826dbc0 with size: 0.000183 MiB 00:07:29.126 element at address: 0x20002826dc80 with size: 0.000183 MiB 00:07:29.126 element at address: 0x20002826dd40 with size: 0.000183 MiB 00:07:29.126 element at address: 0x20002826de00 with size: 0.000183 MiB 00:07:29.126 element at address: 0x20002826dec0 with size: 0.000183 MiB 00:07:29.126 element at address: 0x20002826df80 with size: 0.000183 MiB 00:07:29.126 element at address: 0x20002826e040 with size: 0.000183 MiB 00:07:29.126 element at address: 0x20002826e100 with size: 0.000183 MiB 00:07:29.126 element at address: 0x20002826e1c0 with size: 0.000183 MiB 00:07:29.126 element at address: 0x20002826e280 with size: 0.000183 MiB 00:07:29.126 element at address: 0x20002826e340 with size: 0.000183 MiB 00:07:29.126 element at address: 0x20002826e400 with size: 0.000183 MiB 00:07:29.126 element at address: 0x20002826e4c0 with size: 0.000183 MiB 00:07:29.126 element at address: 0x20002826e580 with size: 0.000183 MiB 00:07:29.126 element at address: 0x20002826e640 with size: 0.000183 MiB 00:07:29.126 element at address: 0x20002826e700 with size: 0.000183 MiB 00:07:29.126 element at address: 0x20002826e7c0 with size: 0.000183 MiB 00:07:29.126 element at address: 0x20002826e880 with size: 0.000183 MiB 00:07:29.126 element at address: 0x20002826e940 with size: 0.000183 MiB 00:07:29.126 element at address: 0x20002826ea00 with size: 0.000183 MiB 00:07:29.126 element at address: 0x20002826eac0 with size: 0.000183 MiB 00:07:29.126 element at address: 0x20002826eb80 with size: 0.000183 MiB 00:07:29.126 element at address: 0x20002826ec40 with size: 0.000183 MiB 00:07:29.126 element at address: 0x20002826ed00 with size: 0.000183 MiB 00:07:29.126 element at address: 0x20002826edc0 with size: 0.000183 MiB 00:07:29.126 element at address: 0x20002826ee80 with size: 0.000183 MiB 00:07:29.126 element at address: 0x20002826ef40 with size: 0.000183 MiB 00:07:29.126 element at address: 0x20002826f000 with size: 0.000183 MiB 00:07:29.126 element at address: 0x20002826f0c0 with size: 0.000183 MiB 00:07:29.126 element at address: 0x20002826f180 with size: 0.000183 MiB 00:07:29.126 element at address: 0x20002826f240 with size: 0.000183 MiB 00:07:29.126 element at address: 0x20002826f300 with size: 0.000183 MiB 00:07:29.126 element at address: 0x20002826f3c0 with size: 0.000183 MiB 00:07:29.126 element at address: 0x20002826f480 with size: 0.000183 MiB 00:07:29.126 element at address: 0x20002826f540 with size: 0.000183 MiB 00:07:29.126 element at address: 0x20002826f600 with size: 0.000183 MiB 00:07:29.126 element at address: 0x20002826f6c0 with size: 0.000183 MiB 00:07:29.126 element at address: 0x20002826f780 with size: 0.000183 MiB 00:07:29.126 element at address: 0x20002826f840 with size: 0.000183 MiB 00:07:29.126 element at address: 0x20002826f900 with size: 0.000183 MiB 00:07:29.126 element at address: 0x20002826f9c0 with size: 0.000183 MiB 00:07:29.126 element at address: 0x20002826fa80 with size: 0.000183 MiB 00:07:29.126 element at address: 0x20002826fb40 with size: 0.000183 MiB 00:07:29.126 element at address: 0x20002826fc00 with size: 0.000183 MiB 00:07:29.126 element at address: 0x20002826fcc0 with size: 0.000183 MiB 
00:07:29.126 element at address: 0x20002826fd80 with size: 0.000183 MiB 00:07:29.126 element at address: 0x20002826fe40 with size: 0.000183 MiB 00:07:29.126 element at address: 0x20002826ff00 with size: 0.000183 MiB 00:07:29.126 list of memzone associated elements. size: 607.928894 MiB 00:07:29.127 element at address: 0x20001ae95500 with size: 211.416748 MiB 00:07:29.127 associated memzone info: size: 211.416626 MiB name: MP_PDU_immediate_data_Pool_0 00:07:29.127 element at address: 0x20002826ffc0 with size: 157.562561 MiB 00:07:29.127 associated memzone info: size: 157.562439 MiB name: MP_PDU_data_out_Pool_0 00:07:29.127 element at address: 0x200012df1e80 with size: 100.055054 MiB 00:07:29.127 associated memzone info: size: 100.054932 MiB name: MP_bdev_io_72014_0 00:07:29.127 element at address: 0x200000dff380 with size: 48.003052 MiB 00:07:29.127 associated memzone info: size: 48.002930 MiB name: MP_msgpool_72014_0 00:07:29.127 element at address: 0x200003ffdb80 with size: 36.008911 MiB 00:07:29.127 associated memzone info: size: 36.008789 MiB name: MP_fsdev_io_72014_0 00:07:29.127 element at address: 0x2000199be940 with size: 20.255554 MiB 00:07:29.127 associated memzone info: size: 20.255432 MiB name: MP_PDU_Pool_0 00:07:29.127 element at address: 0x2000321feb40 with size: 18.005066 MiB 00:07:29.127 associated memzone info: size: 18.004944 MiB name: MP_SCSI_TASK_Pool_0 00:07:29.127 element at address: 0x2000004fff00 with size: 3.000244 MiB 00:07:29.127 associated memzone info: size: 3.000122 MiB name: MP_evtpool_72014_0 00:07:29.127 element at address: 0x2000009ffe00 with size: 2.000488 MiB 00:07:29.127 associated memzone info: size: 2.000366 MiB name: RG_MP_msgpool_72014 00:07:29.127 element at address: 0x2000002fbd80 with size: 1.008118 MiB 00:07:29.127 associated memzone info: size: 1.007996 MiB name: MP_evtpool_72014 00:07:29.127 element at address: 0x20000a6fde40 with size: 1.008118 MiB 00:07:29.127 associated memzone info: size: 1.007996 MiB name: MP_PDU_Pool 00:07:29.127 element at address: 0x2000198bc800 with size: 1.008118 MiB 00:07:29.127 associated memzone info: size: 1.007996 MiB name: MP_PDU_immediate_data_Pool 00:07:29.127 element at address: 0x2000064fde40 with size: 1.008118 MiB 00:07:29.127 associated memzone info: size: 1.007996 MiB name: MP_PDU_data_out_Pool 00:07:29.127 element at address: 0x200003efba40 with size: 1.008118 MiB 00:07:29.127 associated memzone info: size: 1.007996 MiB name: MP_SCSI_TASK_Pool 00:07:29.127 element at address: 0x200000cff180 with size: 1.000488 MiB 00:07:29.127 associated memzone info: size: 1.000366 MiB name: RG_ring_0_72014 00:07:29.127 element at address: 0x2000008ffc00 with size: 1.000488 MiB 00:07:29.127 associated memzone info: size: 1.000366 MiB name: RG_ring_1_72014 00:07:29.127 element at address: 0x200012cf1c80 with size: 1.000488 MiB 00:07:29.127 associated memzone info: size: 1.000366 MiB name: RG_ring_4_72014 00:07:29.127 element at address: 0x2000320fe940 with size: 1.000488 MiB 00:07:29.127 associated memzone info: size: 1.000366 MiB name: RG_ring_5_72014 00:07:29.127 element at address: 0x20000087f740 with size: 0.500488 MiB 00:07:29.127 associated memzone info: size: 0.500366 MiB name: RG_MP_fsdev_io_72014 00:07:29.127 element at address: 0x200000c7ee00 with size: 0.500488 MiB 00:07:29.127 associated memzone info: size: 0.500366 MiB name: RG_MP_bdev_io_72014 00:07:29.127 element at address: 0x20000a67db80 with size: 0.500488 MiB 00:07:29.127 associated memzone info: size: 0.500366 MiB name: RG_MP_PDU_Pool 00:07:29.127 
element at address: 0x200003e7b780 with size: 0.500488 MiB 00:07:29.127 associated memzone info: size: 0.500366 MiB name: RG_MP_SCSI_TASK_Pool 00:07:29.127 element at address: 0x20001987c540 with size: 0.250488 MiB 00:07:29.127 associated memzone info: size: 0.250366 MiB name: RG_MP_PDU_immediate_data_Pool 00:07:29.127 element at address: 0x2000002dbac0 with size: 0.125488 MiB 00:07:29.127 associated memzone info: size: 0.125366 MiB name: RG_MP_evtpool_72014 00:07:29.127 element at address: 0x20000085e640 with size: 0.125488 MiB 00:07:29.127 associated memzone info: size: 0.125366 MiB name: RG_ring_2_72014 00:07:29.127 element at address: 0x2000064f5b80 with size: 0.031738 MiB 00:07:29.127 associated memzone info: size: 0.031616 MiB name: RG_MP_PDU_data_out_Pool 00:07:29.127 element at address: 0x200028265680 with size: 0.023743 MiB 00:07:29.127 associated memzone info: size: 0.023621 MiB name: MP_Session_Pool_0 00:07:29.127 element at address: 0x20000085a380 with size: 0.016113 MiB 00:07:29.127 associated memzone info: size: 0.015991 MiB name: RG_ring_3_72014 00:07:29.127 element at address: 0x20002826b7c0 with size: 0.002441 MiB 00:07:29.127 associated memzone info: size: 0.002319 MiB name: RG_MP_Session_Pool 00:07:29.127 element at address: 0x2000004ffb80 with size: 0.000305 MiB 00:07:29.127 associated memzone info: size: 0.000183 MiB name: MP_msgpool_72014 00:07:29.127 element at address: 0x2000008ffa00 with size: 0.000305 MiB 00:07:29.127 associated memzone info: size: 0.000183 MiB name: MP_fsdev_io_72014 00:07:29.127 element at address: 0x20000085a180 with size: 0.000305 MiB 00:07:29.127 associated memzone info: size: 0.000183 MiB name: MP_bdev_io_72014 00:07:29.127 element at address: 0x20002826c280 with size: 0.000305 MiB 00:07:29.127 associated memzone info: size: 0.000183 MiB name: MP_Session_Pool 00:07:29.127 11:37:59 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@25 -- # trap - SIGINT SIGTERM EXIT 00:07:29.127 11:37:59 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@26 -- # killprocess 72014 00:07:29.127 11:37:59 dpdk_mem_utility -- common/autotest_common.sh@954 -- # '[' -z 72014 ']' 00:07:29.127 11:37:59 dpdk_mem_utility -- common/autotest_common.sh@958 -- # kill -0 72014 00:07:29.127 11:37:59 dpdk_mem_utility -- common/autotest_common.sh@959 -- # uname 00:07:29.127 11:37:59 dpdk_mem_utility -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:07:29.127 11:37:59 dpdk_mem_utility -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 72014 00:07:29.127 11:37:59 dpdk_mem_utility -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:07:29.127 11:37:59 dpdk_mem_utility -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:07:29.127 11:37:59 dpdk_mem_utility -- common/autotest_common.sh@972 -- # echo 'killing process with pid 72014' 00:07:29.127 killing process with pid 72014 00:07:29.127 11:37:59 dpdk_mem_utility -- common/autotest_common.sh@973 -- # kill 72014 00:07:29.127 11:37:59 dpdk_mem_utility -- common/autotest_common.sh@978 -- # wait 72014 00:07:29.386 00:07:29.386 real 0m1.271s 00:07:29.386 user 0m1.230s 00:07:29.386 sys 0m0.423s 00:07:29.386 11:37:59 dpdk_mem_utility -- common/autotest_common.sh@1130 -- # xtrace_disable 00:07:29.386 ************************************ 00:07:29.386 END TEST dpdk_mem_utility 00:07:29.386 ************************************ 00:07:29.386 11:37:59 dpdk_mem_utility -- common/autotest_common.sh@10 -- # set +x 00:07:29.386 11:37:59 -- spdk/autotest.sh@168 -- # run_test event 
/home/vagrant/spdk_repo/spdk/test/event/event.sh 00:07:29.386 11:37:59 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:07:29.386 11:37:59 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:07:29.386 11:37:59 -- common/autotest_common.sh@10 -- # set +x 00:07:29.386 ************************************ 00:07:29.386 START TEST event 00:07:29.386 ************************************ 00:07:29.386 11:37:59 event -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/event/event.sh 00:07:29.645 * Looking for test storage... 00:07:29.645 * Found test storage at /home/vagrant/spdk_repo/spdk/test/event 00:07:29.645 11:37:59 event -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:07:29.645 11:37:59 event -- common/autotest_common.sh@1693 -- # lcov --version 00:07:29.645 11:37:59 event -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:07:29.645 11:37:59 event -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:07:29.645 11:37:59 event -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:07:29.645 11:37:59 event -- scripts/common.sh@333 -- # local ver1 ver1_l 00:07:29.645 11:37:59 event -- scripts/common.sh@334 -- # local ver2 ver2_l 00:07:29.645 11:37:59 event -- scripts/common.sh@336 -- # IFS=.-: 00:07:29.645 11:37:59 event -- scripts/common.sh@336 -- # read -ra ver1 00:07:29.645 11:37:59 event -- scripts/common.sh@337 -- # IFS=.-: 00:07:29.645 11:37:59 event -- scripts/common.sh@337 -- # read -ra ver2 00:07:29.645 11:37:59 event -- scripts/common.sh@338 -- # local 'op=<' 00:07:29.645 11:37:59 event -- scripts/common.sh@340 -- # ver1_l=2 00:07:29.645 11:37:59 event -- scripts/common.sh@341 -- # ver2_l=1 00:07:29.645 11:37:59 event -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:07:29.645 11:37:59 event -- scripts/common.sh@344 -- # case "$op" in 00:07:29.645 11:37:59 event -- scripts/common.sh@345 -- # : 1 00:07:29.645 11:37:59 event -- scripts/common.sh@364 -- # (( v = 0 )) 00:07:29.645 11:37:59 event -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:07:29.645 11:37:59 event -- scripts/common.sh@365 -- # decimal 1 00:07:29.645 11:37:59 event -- scripts/common.sh@353 -- # local d=1 00:07:29.645 11:37:59 event -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:07:29.645 11:37:59 event -- scripts/common.sh@355 -- # echo 1 00:07:29.645 11:37:59 event -- scripts/common.sh@365 -- # ver1[v]=1 00:07:29.645 11:37:59 event -- scripts/common.sh@366 -- # decimal 2 00:07:29.645 11:37:59 event -- scripts/common.sh@353 -- # local d=2 00:07:29.645 11:37:59 event -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:07:29.645 11:37:59 event -- scripts/common.sh@355 -- # echo 2 00:07:29.645 11:37:59 event -- scripts/common.sh@366 -- # ver2[v]=2 00:07:29.645 11:37:59 event -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:07:29.645 11:37:59 event -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:07:29.645 11:37:59 event -- scripts/common.sh@368 -- # return 0 00:07:29.645 11:37:59 event -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:07:29.645 11:37:59 event -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:07:29.645 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:29.645 --rc genhtml_branch_coverage=1 00:07:29.645 --rc genhtml_function_coverage=1 00:07:29.645 --rc genhtml_legend=1 00:07:29.645 --rc geninfo_all_blocks=1 00:07:29.645 --rc geninfo_unexecuted_blocks=1 00:07:29.645 00:07:29.645 ' 00:07:29.645 11:37:59 event -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:07:29.645 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:29.645 --rc genhtml_branch_coverage=1 00:07:29.645 --rc genhtml_function_coverage=1 00:07:29.645 --rc genhtml_legend=1 00:07:29.645 --rc geninfo_all_blocks=1 00:07:29.645 --rc geninfo_unexecuted_blocks=1 00:07:29.645 00:07:29.645 ' 00:07:29.645 11:37:59 event -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:07:29.646 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:29.646 --rc genhtml_branch_coverage=1 00:07:29.646 --rc genhtml_function_coverage=1 00:07:29.646 --rc genhtml_legend=1 00:07:29.646 --rc geninfo_all_blocks=1 00:07:29.646 --rc geninfo_unexecuted_blocks=1 00:07:29.646 00:07:29.646 ' 00:07:29.646 11:37:59 event -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:07:29.646 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:29.646 --rc genhtml_branch_coverage=1 00:07:29.646 --rc genhtml_function_coverage=1 00:07:29.646 --rc genhtml_legend=1 00:07:29.646 --rc geninfo_all_blocks=1 00:07:29.646 --rc geninfo_unexecuted_blocks=1 00:07:29.646 00:07:29.646 ' 00:07:29.646 11:37:59 event -- event/event.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/bdev/nbd_common.sh 00:07:29.646 11:37:59 event -- bdev/nbd_common.sh@6 -- # set -e 00:07:29.646 11:37:59 event -- event/event.sh@45 -- # run_test event_perf /home/vagrant/spdk_repo/spdk/test/event/event_perf/event_perf -m 0xF -t 1 00:07:29.646 11:37:59 event -- common/autotest_common.sh@1105 -- # '[' 6 -le 1 ']' 00:07:29.646 11:37:59 event -- common/autotest_common.sh@1111 -- # xtrace_disable 00:07:29.646 11:37:59 event -- common/autotest_common.sh@10 -- # set +x 00:07:29.646 ************************************ 00:07:29.646 START TEST event_perf 00:07:29.646 ************************************ 00:07:29.646 11:37:59 event.event_perf -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/event/event_perf/event_perf -m 0xF -t 1 00:07:29.646 Running I/O for 1 seconds...[2024-11-28 
11:37:59.698701] Starting SPDK v25.01-pre git sha1 35cd3e84d / DPDK 24.11.0-rc4 initialization... 00:07:29.646 [2024-11-28 11:37:59.698919] [ DPDK EAL parameters: event_perf --no-shconf -c 0xF --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid72086 ] 00:07:29.905 [2024-11-28 11:37:59.821486] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.11.0-rc4 is used. There is no support for it in SPDK. Enabled only for validation. 00:07:29.905 [2024-11-28 11:37:59.848666] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:07:29.905 [2024-11-28 11:37:59.906764] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:07:29.905 [2024-11-28 11:37:59.906848] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:07:29.905 [2024-11-28 11:37:59.906931] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:29.905 Running I/O for 1 seconds...[2024-11-28 11:37:59.906930] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:07:30.842 00:07:30.842 lcore 0: 202127 00:07:30.842 lcore 1: 202127 00:07:30.842 lcore 2: 202127 00:07:30.842 lcore 3: 202127 00:07:30.842 done. 00:07:30.842 ************************************ 00:07:30.842 END TEST event_perf 00:07:30.842 ************************************ 00:07:30.842 00:07:30.843 real 0m1.273s 00:07:30.843 user 0m4.095s 00:07:30.843 sys 0m0.058s 00:07:30.843 11:38:00 event.event_perf -- common/autotest_common.sh@1130 -- # xtrace_disable 00:07:30.843 11:38:00 event.event_perf -- common/autotest_common.sh@10 -- # set +x 00:07:31.102 11:38:00 event -- event/event.sh@46 -- # run_test event_reactor /home/vagrant/spdk_repo/spdk/test/event/reactor/reactor -t 1 00:07:31.102 11:38:00 event -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:07:31.102 11:38:00 event -- common/autotest_common.sh@1111 -- # xtrace_disable 00:07:31.102 11:38:00 event -- common/autotest_common.sh@10 -- # set +x 00:07:31.102 ************************************ 00:07:31.102 START TEST event_reactor 00:07:31.102 ************************************ 00:07:31.102 11:38:00 event.event_reactor -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/event/reactor/reactor -t 1 00:07:31.102 [2024-11-28 11:38:01.017039] Starting SPDK v25.01-pre git sha1 35cd3e84d / DPDK 24.11.0-rc4 initialization... 00:07:31.102 [2024-11-28 11:38:01.017134] [ DPDK EAL parameters: reactor --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid72124 ] 00:07:31.102 [2024-11-28 11:38:01.137987] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.11.0-rc4 is used. There is no support for it in SPDK. Enabled only for validation. 
00:07:31.102 [2024-11-28 11:38:01.167684] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:31.102 [2024-11-28 11:38:01.204002] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:32.486 test_start 00:07:32.486 oneshot 00:07:32.486 tick 100 00:07:32.486 tick 100 00:07:32.486 tick 250 00:07:32.486 tick 100 00:07:32.486 tick 100 00:07:32.486 tick 250 00:07:32.486 tick 500 00:07:32.486 tick 100 00:07:32.486 tick 100 00:07:32.486 tick 100 00:07:32.486 tick 250 00:07:32.486 tick 100 00:07:32.486 tick 100 00:07:32.486 test_end 00:07:32.486 ************************************ 00:07:32.486 END TEST event_reactor 00:07:32.486 ************************************ 00:07:32.486 00:07:32.486 real 0m1.247s 00:07:32.486 user 0m1.095s 00:07:32.486 sys 0m0.046s 00:07:32.486 11:38:02 event.event_reactor -- common/autotest_common.sh@1130 -- # xtrace_disable 00:07:32.486 11:38:02 event.event_reactor -- common/autotest_common.sh@10 -- # set +x 00:07:32.486 11:38:02 event -- event/event.sh@47 -- # run_test event_reactor_perf /home/vagrant/spdk_repo/spdk/test/event/reactor_perf/reactor_perf -t 1 00:07:32.486 11:38:02 event -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:07:32.486 11:38:02 event -- common/autotest_common.sh@1111 -- # xtrace_disable 00:07:32.486 11:38:02 event -- common/autotest_common.sh@10 -- # set +x 00:07:32.486 ************************************ 00:07:32.486 START TEST event_reactor_perf 00:07:32.486 ************************************ 00:07:32.486 11:38:02 event.event_reactor_perf -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/event/reactor_perf/reactor_perf -t 1 00:07:32.486 [2024-11-28 11:38:02.317953] Starting SPDK v25.01-pre git sha1 35cd3e84d / DPDK 24.11.0-rc4 initialization... 00:07:32.486 [2024-11-28 11:38:02.318046] [ DPDK EAL parameters: reactor_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid72160 ] 00:07:32.486 [2024-11-28 11:38:02.438946] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.11.0-rc4 is used. There is no support for it in SPDK. Enabled only for validation. 
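For reference, the event_perf, reactor, and reactor_perf runs in this part of the log are direct invocations of the SPDK test binaries by test/event/event.sh. A minimal sketch of running them by hand, using the same paths and options that appear in the trace (illustrative only, not part of the recorded run):

```bash
#!/usr/bin/env bash
# Sketch only: re-run the three event tests seen in this log by hand.
# Assumes a built SPDK tree at the path used throughout this log.
set -euo pipefail

SPDK_DIR=${SPDK_DIR:-/home/vagrant/spdk_repo/spdk}

# event_perf: 1-second measurement across 4 cores (mask 0xF), as above.
"$SPDK_DIR/test/event/event_perf/event_perf" -m 0xF -t 1

# reactor: single-reactor tick test for 1 second.
"$SPDK_DIR/test/event/reactor/reactor" -t 1

# reactor_perf: single-reactor event throughput for 1 second.
"$SPDK_DIR/test/event/reactor_perf/reactor_perf" -t 1
```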
00:07:32.486 [2024-11-28 11:38:02.463428] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:32.486 [2024-11-28 11:38:02.493527] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:33.435 test_start 00:07:33.435 test_end 00:07:33.435 Performance: 390755 events per second 00:07:33.435 ************************************ 00:07:33.435 END TEST event_reactor_perf 00:07:33.435 ************************************ 00:07:33.435 00:07:33.435 real 0m1.238s 00:07:33.435 user 0m1.089s 00:07:33.435 sys 0m0.042s 00:07:33.435 11:38:03 event.event_reactor_perf -- common/autotest_common.sh@1130 -- # xtrace_disable 00:07:33.435 11:38:03 event.event_reactor_perf -- common/autotest_common.sh@10 -- # set +x 00:07:33.694 11:38:03 event -- event/event.sh@49 -- # uname -s 00:07:33.694 11:38:03 event -- event/event.sh@49 -- # '[' Linux = Linux ']' 00:07:33.694 11:38:03 event -- event/event.sh@50 -- # run_test event_scheduler /home/vagrant/spdk_repo/spdk/test/event/scheduler/scheduler.sh 00:07:33.694 11:38:03 event -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:07:33.694 11:38:03 event -- common/autotest_common.sh@1111 -- # xtrace_disable 00:07:33.694 11:38:03 event -- common/autotest_common.sh@10 -- # set +x 00:07:33.694 ************************************ 00:07:33.694 START TEST event_scheduler 00:07:33.694 ************************************ 00:07:33.694 11:38:03 event.event_scheduler -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/event/scheduler/scheduler.sh 00:07:33.694 * Looking for test storage... 00:07:33.694 * Found test storage at /home/vagrant/spdk_repo/spdk/test/event/scheduler 00:07:33.694 11:38:03 event.event_scheduler -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:07:33.694 11:38:03 event.event_scheduler -- common/autotest_common.sh@1693 -- # lcov --version 00:07:33.694 11:38:03 event.event_scheduler -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:07:33.694 11:38:03 event.event_scheduler -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:07:33.694 11:38:03 event.event_scheduler -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:07:33.694 11:38:03 event.event_scheduler -- scripts/common.sh@333 -- # local ver1 ver1_l 00:07:33.695 11:38:03 event.event_scheduler -- scripts/common.sh@334 -- # local ver2 ver2_l 00:07:33.695 11:38:03 event.event_scheduler -- scripts/common.sh@336 -- # IFS=.-: 00:07:33.695 11:38:03 event.event_scheduler -- scripts/common.sh@336 -- # read -ra ver1 00:07:33.695 11:38:03 event.event_scheduler -- scripts/common.sh@337 -- # IFS=.-: 00:07:33.695 11:38:03 event.event_scheduler -- scripts/common.sh@337 -- # read -ra ver2 00:07:33.695 11:38:03 event.event_scheduler -- scripts/common.sh@338 -- # local 'op=<' 00:07:33.695 11:38:03 event.event_scheduler -- scripts/common.sh@340 -- # ver1_l=2 00:07:33.695 11:38:03 event.event_scheduler -- scripts/common.sh@341 -- # ver2_l=1 00:07:33.695 11:38:03 event.event_scheduler -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:07:33.695 11:38:03 event.event_scheduler -- scripts/common.sh@344 -- # case "$op" in 00:07:33.695 11:38:03 event.event_scheduler -- scripts/common.sh@345 -- # : 1 00:07:33.695 11:38:03 event.event_scheduler -- scripts/common.sh@364 -- # (( v = 0 )) 00:07:33.695 11:38:03 event.event_scheduler -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:07:33.695 11:38:03 event.event_scheduler -- scripts/common.sh@365 -- # decimal 1 00:07:33.695 11:38:03 event.event_scheduler -- scripts/common.sh@353 -- # local d=1 00:07:33.695 11:38:03 event.event_scheduler -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:07:33.695 11:38:03 event.event_scheduler -- scripts/common.sh@355 -- # echo 1 00:07:33.695 11:38:03 event.event_scheduler -- scripts/common.sh@365 -- # ver1[v]=1 00:07:33.695 11:38:03 event.event_scheduler -- scripts/common.sh@366 -- # decimal 2 00:07:33.695 11:38:03 event.event_scheduler -- scripts/common.sh@353 -- # local d=2 00:07:33.695 11:38:03 event.event_scheduler -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:07:33.695 11:38:03 event.event_scheduler -- scripts/common.sh@355 -- # echo 2 00:07:33.695 11:38:03 event.event_scheduler -- scripts/common.sh@366 -- # ver2[v]=2 00:07:33.695 11:38:03 event.event_scheduler -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:07:33.695 11:38:03 event.event_scheduler -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:07:33.695 11:38:03 event.event_scheduler -- scripts/common.sh@368 -- # return 0 00:07:33.695 11:38:03 event.event_scheduler -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:07:33.695 11:38:03 event.event_scheduler -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:07:33.695 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:33.695 --rc genhtml_branch_coverage=1 00:07:33.695 --rc genhtml_function_coverage=1 00:07:33.695 --rc genhtml_legend=1 00:07:33.695 --rc geninfo_all_blocks=1 00:07:33.695 --rc geninfo_unexecuted_blocks=1 00:07:33.695 00:07:33.695 ' 00:07:33.695 11:38:03 event.event_scheduler -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:07:33.695 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:33.695 --rc genhtml_branch_coverage=1 00:07:33.695 --rc genhtml_function_coverage=1 00:07:33.695 --rc genhtml_legend=1 00:07:33.695 --rc geninfo_all_blocks=1 00:07:33.695 --rc geninfo_unexecuted_blocks=1 00:07:33.695 00:07:33.695 ' 00:07:33.695 11:38:03 event.event_scheduler -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:07:33.695 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:33.695 --rc genhtml_branch_coverage=1 00:07:33.695 --rc genhtml_function_coverage=1 00:07:33.695 --rc genhtml_legend=1 00:07:33.695 --rc geninfo_all_blocks=1 00:07:33.695 --rc geninfo_unexecuted_blocks=1 00:07:33.695 00:07:33.695 ' 00:07:33.695 11:38:03 event.event_scheduler -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:07:33.695 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:33.695 --rc genhtml_branch_coverage=1 00:07:33.695 --rc genhtml_function_coverage=1 00:07:33.695 --rc genhtml_legend=1 00:07:33.695 --rc geninfo_all_blocks=1 00:07:33.695 --rc geninfo_unexecuted_blocks=1 00:07:33.695 00:07:33.695 ' 00:07:33.695 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:07:33.695 11:38:03 event.event_scheduler -- scheduler/scheduler.sh@29 -- # rpc=rpc_cmd 00:07:33.695 11:38:03 event.event_scheduler -- scheduler/scheduler.sh@35 -- # scheduler_pid=72224 00:07:33.695 11:38:03 event.event_scheduler -- scheduler/scheduler.sh@36 -- # trap 'killprocess $scheduler_pid; exit 1' SIGINT SIGTERM EXIT 00:07:33.695 11:38:03 event.event_scheduler -- scheduler/scheduler.sh@37 -- # waitforlisten 72224 00:07:33.695 11:38:03 event.event_scheduler -- scheduler/scheduler.sh@34 -- # /home/vagrant/spdk_repo/spdk/test/event/scheduler/scheduler -m 0xF -p 0x2 --wait-for-rpc -f 00:07:33.695 11:38:03 event.event_scheduler -- common/autotest_common.sh@835 -- # '[' -z 72224 ']' 00:07:33.695 11:38:03 event.event_scheduler -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:33.695 11:38:03 event.event_scheduler -- common/autotest_common.sh@840 -- # local max_retries=100 00:07:33.695 11:38:03 event.event_scheduler -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:33.695 11:38:03 event.event_scheduler -- common/autotest_common.sh@844 -- # xtrace_disable 00:07:33.695 11:38:03 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:07:33.955 [2024-11-28 11:38:03.833666] Starting SPDK v25.01-pre git sha1 35cd3e84d / DPDK 24.11.0-rc4 initialization... 00:07:33.955 [2024-11-28 11:38:03.834629] [ DPDK EAL parameters: scheduler --no-shconf -c 0xF --main-lcore=2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid72224 ] 00:07:33.955 [2024-11-28 11:38:03.961431] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.11.0-rc4 is used. There is no support for it in SPDK. Enabled only for validation. 
00:07:33.955 [2024-11-28 11:38:03.993732] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:07:33.955 [2024-11-28 11:38:04.050545] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:33.955 [2024-11-28 11:38:04.050672] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:07:33.955 [2024-11-28 11:38:04.050780] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:07:33.955 [2024-11-28 11:38:04.050787] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:07:34.215 11:38:04 event.event_scheduler -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:07:34.215 11:38:04 event.event_scheduler -- common/autotest_common.sh@868 -- # return 0 00:07:34.215 11:38:04 event.event_scheduler -- scheduler/scheduler.sh@39 -- # rpc_cmd framework_set_scheduler dynamic 00:07:34.215 11:38:04 event.event_scheduler -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:34.215 11:38:04 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:07:34.215 GUEST_CHANNEL: Opening channel '/dev/virtio-ports/virtio.serial.port.poweragent.0' for lcore 0 00:07:34.215 GUEST_CHANNEL: Unable to connect to '/dev/virtio-ports/virtio.serial.port.poweragent.0' with error No such file or directory 00:07:34.215 POWER: intel_pstate driver is not supported 00:07:34.215 POWER: cppc_cpufreq driver is not supported 00:07:34.215 POWER: amd-pstate driver is not supported 00:07:34.215 POWER: acpi-cpufreq driver is not supported 00:07:34.215 POWER: Unable to set Power Management Environment for lcore 0 00:07:34.215 [2024-11-28 11:38:04.117808] dpdk_governor.c: 135:_init_core: *ERROR*: Failed to initialize on core0 00:07:34.215 [2024-11-28 11:38:04.117844] dpdk_governor.c: 196:_init: *ERROR*: Failed to initialize on core0 00:07:34.215 [2024-11-28 11:38:04.117858] scheduler_dynamic.c: 280:init: *NOTICE*: Unable to initialize dpdk governor 00:07:34.215 [2024-11-28 11:38:04.117874] scheduler_dynamic.c: 427:set_opts: *NOTICE*: Setting scheduler load limit to 20 00:07:34.215 [2024-11-28 11:38:04.117884] scheduler_dynamic.c: 429:set_opts: *NOTICE*: Setting scheduler core limit to 80 00:07:34.215 [2024-11-28 11:38:04.117893] scheduler_dynamic.c: 431:set_opts: *NOTICE*: Setting scheduler core busy to 95 00:07:34.215 11:38:04 event.event_scheduler -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:34.215 11:38:04 event.event_scheduler -- scheduler/scheduler.sh@40 -- # rpc_cmd framework_start_init 00:07:34.215 11:38:04 event.event_scheduler -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:34.215 11:38:04 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:07:34.215 [2024-11-28 11:38:04.184565] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:07:34.215 [2024-11-28 11:38:04.224625] scheduler.c: 382:test_start: *NOTICE*: Scheduler test application started. 
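The GUEST_CHANNEL/POWER messages above come from the dynamic scheduler probing cpufreq drivers that are not present in this VM; the run continues with the scheduler's default load/core/busy limits. A hedged sketch of the RPC sequence scheduler.sh issues at this point (the test app was started with --wait-for-rpc, so initialization is finished over RPC); framework_get_scheduler is not shown in the log and is included only as an assumed convenience call:

```bash
#!/usr/bin/env bash
# Sketch only: mirrors the rpc_cmd calls traced in this section.
# Assumes rpc.py from the same SPDK tree and the default /var/tmp/spdk.sock socket.
set -euo pipefail

RPC=/home/vagrant/spdk_repo/spdk/scripts/rpc.py

# Select the dynamic scheduler before subsystem init completes
# ("rpc_cmd framework_set_scheduler dynamic" above).
"$RPC" framework_set_scheduler dynamic

# Finish initialization ("rpc_cmd framework_start_init" above).
"$RPC" framework_start_init

# Assumed extra check, not in the log: report the active scheduler.
"$RPC" framework_get_scheduler
```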
00:07:34.215 11:38:04 event.event_scheduler -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:34.215 11:38:04 event.event_scheduler -- scheduler/scheduler.sh@43 -- # run_test scheduler_create_thread scheduler_create_thread 00:07:34.215 11:38:04 event.event_scheduler -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:07:34.215 11:38:04 event.event_scheduler -- common/autotest_common.sh@1111 -- # xtrace_disable 00:07:34.215 11:38:04 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:07:34.215 ************************************ 00:07:34.215 START TEST scheduler_create_thread 00:07:34.215 ************************************ 00:07:34.215 11:38:04 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@1129 -- # scheduler_create_thread 00:07:34.215 11:38:04 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@12 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x1 -a 100 00:07:34.215 11:38:04 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:34.215 11:38:04 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:07:34.215 2 00:07:34.215 11:38:04 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:34.215 11:38:04 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@13 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x2 -a 100 00:07:34.215 11:38:04 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:34.215 11:38:04 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:07:34.215 3 00:07:34.215 11:38:04 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:34.215 11:38:04 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@14 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x4 -a 100 00:07:34.215 11:38:04 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:34.215 11:38:04 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:07:34.215 4 00:07:34.215 11:38:04 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:34.215 11:38:04 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@15 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x8 -a 100 00:07:34.215 11:38:04 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:34.215 11:38:04 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:07:34.215 5 00:07:34.215 11:38:04 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:34.215 11:38:04 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@16 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x1 -a 0 00:07:34.216 11:38:04 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:34.216 11:38:04 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:07:34.216 6 00:07:34.216 11:38:04 event.event_scheduler.scheduler_create_thread -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:34.216 11:38:04 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@17 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x2 -a 0 00:07:34.216 11:38:04 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:34.216 11:38:04 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:07:34.216 7 00:07:34.216 11:38:04 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:34.216 11:38:04 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@18 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x4 -a 0 00:07:34.216 11:38:04 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:34.216 11:38:04 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:07:34.216 8 00:07:34.216 11:38:04 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:34.216 11:38:04 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@19 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x8 -a 0 00:07:34.216 11:38:04 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:34.216 11:38:04 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:07:34.216 9 00:07:34.216 11:38:04 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:34.216 11:38:04 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@21 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n one_third_active -a 30 00:07:34.216 11:38:04 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:34.216 11:38:04 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:07:34.216 10 00:07:34.216 11:38:04 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:34.216 11:38:04 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@22 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n half_active -a 0 00:07:34.216 11:38:04 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:34.216 11:38:04 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:07:34.216 11:38:04 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:34.216 11:38:04 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@22 -- # thread_id=11 00:07:34.216 11:38:04 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@23 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_set_active 11 50 00:07:34.216 11:38:04 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:34.216 11:38:04 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:07:34.216 11:38:04 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:34.216 11:38:04 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@25 -- # rpc_cmd --plugin 
scheduler_plugin scheduler_thread_create -n deleted -a 100 00:07:34.216 11:38:04 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:34.216 11:38:04 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:07:35.151 11:38:04 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:35.151 11:38:04 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@25 -- # thread_id=12 00:07:35.151 11:38:04 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@26 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_delete 12 00:07:35.151 11:38:04 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:35.151 11:38:04 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:07:36.089 ************************************ 00:07:36.089 END TEST scheduler_create_thread 00:07:36.089 ************************************ 00:07:36.089 11:38:05 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:36.089 00:07:36.089 real 0m1.752s 00:07:36.089 user 0m0.011s 00:07:36.089 sys 0m0.007s 00:07:36.089 11:38:05 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@1130 -- # xtrace_disable 00:07:36.089 11:38:05 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:07:36.089 11:38:06 event.event_scheduler -- scheduler/scheduler.sh@45 -- # trap - SIGINT SIGTERM EXIT 00:07:36.089 11:38:06 event.event_scheduler -- scheduler/scheduler.sh@46 -- # killprocess 72224 00:07:36.089 11:38:06 event.event_scheduler -- common/autotest_common.sh@954 -- # '[' -z 72224 ']' 00:07:36.089 11:38:06 event.event_scheduler -- common/autotest_common.sh@958 -- # kill -0 72224 00:07:36.089 11:38:06 event.event_scheduler -- common/autotest_common.sh@959 -- # uname 00:07:36.089 11:38:06 event.event_scheduler -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:07:36.089 11:38:06 event.event_scheduler -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 72224 00:07:36.089 killing process with pid 72224 00:07:36.089 11:38:06 event.event_scheduler -- common/autotest_common.sh@960 -- # process_name=reactor_2 00:07:36.089 11:38:06 event.event_scheduler -- common/autotest_common.sh@964 -- # '[' reactor_2 = sudo ']' 00:07:36.089 11:38:06 event.event_scheduler -- common/autotest_common.sh@972 -- # echo 'killing process with pid 72224' 00:07:36.089 11:38:06 event.event_scheduler -- common/autotest_common.sh@973 -- # kill 72224 00:07:36.089 11:38:06 event.event_scheduler -- common/autotest_common.sh@978 -- # wait 72224 00:07:36.348 [2024-11-28 11:38:06.467437] scheduler.c: 360:test_shutdown: *NOTICE*: Scheduler test application stopped. 
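The scheduler_create_thread subtest above drives everything through the scheduler_plugin RPC extensions. A condensed sketch of the same create/activate/delete sequence, assuming the plugin module is importable (scheduler.sh puts the test directory on PYTHONPATH) and treating the thread IDs from the log (11, 12) as run-specific:

```bash
#!/usr/bin/env bash
# Sketch only: condensed form of the scheduler_plugin RPC calls traced above.
# The scheduler test app must already be running and listening on /var/tmp/spdk.sock.
set -euo pipefail

RPC="/home/vagrant/spdk_repo/spdk/scripts/rpc.py --plugin scheduler_plugin"

# Pinned active (100% load) and idle (0% load) threads on cores 0-3.
for mask in 0x1 0x2 0x4 0x8; do
  $RPC scheduler_thread_create -n active_pinned -m "$mask" -a 100
  $RPC scheduler_thread_create -n idle_pinned   -m "$mask" -a 0
done

# Unpinned threads; the RPC prints the new thread id, which the test captures.
$RPC scheduler_thread_create -n one_third_active -a 30
tid=$($RPC scheduler_thread_create -n half_active -a 0)
$RPC scheduler_thread_set_active "$tid" 50

# Create one more thread and delete it, as the subtest does at the end.
tid=$($RPC scheduler_thread_create -n deleted -a 100)
$RPC scheduler_thread_delete "$tid"
```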
00:07:36.608 00:07:36.608 real 0m3.060s 00:07:36.608 user 0m3.859s 00:07:36.608 sys 0m0.357s 00:07:36.608 11:38:06 event.event_scheduler -- common/autotest_common.sh@1130 -- # xtrace_disable 00:07:36.608 ************************************ 00:07:36.608 END TEST event_scheduler 00:07:36.608 ************************************ 00:07:36.608 11:38:06 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:07:36.608 11:38:06 event -- event/event.sh@51 -- # modprobe -n nbd 00:07:36.608 11:38:06 event -- event/event.sh@52 -- # run_test app_repeat app_repeat_test 00:07:36.608 11:38:06 event -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:07:36.608 11:38:06 event -- common/autotest_common.sh@1111 -- # xtrace_disable 00:07:36.608 11:38:06 event -- common/autotest_common.sh@10 -- # set +x 00:07:36.608 ************************************ 00:07:36.608 START TEST app_repeat 00:07:36.608 ************************************ 00:07:36.608 11:38:06 event.app_repeat -- common/autotest_common.sh@1129 -- # app_repeat_test 00:07:36.608 11:38:06 event.app_repeat -- event/event.sh@12 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:07:36.608 11:38:06 event.app_repeat -- event/event.sh@13 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:07:36.608 11:38:06 event.app_repeat -- event/event.sh@13 -- # local nbd_list 00:07:36.608 11:38:06 event.app_repeat -- event/event.sh@14 -- # bdev_list=('Malloc0' 'Malloc1') 00:07:36.608 11:38:06 event.app_repeat -- event/event.sh@14 -- # local bdev_list 00:07:36.608 11:38:06 event.app_repeat -- event/event.sh@15 -- # local repeat_times=4 00:07:36.608 11:38:06 event.app_repeat -- event/event.sh@17 -- # modprobe nbd 00:07:36.608 11:38:06 event.app_repeat -- event/event.sh@19 -- # repeat_pid=72307 00:07:36.609 11:38:06 event.app_repeat -- event/event.sh@18 -- # /home/vagrant/spdk_repo/spdk/test/event/app_repeat/app_repeat -r /var/tmp/spdk-nbd.sock -m 0x3 -t 4 00:07:36.609 11:38:06 event.app_repeat -- event/event.sh@20 -- # trap 'killprocess $repeat_pid; exit 1' SIGINT SIGTERM EXIT 00:07:36.609 Process app_repeat pid: 72307 00:07:36.609 11:38:06 event.app_repeat -- event/event.sh@21 -- # echo 'Process app_repeat pid: 72307' 00:07:36.609 spdk_app_start Round 0 00:07:36.609 11:38:06 event.app_repeat -- event/event.sh@23 -- # for i in {0..2} 00:07:36.609 11:38:06 event.app_repeat -- event/event.sh@24 -- # echo 'spdk_app_start Round 0' 00:07:36.609 11:38:06 event.app_repeat -- event/event.sh@25 -- # waitforlisten 72307 /var/tmp/spdk-nbd.sock 00:07:36.609 11:38:06 event.app_repeat -- common/autotest_common.sh@835 -- # '[' -z 72307 ']' 00:07:36.609 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:07:36.609 11:38:06 event.app_repeat -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:07:36.609 11:38:06 event.app_repeat -- common/autotest_common.sh@840 -- # local max_retries=100 00:07:36.609 11:38:06 event.app_repeat -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:07:36.609 11:38:06 event.app_repeat -- common/autotest_common.sh@844 -- # xtrace_disable 00:07:36.609 11:38:06 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:07:36.867 [2024-11-28 11:38:06.743073] Starting SPDK v25.01-pre git sha1 35cd3e84d / DPDK 24.11.0-rc4 initialization... 
00:07:36.867 [2024-11-28 11:38:06.743213] [ DPDK EAL parameters: app_repeat --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid72307 ] 00:07:36.867 [2024-11-28 11:38:06.866445] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.11.0-rc4 is used. There is no support for it in SPDK. Enabled only for validation. 00:07:36.867 [2024-11-28 11:38:06.897242] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:07:36.867 [2024-11-28 11:38:06.949540] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:07:36.867 [2024-11-28 11:38:06.949554] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:37.128 [2024-11-28 11:38:07.017665] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:07:37.128 11:38:07 event.app_repeat -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:07:37.128 11:38:07 event.app_repeat -- common/autotest_common.sh@868 -- # return 0 00:07:37.128 11:38:07 event.app_repeat -- event/event.sh@27 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:07:37.403 Malloc0 00:07:37.403 11:38:07 event.app_repeat -- event/event.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:07:37.661 Malloc1 00:07:37.661 11:38:07 event.app_repeat -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:07:37.661 11:38:07 event.app_repeat -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:07:37.661 11:38:07 event.app_repeat -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1') 00:07:37.661 11:38:07 event.app_repeat -- bdev/nbd_common.sh@91 -- # local bdev_list 00:07:37.661 11:38:07 event.app_repeat -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:07:37.661 11:38:07 event.app_repeat -- bdev/nbd_common.sh@92 -- # local nbd_list 00:07:37.661 11:38:07 event.app_repeat -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:07:37.661 11:38:07 event.app_repeat -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:07:37.661 11:38:07 event.app_repeat -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1') 00:07:37.661 11:38:07 event.app_repeat -- bdev/nbd_common.sh@10 -- # local bdev_list 00:07:37.661 11:38:07 event.app_repeat -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:07:37.661 11:38:07 event.app_repeat -- bdev/nbd_common.sh@11 -- # local nbd_list 00:07:37.661 11:38:07 event.app_repeat -- bdev/nbd_common.sh@12 -- # local i 00:07:37.661 11:38:07 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:07:37.661 11:38:07 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:07:37.661 11:38:07 event.app_repeat -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0 00:07:37.919 /dev/nbd0 00:07:37.919 11:38:07 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:07:37.919 11:38:07 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:07:37.919 11:38:07 event.app_repeat -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:07:37.919 11:38:08 event.app_repeat -- common/autotest_common.sh@873 -- # local i 00:07:37.919 
11:38:08 event.app_repeat -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:07:37.919 11:38:08 event.app_repeat -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:07:37.919 11:38:08 event.app_repeat -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:07:37.919 11:38:08 event.app_repeat -- common/autotest_common.sh@877 -- # break 00:07:37.919 11:38:08 event.app_repeat -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:07:37.920 11:38:08 event.app_repeat -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:07:37.920 11:38:08 event.app_repeat -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:07:37.920 1+0 records in 00:07:37.920 1+0 records out 00:07:37.920 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000244031 s, 16.8 MB/s 00:07:37.920 11:38:08 event.app_repeat -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:07:37.920 11:38:08 event.app_repeat -- common/autotest_common.sh@890 -- # size=4096 00:07:37.920 11:38:08 event.app_repeat -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:07:37.920 11:38:08 event.app_repeat -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:07:37.920 11:38:08 event.app_repeat -- common/autotest_common.sh@893 -- # return 0 00:07:37.920 11:38:08 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:07:37.920 11:38:08 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:07:37.920 11:38:08 event.app_repeat -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1 00:07:38.178 /dev/nbd1 00:07:38.436 11:38:08 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:07:38.436 11:38:08 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:07:38.436 11:38:08 event.app_repeat -- common/autotest_common.sh@872 -- # local nbd_name=nbd1 00:07:38.436 11:38:08 event.app_repeat -- common/autotest_common.sh@873 -- # local i 00:07:38.436 11:38:08 event.app_repeat -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:07:38.436 11:38:08 event.app_repeat -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:07:38.436 11:38:08 event.app_repeat -- common/autotest_common.sh@876 -- # grep -q -w nbd1 /proc/partitions 00:07:38.436 11:38:08 event.app_repeat -- common/autotest_common.sh@877 -- # break 00:07:38.436 11:38:08 event.app_repeat -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:07:38.436 11:38:08 event.app_repeat -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:07:38.436 11:38:08 event.app_repeat -- common/autotest_common.sh@889 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:07:38.436 1+0 records in 00:07:38.436 1+0 records out 00:07:38.436 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000267975 s, 15.3 MB/s 00:07:38.436 11:38:08 event.app_repeat -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:07:38.436 11:38:08 event.app_repeat -- common/autotest_common.sh@890 -- # size=4096 00:07:38.436 11:38:08 event.app_repeat -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:07:38.436 11:38:08 event.app_repeat -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:07:38.436 11:38:08 event.app_repeat -- common/autotest_common.sh@893 -- # return 0 00:07:38.436 11:38:08 event.app_repeat -- bdev/nbd_common.sh@14 -- 
# (( i++ )) 00:07:38.436 11:38:08 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:07:38.436 11:38:08 event.app_repeat -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:07:38.436 11:38:08 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:07:38.436 11:38:08 event.app_repeat -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:07:38.695 11:38:08 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:07:38.695 { 00:07:38.695 "nbd_device": "/dev/nbd0", 00:07:38.695 "bdev_name": "Malloc0" 00:07:38.695 }, 00:07:38.695 { 00:07:38.695 "nbd_device": "/dev/nbd1", 00:07:38.695 "bdev_name": "Malloc1" 00:07:38.695 } 00:07:38.695 ]' 00:07:38.695 11:38:08 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:07:38.695 11:38:08 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[ 00:07:38.695 { 00:07:38.695 "nbd_device": "/dev/nbd0", 00:07:38.695 "bdev_name": "Malloc0" 00:07:38.695 }, 00:07:38.695 { 00:07:38.695 "nbd_device": "/dev/nbd1", 00:07:38.695 "bdev_name": "Malloc1" 00:07:38.695 } 00:07:38.695 ]' 00:07:38.695 11:38:08 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:07:38.695 /dev/nbd1' 00:07:38.695 11:38:08 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:07:38.695 11:38:08 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:07:38.695 /dev/nbd1' 00:07:38.695 11:38:08 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=2 00:07:38.695 11:38:08 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 2 00:07:38.695 11:38:08 event.app_repeat -- bdev/nbd_common.sh@95 -- # count=2 00:07:38.695 11:38:08 event.app_repeat -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']' 00:07:38.695 11:38:08 event.app_repeat -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write 00:07:38.695 11:38:08 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:07:38.695 11:38:08 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:07:38.695 11:38:08 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=write 00:07:38.695 11:38:08 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:07:38.695 11:38:08 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:07:38.695 11:38:08 event.app_repeat -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest bs=4096 count=256 00:07:38.695 256+0 records in 00:07:38.695 256+0 records out 00:07:38.695 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0111241 s, 94.3 MB/s 00:07:38.695 11:38:08 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:07:38.695 11:38:08 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:07:38.695 256+0 records in 00:07:38.695 256+0 records out 00:07:38.695 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0242518 s, 43.2 MB/s 00:07:38.695 11:38:08 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:07:38.695 11:38:08 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:07:38.695 256+0 records in 00:07:38.695 256+0 records out 00:07:38.695 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.026972 s, 38.9 MB/s 00:07:38.695 11:38:08 
event.app_repeat -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify 00:07:38.695 11:38:08 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:07:38.695 11:38:08 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:07:38.695 11:38:08 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=verify 00:07:38.695 11:38:08 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:07:38.695 11:38:08 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:07:38.695 11:38:08 event.app_repeat -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:07:38.695 11:38:08 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:07:38.695 11:38:08 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd0 00:07:38.695 11:38:08 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:07:38.695 11:38:08 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd1 00:07:38.695 11:38:08 event.app_repeat -- bdev/nbd_common.sh@85 -- # rm /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:07:38.695 11:38:08 event.app_repeat -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:07:38.695 11:38:08 event.app_repeat -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:07:38.695 11:38:08 event.app_repeat -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:07:38.695 11:38:08 event.app_repeat -- bdev/nbd_common.sh@50 -- # local nbd_list 00:07:38.695 11:38:08 event.app_repeat -- bdev/nbd_common.sh@51 -- # local i 00:07:38.695 11:38:08 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:07:38.695 11:38:08 event.app_repeat -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:07:39.261 11:38:09 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:07:39.261 11:38:09 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:07:39.261 11:38:09 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:07:39.261 11:38:09 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:07:39.261 11:38:09 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:07:39.261 11:38:09 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:07:39.261 11:38:09 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:07:39.261 11:38:09 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:07:39.261 11:38:09 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:07:39.261 11:38:09 event.app_repeat -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:07:39.518 11:38:09 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:07:39.518 11:38:09 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:07:39.518 11:38:09 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:07:39.518 11:38:09 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:07:39.518 11:38:09 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:07:39.518 11:38:09 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:07:39.518 
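The write/verify pass above is nbd_common.sh's nbd_dd_data_verify: 1 MiB of random data is written through each NBD device with O_DIRECT and then compared back byte-for-byte. A standalone sketch of that pattern, assuming /dev/nbd0 and /dev/nbd1 are already connected as in this log:

```bash
#!/usr/bin/env bash
# Sketch only: the dd/cmp round trip performed by nbd_dd_data_verify above.
set -euo pipefail

tmp=$(mktemp)
trap 'rm -f "$tmp"' EXIT

# 1 MiB of random data (256 x 4 KiB blocks, matching the counts in the log).
dd if=/dev/urandom of="$tmp" bs=4096 count=256

for nbd in /dev/nbd0 /dev/nbd1; do
  # Write the pattern through the NBD device with O_DIRECT...
  dd if="$tmp" of="$nbd" bs=4096 count=256 oflag=direct
  # ...then compare the first 1 MiB of the device against the source file.
  cmp -b -n 1M "$tmp" "$nbd"
done
```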
11:38:09 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:07:39.518 11:38:09 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:07:39.518 11:38:09 event.app_repeat -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:07:39.518 11:38:09 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:07:39.518 11:38:09 event.app_repeat -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:07:39.777 11:38:09 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:07:39.777 11:38:09 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[]' 00:07:39.777 11:38:09 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:07:39.777 11:38:09 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:07:39.777 11:38:09 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '' 00:07:39.777 11:38:09 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:07:39.777 11:38:09 event.app_repeat -- bdev/nbd_common.sh@65 -- # true 00:07:39.777 11:38:09 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=0 00:07:39.777 11:38:09 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 0 00:07:39.777 11:38:09 event.app_repeat -- bdev/nbd_common.sh@104 -- # count=0 00:07:39.777 11:38:09 event.app_repeat -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:07:39.777 11:38:09 event.app_repeat -- bdev/nbd_common.sh@109 -- # return 0 00:07:39.777 11:38:09 event.app_repeat -- event/event.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM 00:07:40.035 11:38:10 event.app_repeat -- event/event.sh@35 -- # sleep 3 00:07:40.293 [2024-11-28 11:38:10.230131] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:07:40.293 [2024-11-28 11:38:10.269818] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:07:40.293 [2024-11-28 11:38:10.269827] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:40.293 [2024-11-28 11:38:10.327336] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:07:40.293 [2024-11-28 11:38:10.327447] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered. 00:07:40.293 [2024-11-28 11:38:10.327464] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered. 00:07:43.580 spdk_app_start Round 1 00:07:43.580 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:07:43.580 11:38:13 event.app_repeat -- event/event.sh@23 -- # for i in {0..2} 00:07:43.580 11:38:13 event.app_repeat -- event/event.sh@24 -- # echo 'spdk_app_start Round 1' 00:07:43.581 11:38:13 event.app_repeat -- event/event.sh@25 -- # waitforlisten 72307 /var/tmp/spdk-nbd.sock 00:07:43.581 11:38:13 event.app_repeat -- common/autotest_common.sh@835 -- # '[' -z 72307 ']' 00:07:43.581 11:38:13 event.app_repeat -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:07:43.581 11:38:13 event.app_repeat -- common/autotest_common.sh@840 -- # local max_retries=100 00:07:43.581 11:38:13 event.app_repeat -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 
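Each app_repeat round repeats the same RPC lifecycle against the app's private /var/tmp/spdk-nbd.sock socket: create two malloc bdevs, export them over NBD, verify the data path (see the dd/cmp sketch above), then tear everything down. A minimal sketch of one round using only the calls visible in this log:

```bash
#!/usr/bin/env bash
# Sketch only: one app_repeat round's RPC lifecycle, as traced in this log.
set -euo pipefail

RPC="/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock"

# Two 64 MB malloc bdevs with 4 KiB blocks; the RPC prints the bdev names
# (Malloc0 and Malloc1 in this run).
$RPC bdev_malloc_create 64 4096
$RPC bdev_malloc_create 64 4096

# Export both bdevs over NBD and confirm the mappings.
$RPC nbd_start_disk Malloc0 /dev/nbd0
$RPC nbd_start_disk Malloc1 /dev/nbd1
$RPC nbd_get_disks | jq -r '.[] | .nbd_device'

# ... data verification happens here ...

# Tear down: disconnect both devices and check that none remain.
$RPC nbd_stop_disk /dev/nbd0
$RPC nbd_stop_disk /dev/nbd1
count=$($RPC nbd_get_disks | jq -r '.[] | .nbd_device' | grep -c /dev/nbd || true)
echo "remaining NBD devices: $count"   # expected 0

# The repeat harness then terminates the app between rounds.
$RPC spdk_kill_instance SIGTERM
```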
00:07:43.581 11:38:13 event.app_repeat -- common/autotest_common.sh@844 -- # xtrace_disable 00:07:43.581 11:38:13 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:07:43.581 11:38:13 event.app_repeat -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:07:43.581 11:38:13 event.app_repeat -- common/autotest_common.sh@868 -- # return 0 00:07:43.581 11:38:13 event.app_repeat -- event/event.sh@27 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:07:43.581 Malloc0 00:07:43.581 11:38:13 event.app_repeat -- event/event.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:07:43.840 Malloc1 00:07:43.840 11:38:13 event.app_repeat -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:07:43.840 11:38:13 event.app_repeat -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:07:43.840 11:38:13 event.app_repeat -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1') 00:07:43.840 11:38:13 event.app_repeat -- bdev/nbd_common.sh@91 -- # local bdev_list 00:07:43.840 11:38:13 event.app_repeat -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:07:43.840 11:38:13 event.app_repeat -- bdev/nbd_common.sh@92 -- # local nbd_list 00:07:43.840 11:38:13 event.app_repeat -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:07:43.840 11:38:13 event.app_repeat -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:07:43.840 11:38:13 event.app_repeat -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1') 00:07:43.840 11:38:13 event.app_repeat -- bdev/nbd_common.sh@10 -- # local bdev_list 00:07:43.840 11:38:13 event.app_repeat -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:07:43.840 11:38:13 event.app_repeat -- bdev/nbd_common.sh@11 -- # local nbd_list 00:07:43.840 11:38:13 event.app_repeat -- bdev/nbd_common.sh@12 -- # local i 00:07:43.840 11:38:13 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:07:43.840 11:38:13 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:07:43.840 11:38:13 event.app_repeat -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0 00:07:44.099 /dev/nbd0 00:07:44.099 11:38:14 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:07:44.099 11:38:14 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:07:44.099 11:38:14 event.app_repeat -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:07:44.099 11:38:14 event.app_repeat -- common/autotest_common.sh@873 -- # local i 00:07:44.099 11:38:14 event.app_repeat -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:07:44.099 11:38:14 event.app_repeat -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:07:44.099 11:38:14 event.app_repeat -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:07:44.099 11:38:14 event.app_repeat -- common/autotest_common.sh@877 -- # break 00:07:44.099 11:38:14 event.app_repeat -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:07:44.099 11:38:14 event.app_repeat -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:07:44.099 11:38:14 event.app_repeat -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:07:44.099 1+0 records in 00:07:44.099 1+0 records out 
00:07:44.099 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000511201 s, 8.0 MB/s 00:07:44.099 11:38:14 event.app_repeat -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:07:44.357 11:38:14 event.app_repeat -- common/autotest_common.sh@890 -- # size=4096 00:07:44.357 11:38:14 event.app_repeat -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:07:44.357 11:38:14 event.app_repeat -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:07:44.357 11:38:14 event.app_repeat -- common/autotest_common.sh@893 -- # return 0 00:07:44.357 11:38:14 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:07:44.357 11:38:14 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:07:44.357 11:38:14 event.app_repeat -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1 00:07:44.618 /dev/nbd1 00:07:44.618 11:38:14 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:07:44.618 11:38:14 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:07:44.618 11:38:14 event.app_repeat -- common/autotest_common.sh@872 -- # local nbd_name=nbd1 00:07:44.618 11:38:14 event.app_repeat -- common/autotest_common.sh@873 -- # local i 00:07:44.618 11:38:14 event.app_repeat -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:07:44.618 11:38:14 event.app_repeat -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:07:44.618 11:38:14 event.app_repeat -- common/autotest_common.sh@876 -- # grep -q -w nbd1 /proc/partitions 00:07:44.618 11:38:14 event.app_repeat -- common/autotest_common.sh@877 -- # break 00:07:44.618 11:38:14 event.app_repeat -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:07:44.618 11:38:14 event.app_repeat -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:07:44.618 11:38:14 event.app_repeat -- common/autotest_common.sh@889 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:07:44.618 1+0 records in 00:07:44.618 1+0 records out 00:07:44.618 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000247407 s, 16.6 MB/s 00:07:44.618 11:38:14 event.app_repeat -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:07:44.618 11:38:14 event.app_repeat -- common/autotest_common.sh@890 -- # size=4096 00:07:44.618 11:38:14 event.app_repeat -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:07:44.618 11:38:14 event.app_repeat -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:07:44.618 11:38:14 event.app_repeat -- common/autotest_common.sh@893 -- # return 0 00:07:44.618 11:38:14 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:07:44.618 11:38:14 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:07:44.618 11:38:14 event.app_repeat -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:07:44.618 11:38:14 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:07:44.618 11:38:14 event.app_repeat -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:07:44.878 11:38:14 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:07:44.878 { 00:07:44.878 "nbd_device": "/dev/nbd0", 00:07:44.878 "bdev_name": "Malloc0" 00:07:44.878 }, 00:07:44.878 { 00:07:44.878 "nbd_device": "/dev/nbd1", 00:07:44.878 "bdev_name": "Malloc1" 00:07:44.878 } 
00:07:44.878 ]' 00:07:44.878 11:38:14 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[ 00:07:44.878 { 00:07:44.878 "nbd_device": "/dev/nbd0", 00:07:44.878 "bdev_name": "Malloc0" 00:07:44.878 }, 00:07:44.878 { 00:07:44.878 "nbd_device": "/dev/nbd1", 00:07:44.878 "bdev_name": "Malloc1" 00:07:44.878 } 00:07:44.878 ]' 00:07:44.878 11:38:14 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:07:44.878 11:38:14 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:07:44.878 /dev/nbd1' 00:07:44.878 11:38:14 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:07:44.878 /dev/nbd1' 00:07:44.878 11:38:14 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:07:44.878 11:38:14 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=2 00:07:44.878 11:38:14 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 2 00:07:44.878 11:38:14 event.app_repeat -- bdev/nbd_common.sh@95 -- # count=2 00:07:44.878 11:38:14 event.app_repeat -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']' 00:07:44.878 11:38:14 event.app_repeat -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write 00:07:44.878 11:38:14 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:07:44.878 11:38:14 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:07:44.878 11:38:14 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=write 00:07:44.878 11:38:14 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:07:44.878 11:38:14 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:07:44.878 11:38:14 event.app_repeat -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest bs=4096 count=256 00:07:44.878 256+0 records in 00:07:44.878 256+0 records out 00:07:44.878 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0105726 s, 99.2 MB/s 00:07:44.878 11:38:14 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:07:44.878 11:38:14 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:07:44.878 256+0 records in 00:07:44.878 256+0 records out 00:07:44.878 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0237145 s, 44.2 MB/s 00:07:44.878 11:38:14 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:07:44.878 11:38:14 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:07:44.878 256+0 records in 00:07:44.878 256+0 records out 00:07:44.878 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0341213 s, 30.7 MB/s 00:07:44.878 11:38:14 event.app_repeat -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify 00:07:44.878 11:38:14 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:07:44.878 11:38:14 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:07:44.878 11:38:14 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=verify 00:07:44.878 11:38:14 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:07:44.878 11:38:14 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:07:44.878 11:38:14 event.app_repeat -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:07:44.878 11:38:14 event.app_repeat -- 
bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:07:44.878 11:38:14 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd0 00:07:44.878 11:38:14 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:07:44.878 11:38:14 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd1 00:07:44.878 11:38:14 event.app_repeat -- bdev/nbd_common.sh@85 -- # rm /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:07:44.878 11:38:14 event.app_repeat -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:07:44.878 11:38:14 event.app_repeat -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:07:44.878 11:38:14 event.app_repeat -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:07:44.878 11:38:14 event.app_repeat -- bdev/nbd_common.sh@50 -- # local nbd_list 00:07:44.878 11:38:14 event.app_repeat -- bdev/nbd_common.sh@51 -- # local i 00:07:44.878 11:38:14 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:07:44.878 11:38:14 event.app_repeat -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:07:45.137 11:38:15 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:07:45.137 11:38:15 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:07:45.137 11:38:15 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:07:45.137 11:38:15 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:07:45.137 11:38:15 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:07:45.137 11:38:15 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:07:45.137 11:38:15 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:07:45.137 11:38:15 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:07:45.137 11:38:15 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:07:45.137 11:38:15 event.app_repeat -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:07:45.396 11:38:15 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:07:45.396 11:38:15 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:07:45.396 11:38:15 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:07:45.396 11:38:15 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:07:45.396 11:38:15 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:07:45.396 11:38:15 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:07:45.396 11:38:15 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:07:45.396 11:38:15 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:07:45.396 11:38:15 event.app_repeat -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:07:45.396 11:38:15 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:07:45.396 11:38:15 event.app_repeat -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:07:45.966 11:38:15 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:07:45.966 11:38:15 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[]' 00:07:45.966 11:38:15 event.app_repeat -- bdev/nbd_common.sh@64 
-- # jq -r '.[] | .nbd_device' 00:07:45.966 11:38:15 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:07:45.966 11:38:15 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '' 00:07:45.966 11:38:15 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:07:45.966 11:38:15 event.app_repeat -- bdev/nbd_common.sh@65 -- # true 00:07:45.966 11:38:15 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=0 00:07:45.966 11:38:15 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 0 00:07:45.966 11:38:15 event.app_repeat -- bdev/nbd_common.sh@104 -- # count=0 00:07:45.966 11:38:15 event.app_repeat -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:07:45.966 11:38:15 event.app_repeat -- bdev/nbd_common.sh@109 -- # return 0 00:07:45.966 11:38:15 event.app_repeat -- event/event.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM 00:07:46.227 11:38:16 event.app_repeat -- event/event.sh@35 -- # sleep 3 00:07:46.485 [2024-11-28 11:38:16.359929] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:07:46.485 [2024-11-28 11:38:16.404202] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:07:46.485 [2024-11-28 11:38:16.404212] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:46.485 [2024-11-28 11:38:16.461605] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:07:46.485 [2024-11-28 11:38:16.461725] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered. 00:07:46.485 [2024-11-28 11:38:16.461749] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered. 00:07:49.774 spdk_app_start Round 2 00:07:49.774 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:07:49.774 11:38:19 event.app_repeat -- event/event.sh@23 -- # for i in {0..2} 00:07:49.774 11:38:19 event.app_repeat -- event/event.sh@24 -- # echo 'spdk_app_start Round 2' 00:07:49.774 11:38:19 event.app_repeat -- event/event.sh@25 -- # waitforlisten 72307 /var/tmp/spdk-nbd.sock 00:07:49.774 11:38:19 event.app_repeat -- common/autotest_common.sh@835 -- # '[' -z 72307 ']' 00:07:49.774 11:38:19 event.app_repeat -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:07:49.774 11:38:19 event.app_repeat -- common/autotest_common.sh@840 -- # local max_retries=100 00:07:49.774 11:38:19 event.app_repeat -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 
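Each time a device is attached in these rounds, the trace shows the same readiness check before any data is trusted: poll /proc/partitions for the nbd name (up to 20 tries), then do a single 4 KiB O_DIRECT read to prove the device answers. A rough, self-contained sketch of that waitfornbd-style helper; the retry cap and the grep/dd/stat calls mirror the trace, while the sleep interval and scratch path are assumptions.

# Sketch of the nbd readiness check; /tmp/nbdtest and the 0.1 s back-off are illustrative.
waitfornbd() {
    local nbd_name=$1 i size
    for ((i = 1; i <= 20; i++)); do
        grep -q -w "$nbd_name" /proc/partitions && break
        sleep 0.1
    done
    # One direct 4 KiB read confirms the block device actually services I/O.
    dd if="/dev/$nbd_name" of=/tmp/nbdtest bs=4096 count=1 iflag=direct
    size=$(stat -c %s /tmp/nbdtest)
    rm -f /tmp/nbdtest
    [ "$size" -ne 0 ]
}

waitfornbd nbd0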
00:07:49.774 11:38:19 event.app_repeat -- common/autotest_common.sh@844 -- # xtrace_disable 00:07:49.774 11:38:19 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:07:49.774 11:38:19 event.app_repeat -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:07:49.774 11:38:19 event.app_repeat -- common/autotest_common.sh@868 -- # return 0 00:07:49.774 11:38:19 event.app_repeat -- event/event.sh@27 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:07:49.774 Malloc0 00:07:49.774 11:38:19 event.app_repeat -- event/event.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:07:50.033 Malloc1 00:07:50.292 11:38:20 event.app_repeat -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:07:50.292 11:38:20 event.app_repeat -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:07:50.292 11:38:20 event.app_repeat -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1') 00:07:50.292 11:38:20 event.app_repeat -- bdev/nbd_common.sh@91 -- # local bdev_list 00:07:50.292 11:38:20 event.app_repeat -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:07:50.292 11:38:20 event.app_repeat -- bdev/nbd_common.sh@92 -- # local nbd_list 00:07:50.292 11:38:20 event.app_repeat -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:07:50.292 11:38:20 event.app_repeat -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:07:50.292 11:38:20 event.app_repeat -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1') 00:07:50.292 11:38:20 event.app_repeat -- bdev/nbd_common.sh@10 -- # local bdev_list 00:07:50.292 11:38:20 event.app_repeat -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:07:50.292 11:38:20 event.app_repeat -- bdev/nbd_common.sh@11 -- # local nbd_list 00:07:50.292 11:38:20 event.app_repeat -- bdev/nbd_common.sh@12 -- # local i 00:07:50.292 11:38:20 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:07:50.292 11:38:20 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:07:50.292 11:38:20 event.app_repeat -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0 00:07:50.550 /dev/nbd0 00:07:50.550 11:38:20 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:07:50.550 11:38:20 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:07:50.550 11:38:20 event.app_repeat -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:07:50.550 11:38:20 event.app_repeat -- common/autotest_common.sh@873 -- # local i 00:07:50.550 11:38:20 event.app_repeat -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:07:50.550 11:38:20 event.app_repeat -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:07:50.550 11:38:20 event.app_repeat -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:07:50.550 11:38:20 event.app_repeat -- common/autotest_common.sh@877 -- # break 00:07:50.550 11:38:20 event.app_repeat -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:07:50.550 11:38:20 event.app_repeat -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:07:50.550 11:38:20 event.app_repeat -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:07:50.550 1+0 records in 00:07:50.550 1+0 records out 
00:07:50.550 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000215585 s, 19.0 MB/s 00:07:50.550 11:38:20 event.app_repeat -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:07:50.550 11:38:20 event.app_repeat -- common/autotest_common.sh@890 -- # size=4096 00:07:50.550 11:38:20 event.app_repeat -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:07:50.550 11:38:20 event.app_repeat -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:07:50.550 11:38:20 event.app_repeat -- common/autotest_common.sh@893 -- # return 0 00:07:50.550 11:38:20 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:07:50.550 11:38:20 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:07:50.550 11:38:20 event.app_repeat -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1 00:07:50.809 /dev/nbd1 00:07:50.809 11:38:20 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:07:50.809 11:38:20 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:07:50.809 11:38:20 event.app_repeat -- common/autotest_common.sh@872 -- # local nbd_name=nbd1 00:07:50.809 11:38:20 event.app_repeat -- common/autotest_common.sh@873 -- # local i 00:07:50.809 11:38:20 event.app_repeat -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:07:50.809 11:38:20 event.app_repeat -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:07:50.809 11:38:20 event.app_repeat -- common/autotest_common.sh@876 -- # grep -q -w nbd1 /proc/partitions 00:07:50.809 11:38:20 event.app_repeat -- common/autotest_common.sh@877 -- # break 00:07:50.809 11:38:20 event.app_repeat -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:07:50.809 11:38:20 event.app_repeat -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:07:50.809 11:38:20 event.app_repeat -- common/autotest_common.sh@889 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:07:50.809 1+0 records in 00:07:50.809 1+0 records out 00:07:50.809 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000325089 s, 12.6 MB/s 00:07:50.809 11:38:20 event.app_repeat -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:07:50.809 11:38:20 event.app_repeat -- common/autotest_common.sh@890 -- # size=4096 00:07:50.809 11:38:20 event.app_repeat -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:07:50.809 11:38:20 event.app_repeat -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:07:50.809 11:38:20 event.app_repeat -- common/autotest_common.sh@893 -- # return 0 00:07:50.809 11:38:20 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:07:50.809 11:38:20 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:07:50.809 11:38:20 event.app_repeat -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:07:50.809 11:38:20 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:07:50.809 11:38:20 event.app_repeat -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:07:51.068 11:38:21 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:07:51.068 { 00:07:51.068 "nbd_device": "/dev/nbd0", 00:07:51.068 "bdev_name": "Malloc0" 00:07:51.068 }, 00:07:51.068 { 00:07:51.068 "nbd_device": "/dev/nbd1", 00:07:51.068 "bdev_name": "Malloc1" 00:07:51.068 } 
00:07:51.068 ]' 00:07:51.068 11:38:21 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:07:51.068 11:38:21 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[ 00:07:51.068 { 00:07:51.068 "nbd_device": "/dev/nbd0", 00:07:51.068 "bdev_name": "Malloc0" 00:07:51.068 }, 00:07:51.068 { 00:07:51.068 "nbd_device": "/dev/nbd1", 00:07:51.068 "bdev_name": "Malloc1" 00:07:51.068 } 00:07:51.068 ]' 00:07:51.068 11:38:21 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:07:51.068 /dev/nbd1' 00:07:51.068 11:38:21 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:07:51.068 /dev/nbd1' 00:07:51.068 11:38:21 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:07:51.068 11:38:21 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=2 00:07:51.068 11:38:21 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 2 00:07:51.068 11:38:21 event.app_repeat -- bdev/nbd_common.sh@95 -- # count=2 00:07:51.068 11:38:21 event.app_repeat -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']' 00:07:51.068 11:38:21 event.app_repeat -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write 00:07:51.068 11:38:21 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:07:51.068 11:38:21 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:07:51.068 11:38:21 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=write 00:07:51.068 11:38:21 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:07:51.068 11:38:21 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:07:51.068 11:38:21 event.app_repeat -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest bs=4096 count=256 00:07:51.068 256+0 records in 00:07:51.068 256+0 records out 00:07:51.068 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0115258 s, 91.0 MB/s 00:07:51.068 11:38:21 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:07:51.068 11:38:21 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:07:51.327 256+0 records in 00:07:51.327 256+0 records out 00:07:51.327 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0253382 s, 41.4 MB/s 00:07:51.327 11:38:21 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:07:51.327 11:38:21 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:07:51.327 256+0 records in 00:07:51.327 256+0 records out 00:07:51.327 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0273875 s, 38.3 MB/s 00:07:51.327 11:38:21 event.app_repeat -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify 00:07:51.327 11:38:21 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:07:51.327 11:38:21 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:07:51.327 11:38:21 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=verify 00:07:51.327 11:38:21 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:07:51.327 11:38:21 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:07:51.327 11:38:21 event.app_repeat -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:07:51.327 11:38:21 event.app_repeat -- 
bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:07:51.327 11:38:21 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd0 00:07:51.327 11:38:21 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:07:51.327 11:38:21 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd1 00:07:51.327 11:38:21 event.app_repeat -- bdev/nbd_common.sh@85 -- # rm /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:07:51.327 11:38:21 event.app_repeat -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:07:51.327 11:38:21 event.app_repeat -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:07:51.327 11:38:21 event.app_repeat -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:07:51.327 11:38:21 event.app_repeat -- bdev/nbd_common.sh@50 -- # local nbd_list 00:07:51.327 11:38:21 event.app_repeat -- bdev/nbd_common.sh@51 -- # local i 00:07:51.327 11:38:21 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:07:51.327 11:38:21 event.app_repeat -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:07:51.611 11:38:21 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:07:51.611 11:38:21 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:07:51.611 11:38:21 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:07:51.611 11:38:21 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:07:51.611 11:38:21 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:07:51.611 11:38:21 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:07:51.611 11:38:21 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:07:51.611 11:38:21 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:07:51.611 11:38:21 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:07:51.611 11:38:21 event.app_repeat -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:07:51.870 11:38:21 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:07:51.870 11:38:21 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:07:51.870 11:38:21 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:07:51.870 11:38:21 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:07:51.870 11:38:21 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:07:51.870 11:38:21 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:07:51.870 11:38:21 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:07:51.870 11:38:21 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:07:51.870 11:38:21 event.app_repeat -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:07:51.870 11:38:21 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:07:51.870 11:38:21 event.app_repeat -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:07:52.130 11:38:22 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:07:52.130 11:38:22 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[]' 00:07:52.130 11:38:22 event.app_repeat -- bdev/nbd_common.sh@64 
-- # jq -r '.[] | .nbd_device' 00:07:52.130 11:38:22 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:07:52.130 11:38:22 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '' 00:07:52.130 11:38:22 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:07:52.130 11:38:22 event.app_repeat -- bdev/nbd_common.sh@65 -- # true 00:07:52.130 11:38:22 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=0 00:07:52.130 11:38:22 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 0 00:07:52.130 11:38:22 event.app_repeat -- bdev/nbd_common.sh@104 -- # count=0 00:07:52.130 11:38:22 event.app_repeat -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:07:52.130 11:38:22 event.app_repeat -- bdev/nbd_common.sh@109 -- # return 0 00:07:52.130 11:38:22 event.app_repeat -- event/event.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM 00:07:52.698 11:38:22 event.app_repeat -- event/event.sh@35 -- # sleep 3 00:07:52.698 [2024-11-28 11:38:22.678127] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:07:52.698 [2024-11-28 11:38:22.723333] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:52.698 [2024-11-28 11:38:22.723335] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:07:52.698 [2024-11-28 11:38:22.781514] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:07:52.698 [2024-11-28 11:38:22.781586] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered. 00:07:52.698 [2024-11-28 11:38:22.781602] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered. 00:07:55.987 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:07:55.987 11:38:25 event.app_repeat -- event/event.sh@38 -- # waitforlisten 72307 /var/tmp/spdk-nbd.sock 00:07:55.987 11:38:25 event.app_repeat -- common/autotest_common.sh@835 -- # '[' -z 72307 ']' 00:07:55.987 11:38:25 event.app_repeat -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:07:55.987 11:38:25 event.app_repeat -- common/autotest_common.sh@840 -- # local max_retries=100 00:07:55.987 11:38:25 event.app_repeat -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 
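With the third repeat finished and the target killed again, the harness now waits for the app's final come-up (event.sh@38) before tearing everything down. Stripped of the per-round detail, the app_repeat control flow visible in this trace is roughly the following; waitforlisten and killprocess are the autotest_common.sh helpers named in the log, while app_pid and exercise_nbd are placeholders for the pid under test and the malloc/NBD checks shown above.

rpc='scripts/rpc.py -s /var/tmp/spdk-nbd.sock'

for round in 0 1 2; do
    echo "spdk_app_start Round $round"
    waitforlisten "$app_pid" /var/tmp/spdk-nbd.sock   # wait for the RPC socket
    exercise_nbd                                      # placeholder: bdev_malloc_create + dd/cmp verify
    $rpc spdk_kill_instance SIGTERM                   # app comes back up for the next round
    sleep 3
done

waitforlisten "$app_pid" /var/tmp/spdk-nbd.sock       # final come-up
killprocess "$app_pid"                                # final teardown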
00:07:55.987 11:38:25 event.app_repeat -- common/autotest_common.sh@844 -- # xtrace_disable 00:07:55.987 11:38:25 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:07:55.987 11:38:25 event.app_repeat -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:07:55.987 11:38:25 event.app_repeat -- common/autotest_common.sh@868 -- # return 0 00:07:55.987 11:38:25 event.app_repeat -- event/event.sh@39 -- # killprocess 72307 00:07:55.987 11:38:25 event.app_repeat -- common/autotest_common.sh@954 -- # '[' -z 72307 ']' 00:07:55.987 11:38:25 event.app_repeat -- common/autotest_common.sh@958 -- # kill -0 72307 00:07:55.988 11:38:25 event.app_repeat -- common/autotest_common.sh@959 -- # uname 00:07:55.988 11:38:25 event.app_repeat -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:07:55.988 11:38:25 event.app_repeat -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 72307 00:07:55.988 killing process with pid 72307 00:07:55.988 11:38:25 event.app_repeat -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:07:55.988 11:38:25 event.app_repeat -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:07:55.988 11:38:25 event.app_repeat -- common/autotest_common.sh@972 -- # echo 'killing process with pid 72307' 00:07:55.988 11:38:25 event.app_repeat -- common/autotest_common.sh@973 -- # kill 72307 00:07:55.988 11:38:25 event.app_repeat -- common/autotest_common.sh@978 -- # wait 72307 00:07:55.988 spdk_app_start is called in Round 0. 00:07:55.988 Shutdown signal received, stop current app iteration 00:07:55.988 Starting SPDK v25.01-pre git sha1 35cd3e84d / DPDK 24.11.0-rc4 reinitialization... 00:07:55.988 spdk_app_start is called in Round 1. 00:07:55.988 Shutdown signal received, stop current app iteration 00:07:55.988 Starting SPDK v25.01-pre git sha1 35cd3e84d / DPDK 24.11.0-rc4 reinitialization... 00:07:55.988 spdk_app_start is called in Round 2. 00:07:55.988 Shutdown signal received, stop current app iteration 00:07:55.988 Starting SPDK v25.01-pre git sha1 35cd3e84d / DPDK 24.11.0-rc4 reinitialization... 00:07:55.988 spdk_app_start is called in Round 3. 00:07:55.988 Shutdown signal received, stop current app iteration 00:07:55.988 ************************************ 00:07:55.988 END TEST app_repeat 00:07:55.988 ************************************ 00:07:55.988 11:38:26 event.app_repeat -- event/event.sh@40 -- # trap - SIGINT SIGTERM EXIT 00:07:55.988 11:38:26 event.app_repeat -- event/event.sh@42 -- # return 0 00:07:55.988 00:07:55.988 real 0m19.317s 00:07:55.988 user 0m44.152s 00:07:55.988 sys 0m2.920s 00:07:55.988 11:38:26 event.app_repeat -- common/autotest_common.sh@1130 -- # xtrace_disable 00:07:55.988 11:38:26 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:07:55.988 11:38:26 event -- event/event.sh@54 -- # (( SPDK_TEST_CRYPTO == 0 )) 00:07:55.988 11:38:26 event -- event/event.sh@55 -- # run_test cpu_locks /home/vagrant/spdk_repo/spdk/test/event/cpu_locks.sh 00:07:55.988 11:38:26 event -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:07:55.988 11:38:26 event -- common/autotest_common.sh@1111 -- # xtrace_disable 00:07:55.988 11:38:26 event -- common/autotest_common.sh@10 -- # set +x 00:07:55.988 ************************************ 00:07:55.988 START TEST cpu_locks 00:07:55.988 ************************************ 00:07:55.988 11:38:26 event.cpu_locks -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/event/cpu_locks.sh 00:07:56.248 * Looking for test storage... 
00:07:56.248 * Found test storage at /home/vagrant/spdk_repo/spdk/test/event 00:07:56.248 11:38:26 event.cpu_locks -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:07:56.248 11:38:26 event.cpu_locks -- common/autotest_common.sh@1693 -- # lcov --version 00:07:56.248 11:38:26 event.cpu_locks -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:07:56.248 11:38:26 event.cpu_locks -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:07:56.248 11:38:26 event.cpu_locks -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:07:56.248 11:38:26 event.cpu_locks -- scripts/common.sh@333 -- # local ver1 ver1_l 00:07:56.248 11:38:26 event.cpu_locks -- scripts/common.sh@334 -- # local ver2 ver2_l 00:07:56.248 11:38:26 event.cpu_locks -- scripts/common.sh@336 -- # IFS=.-: 00:07:56.248 11:38:26 event.cpu_locks -- scripts/common.sh@336 -- # read -ra ver1 00:07:56.248 11:38:26 event.cpu_locks -- scripts/common.sh@337 -- # IFS=.-: 00:07:56.248 11:38:26 event.cpu_locks -- scripts/common.sh@337 -- # read -ra ver2 00:07:56.248 11:38:26 event.cpu_locks -- scripts/common.sh@338 -- # local 'op=<' 00:07:56.248 11:38:26 event.cpu_locks -- scripts/common.sh@340 -- # ver1_l=2 00:07:56.248 11:38:26 event.cpu_locks -- scripts/common.sh@341 -- # ver2_l=1 00:07:56.248 11:38:26 event.cpu_locks -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:07:56.248 11:38:26 event.cpu_locks -- scripts/common.sh@344 -- # case "$op" in 00:07:56.248 11:38:26 event.cpu_locks -- scripts/common.sh@345 -- # : 1 00:07:56.248 11:38:26 event.cpu_locks -- scripts/common.sh@364 -- # (( v = 0 )) 00:07:56.248 11:38:26 event.cpu_locks -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:07:56.248 11:38:26 event.cpu_locks -- scripts/common.sh@365 -- # decimal 1 00:07:56.248 11:38:26 event.cpu_locks -- scripts/common.sh@353 -- # local d=1 00:07:56.248 11:38:26 event.cpu_locks -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:07:56.248 11:38:26 event.cpu_locks -- scripts/common.sh@355 -- # echo 1 00:07:56.248 11:38:26 event.cpu_locks -- scripts/common.sh@365 -- # ver1[v]=1 00:07:56.248 11:38:26 event.cpu_locks -- scripts/common.sh@366 -- # decimal 2 00:07:56.248 11:38:26 event.cpu_locks -- scripts/common.sh@353 -- # local d=2 00:07:56.248 11:38:26 event.cpu_locks -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:07:56.248 11:38:26 event.cpu_locks -- scripts/common.sh@355 -- # echo 2 00:07:56.248 11:38:26 event.cpu_locks -- scripts/common.sh@366 -- # ver2[v]=2 00:07:56.248 11:38:26 event.cpu_locks -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:07:56.248 11:38:26 event.cpu_locks -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:07:56.248 11:38:26 event.cpu_locks -- scripts/common.sh@368 -- # return 0 00:07:56.248 11:38:26 event.cpu_locks -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:07:56.248 11:38:26 event.cpu_locks -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:07:56.248 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:56.248 --rc genhtml_branch_coverage=1 00:07:56.248 --rc genhtml_function_coverage=1 00:07:56.248 --rc genhtml_legend=1 00:07:56.248 --rc geninfo_all_blocks=1 00:07:56.248 --rc geninfo_unexecuted_blocks=1 00:07:56.248 00:07:56.248 ' 00:07:56.248 11:38:26 event.cpu_locks -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:07:56.248 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:56.248 --rc genhtml_branch_coverage=1 00:07:56.248 --rc genhtml_function_coverage=1 
00:07:56.248 --rc genhtml_legend=1 00:07:56.248 --rc geninfo_all_blocks=1 00:07:56.248 --rc geninfo_unexecuted_blocks=1 00:07:56.248 00:07:56.248 ' 00:07:56.248 11:38:26 event.cpu_locks -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:07:56.248 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:56.248 --rc genhtml_branch_coverage=1 00:07:56.248 --rc genhtml_function_coverage=1 00:07:56.248 --rc genhtml_legend=1 00:07:56.248 --rc geninfo_all_blocks=1 00:07:56.248 --rc geninfo_unexecuted_blocks=1 00:07:56.248 00:07:56.248 ' 00:07:56.248 11:38:26 event.cpu_locks -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:07:56.248 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:56.248 --rc genhtml_branch_coverage=1 00:07:56.248 --rc genhtml_function_coverage=1 00:07:56.248 --rc genhtml_legend=1 00:07:56.248 --rc geninfo_all_blocks=1 00:07:56.248 --rc geninfo_unexecuted_blocks=1 00:07:56.248 00:07:56.248 ' 00:07:56.248 11:38:26 event.cpu_locks -- event/cpu_locks.sh@11 -- # rpc_sock1=/var/tmp/spdk.sock 00:07:56.248 11:38:26 event.cpu_locks -- event/cpu_locks.sh@12 -- # rpc_sock2=/var/tmp/spdk2.sock 00:07:56.248 11:38:26 event.cpu_locks -- event/cpu_locks.sh@164 -- # trap cleanup EXIT SIGTERM SIGINT 00:07:56.248 11:38:26 event.cpu_locks -- event/cpu_locks.sh@166 -- # run_test default_locks default_locks 00:07:56.248 11:38:26 event.cpu_locks -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:07:56.248 11:38:26 event.cpu_locks -- common/autotest_common.sh@1111 -- # xtrace_disable 00:07:56.248 11:38:26 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:07:56.248 ************************************ 00:07:56.248 START TEST default_locks 00:07:56.248 ************************************ 00:07:56.248 11:38:26 event.cpu_locks.default_locks -- common/autotest_common.sh@1129 -- # default_locks 00:07:56.248 11:38:26 event.cpu_locks.default_locks -- event/cpu_locks.sh@46 -- # spdk_tgt_pid=72756 00:07:56.248 11:38:26 event.cpu_locks.default_locks -- event/cpu_locks.sh@47 -- # waitforlisten 72756 00:07:56.248 11:38:26 event.cpu_locks.default_locks -- event/cpu_locks.sh@45 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 00:07:56.248 11:38:26 event.cpu_locks.default_locks -- common/autotest_common.sh@835 -- # '[' -z 72756 ']' 00:07:56.248 11:38:26 event.cpu_locks.default_locks -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:56.249 11:38:26 event.cpu_locks.default_locks -- common/autotest_common.sh@840 -- # local max_retries=100 00:07:56.249 11:38:26 event.cpu_locks.default_locks -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:56.249 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:56.249 11:38:26 event.cpu_locks.default_locks -- common/autotest_common.sh@844 -- # xtrace_disable 00:07:56.249 11:38:26 event.cpu_locks.default_locks -- common/autotest_common.sh@10 -- # set +x 00:07:56.249 [2024-11-28 11:38:26.367649] Starting SPDK v25.01-pre git sha1 35cd3e84d / DPDK 24.11.0-rc4 initialization... 
00:07:56.249 [2024-11-28 11:38:26.368102] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid72756 ] 00:07:56.507 [2024-11-28 11:38:26.497516] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.11.0-rc4 is used. There is no support for it in SPDK. Enabled only for validation. 00:07:56.507 [2024-11-28 11:38:26.531366] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:56.507 [2024-11-28 11:38:26.605334] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:56.766 [2024-11-28 11:38:26.708995] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:07:57.362 11:38:27 event.cpu_locks.default_locks -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:07:57.362 11:38:27 event.cpu_locks.default_locks -- common/autotest_common.sh@868 -- # return 0 00:07:57.362 11:38:27 event.cpu_locks.default_locks -- event/cpu_locks.sh@49 -- # locks_exist 72756 00:07:57.362 11:38:27 event.cpu_locks.default_locks -- event/cpu_locks.sh@22 -- # lslocks -p 72756 00:07:57.362 11:38:27 event.cpu_locks.default_locks -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:07:57.937 11:38:27 event.cpu_locks.default_locks -- event/cpu_locks.sh@50 -- # killprocess 72756 00:07:57.937 11:38:27 event.cpu_locks.default_locks -- common/autotest_common.sh@954 -- # '[' -z 72756 ']' 00:07:57.937 11:38:27 event.cpu_locks.default_locks -- common/autotest_common.sh@958 -- # kill -0 72756 00:07:57.937 11:38:27 event.cpu_locks.default_locks -- common/autotest_common.sh@959 -- # uname 00:07:57.937 11:38:27 event.cpu_locks.default_locks -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:07:57.937 11:38:27 event.cpu_locks.default_locks -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 72756 00:07:57.937 killing process with pid 72756 00:07:57.937 11:38:27 event.cpu_locks.default_locks -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:07:57.937 11:38:27 event.cpu_locks.default_locks -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:07:57.937 11:38:27 event.cpu_locks.default_locks -- common/autotest_common.sh@972 -- # echo 'killing process with pid 72756' 00:07:57.937 11:38:27 event.cpu_locks.default_locks -- common/autotest_common.sh@973 -- # kill 72756 00:07:57.937 11:38:27 event.cpu_locks.default_locks -- common/autotest_common.sh@978 -- # wait 72756 00:07:58.196 11:38:28 event.cpu_locks.default_locks -- event/cpu_locks.sh@52 -- # NOT waitforlisten 72756 00:07:58.196 11:38:28 event.cpu_locks.default_locks -- common/autotest_common.sh@652 -- # local es=0 00:07:58.196 11:38:28 event.cpu_locks.default_locks -- common/autotest_common.sh@654 -- # valid_exec_arg waitforlisten 72756 00:07:58.196 11:38:28 event.cpu_locks.default_locks -- common/autotest_common.sh@640 -- # local arg=waitforlisten 00:07:58.196 11:38:28 event.cpu_locks.default_locks -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:07:58.196 11:38:28 event.cpu_locks.default_locks -- common/autotest_common.sh@644 -- # type -t waitforlisten 00:07:58.196 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
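That is the whole default_locks check: start spdk_tgt pinned to core 0 (-m 0x1), confirm the pid holds a file lock matching spdk_cpu_lock in lslocks output, kill it, and then expect a follow-up waitforlisten on the dead pid to fail (the "is no longer running" / "No such process" lines that follow). A hedged standalone sketch; the binary path is abbreviated, and waitforlisten/killprocess stand for the autotest_common.sh helpers seen in the trace.

# default_locks in miniature: the running target must hold a spdk_cpu_lock* file lock.
build/bin/spdk_tgt -m 0x1 &
tgt_pid=$!
waitforlisten "$tgt_pid"                       # helper: block until /var/tmp/spdk.sock answers

lslocks -p "$tgt_pid" | grep -q spdk_cpu_lock  # the core-mask lock is visible to lslocks

killprocess "$tgt_pid"                         # SIGTERM + wait, as the suite does
! waitforlisten "$tgt_pid"                     # negative check: the pid must be gone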
00:07:58.196 ERROR: process (pid: 72756) is no longer running 00:07:58.196 11:38:28 event.cpu_locks.default_locks -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:07:58.196 11:38:28 event.cpu_locks.default_locks -- common/autotest_common.sh@655 -- # waitforlisten 72756 00:07:58.196 11:38:28 event.cpu_locks.default_locks -- common/autotest_common.sh@835 -- # '[' -z 72756 ']' 00:07:58.196 11:38:28 event.cpu_locks.default_locks -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:58.196 11:38:28 event.cpu_locks.default_locks -- common/autotest_common.sh@840 -- # local max_retries=100 00:07:58.196 11:38:28 event.cpu_locks.default_locks -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:58.196 11:38:28 event.cpu_locks.default_locks -- common/autotest_common.sh@844 -- # xtrace_disable 00:07:58.196 11:38:28 event.cpu_locks.default_locks -- common/autotest_common.sh@10 -- # set +x 00:07:58.196 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 850: kill: (72756) - No such process 00:07:58.196 11:38:28 event.cpu_locks.default_locks -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:07:58.196 11:38:28 event.cpu_locks.default_locks -- common/autotest_common.sh@868 -- # return 1 00:07:58.196 11:38:28 event.cpu_locks.default_locks -- common/autotest_common.sh@655 -- # es=1 00:07:58.196 11:38:28 event.cpu_locks.default_locks -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:07:58.196 11:38:28 event.cpu_locks.default_locks -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:07:58.196 11:38:28 event.cpu_locks.default_locks -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:07:58.196 11:38:28 event.cpu_locks.default_locks -- event/cpu_locks.sh@54 -- # no_locks 00:07:58.196 11:38:28 event.cpu_locks.default_locks -- event/cpu_locks.sh@26 -- # lock_files=() 00:07:58.196 11:38:28 event.cpu_locks.default_locks -- event/cpu_locks.sh@26 -- # local lock_files 00:07:58.196 ************************************ 00:07:58.196 END TEST default_locks 00:07:58.196 ************************************ 00:07:58.196 11:38:28 event.cpu_locks.default_locks -- event/cpu_locks.sh@27 -- # (( 0 != 0 )) 00:07:58.196 00:07:58.196 real 0m1.939s 00:07:58.196 user 0m2.024s 00:07:58.196 sys 0m0.647s 00:07:58.196 11:38:28 event.cpu_locks.default_locks -- common/autotest_common.sh@1130 -- # xtrace_disable 00:07:58.196 11:38:28 event.cpu_locks.default_locks -- common/autotest_common.sh@10 -- # set +x 00:07:58.196 11:38:28 event.cpu_locks -- event/cpu_locks.sh@167 -- # run_test default_locks_via_rpc default_locks_via_rpc 00:07:58.196 11:38:28 event.cpu_locks -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:07:58.196 11:38:28 event.cpu_locks -- common/autotest_common.sh@1111 -- # xtrace_disable 00:07:58.196 11:38:28 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:07:58.196 ************************************ 00:07:58.196 START TEST default_locks_via_rpc 00:07:58.196 ************************************ 00:07:58.196 11:38:28 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@1129 -- # default_locks_via_rpc 00:07:58.196 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:07:58.196 11:38:28 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@62 -- # spdk_tgt_pid=72808 00:07:58.196 11:38:28 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@63 -- # waitforlisten 72808 00:07:58.197 11:38:28 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@61 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 00:07:58.197 11:38:28 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@835 -- # '[' -z 72808 ']' 00:07:58.197 11:38:28 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:58.197 11:38:28 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@840 -- # local max_retries=100 00:07:58.197 11:38:28 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:58.197 11:38:28 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@844 -- # xtrace_disable 00:07:58.197 11:38:28 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:58.456 [2024-11-28 11:38:28.355270] Starting SPDK v25.01-pre git sha1 35cd3e84d / DPDK 24.11.0-rc4 initialization... 00:07:58.456 [2024-11-28 11:38:28.355720] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid72808 ] 00:07:58.456 [2024-11-28 11:38:28.482253] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.11.0-rc4 is used. There is no support for it in SPDK. Enabled only for validation. 00:07:58.456 [2024-11-28 11:38:28.508271] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:58.456 [2024-11-28 11:38:28.560242] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:58.716 [2024-11-28 11:38:28.629916] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:07:59.285 11:38:29 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:07:59.285 11:38:29 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@868 -- # return 0 00:07:59.285 11:38:29 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@65 -- # rpc_cmd framework_disable_cpumask_locks 00:07:59.285 11:38:29 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:59.285 11:38:29 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:59.285 11:38:29 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:59.285 11:38:29 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@67 -- # no_locks 00:07:59.285 11:38:29 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@26 -- # lock_files=() 00:07:59.285 11:38:29 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@26 -- # local lock_files 00:07:59.285 11:38:29 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@27 -- # (( 0 != 0 )) 00:07:59.285 11:38:29 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@69 -- # rpc_cmd framework_enable_cpumask_locks 00:07:59.285 11:38:29 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:59.285 11:38:29 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:59.285 
11:38:29 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:59.285 11:38:29 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@71 -- # locks_exist 72808 00:07:59.285 11:38:29 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@22 -- # lslocks -p 72808 00:07:59.285 11:38:29 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:07:59.853 11:38:29 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@73 -- # killprocess 72808 00:07:59.853 11:38:29 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@954 -- # '[' -z 72808 ']' 00:07:59.853 11:38:29 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@958 -- # kill -0 72808 00:07:59.853 11:38:29 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@959 -- # uname 00:07:59.853 11:38:29 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:07:59.853 11:38:29 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 72808 00:07:59.853 killing process with pid 72808 00:07:59.853 11:38:29 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:07:59.853 11:38:29 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:07:59.853 11:38:29 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@972 -- # echo 'killing process with pid 72808' 00:07:59.853 11:38:29 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@973 -- # kill 72808 00:07:59.853 11:38:29 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@978 -- # wait 72808 00:08:00.112 ************************************ 00:08:00.112 END TEST default_locks_via_rpc 00:08:00.112 ************************************ 00:08:00.112 00:08:00.112 real 0m1.855s 00:08:00.112 user 0m2.029s 00:08:00.112 sys 0m0.532s 00:08:00.112 11:38:30 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@1130 -- # xtrace_disable 00:08:00.112 11:38:30 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:08:00.112 11:38:30 event.cpu_locks -- event/cpu_locks.sh@168 -- # run_test non_locking_app_on_locked_coremask non_locking_app_on_locked_coremask 00:08:00.112 11:38:30 event.cpu_locks -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:08:00.112 11:38:30 event.cpu_locks -- common/autotest_common.sh@1111 -- # xtrace_disable 00:08:00.112 11:38:30 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:08:00.112 ************************************ 00:08:00.112 START TEST non_locking_app_on_locked_coremask 00:08:00.112 ************************************ 00:08:00.112 11:38:30 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@1129 -- # non_locking_app_on_locked_coremask 00:08:00.112 11:38:30 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@80 -- # spdk_tgt_pid=72859 00:08:00.112 11:38:30 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@81 -- # waitforlisten 72859 /var/tmp/spdk.sock 00:08:00.112 11:38:30 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@79 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 00:08:00.112 11:38:30 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@835 -- # '[' -z 72859 ']' 00:08:00.112 11:38:30 event.cpu_locks.non_locking_app_on_locked_coremask 
-- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:00.112 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:08:00.112 11:38:30 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@840 -- # local max_retries=100 00:08:00.112 11:38:30 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:08:00.112 11:38:30 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@844 -- # xtrace_disable 00:08:00.113 11:38:30 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:08:00.372 [2024-11-28 11:38:30.267818] Starting SPDK v25.01-pre git sha1 35cd3e84d / DPDK 24.11.0-rc4 initialization... 00:08:00.372 [2024-11-28 11:38:30.267923] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid72859 ] 00:08:00.372 [2024-11-28 11:38:30.393253] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.11.0-rc4 is used. There is no support for it in SPDK. Enabled only for validation. 00:08:00.372 [2024-11-28 11:38:30.412948] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:00.372 [2024-11-28 11:38:30.453600] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:08:00.630 [2024-11-28 11:38:30.525767] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:08:01.199 11:38:31 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:08:01.199 11:38:31 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@868 -- # return 0 00:08:01.199 11:38:31 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@84 -- # spdk_tgt_pid2=72875 00:08:01.199 11:38:31 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@85 -- # waitforlisten 72875 /var/tmp/spdk2.sock 00:08:01.199 11:38:31 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@83 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 --disable-cpumask-locks -r /var/tmp/spdk2.sock 00:08:01.199 11:38:31 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@835 -- # '[' -z 72875 ']' 00:08:01.199 11:38:31 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk2.sock 00:08:01.199 11:38:31 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@840 -- # local max_retries=100 00:08:01.199 11:38:31 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:08:01.199 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:08:01.199 11:38:31 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@844 -- # xtrace_disable 00:08:01.199 11:38:31 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:08:01.458 [2024-11-28 11:38:31.380606] Starting SPDK v25.01-pre git sha1 35cd3e84d / DPDK 24.11.0-rc4 initialization... 
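For reference, the two helpers this trace keeps exercising can be sketched as follows. This is only an approximation reconstructed from the xtrace lines above (lslocks/grep in cpu_locks.sh@22, the kill -0/ps/wait sequence in autotest_common.sh); the authoritative definitions live in test/event/cpu_locks.sh and test/common/autotest_common.sh and handle more corner cases (e.g. targets running under sudo):

  locks_exist() {
      local pid=$1
      # pass only if the pid holds a file lock whose path contains spdk_cpu_lock
      lslocks -p "$pid" | grep -q spdk_cpu_lock
  }

  killprocess() {
      local pid=$1
      [ -n "$pid" ] || return 1           # mirrors the '[' -z <pid> ']' guard
      kill -0 "$pid" || return 1          # the process must still exist
      if [ "$(uname)" = Linux ]; then
          local process_name
          process_name=$(ps --no-headers -o comm= "$pid")
          # the real helper branches on process_name (e.g. sudo); skipped here
      fi
      echo "killing process with pid $pid"
      kill "$pid"
      wait "$pid"                         # collect the target's exit status
  }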
00:08:01.458 [2024-11-28 11:38:31.380966] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid72875 ] 00:08:01.458 [2024-11-28 11:38:31.510210] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.11.0-rc4 is used. There is no support for it in SPDK. Enabled only for validation. 00:08:01.458 [2024-11-28 11:38:31.549547] app.c: 916:spdk_app_start: *NOTICE*: CPU core locks deactivated. 00:08:01.459 [2024-11-28 11:38:31.549584] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:01.718 [2024-11-28 11:38:31.654867] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:08:01.718 [2024-11-28 11:38:31.797305] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:08:02.656 11:38:32 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:08:02.656 11:38:32 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@868 -- # return 0 00:08:02.656 11:38:32 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@87 -- # locks_exist 72859 00:08:02.656 11:38:32 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:08:02.656 11:38:32 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # lslocks -p 72859 00:08:03.225 11:38:33 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@89 -- # killprocess 72859 00:08:03.225 11:38:33 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@954 -- # '[' -z 72859 ']' 00:08:03.225 11:38:33 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@958 -- # kill -0 72859 00:08:03.225 11:38:33 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@959 -- # uname 00:08:03.225 11:38:33 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:08:03.225 11:38:33 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 72859 00:08:03.225 11:38:33 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:08:03.225 11:38:33 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:08:03.225 killing process with pid 72859 00:08:03.225 11:38:33 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@972 -- # echo 'killing process with pid 72859' 00:08:03.226 11:38:33 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@973 -- # kill 72859 00:08:03.226 11:38:33 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@978 -- # wait 72859 00:08:04.163 11:38:34 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@90 -- # killprocess 72875 00:08:04.163 11:38:34 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@954 -- # '[' -z 72875 ']' 00:08:04.163 11:38:34 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@958 -- # kill -0 72875 00:08:04.163 11:38:34 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@959 -- # uname 00:08:04.163 11:38:34 
event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:08:04.163 11:38:34 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 72875 00:08:04.163 killing process with pid 72875 00:08:04.163 11:38:34 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:08:04.163 11:38:34 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:08:04.164 11:38:34 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@972 -- # echo 'killing process with pid 72875' 00:08:04.164 11:38:34 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@973 -- # kill 72875 00:08:04.164 11:38:34 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@978 -- # wait 72875 00:08:04.423 ************************************ 00:08:04.423 END TEST non_locking_app_on_locked_coremask 00:08:04.423 ************************************ 00:08:04.423 00:08:04.423 real 0m4.251s 00:08:04.423 user 0m4.851s 00:08:04.423 sys 0m1.188s 00:08:04.423 11:38:34 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@1130 -- # xtrace_disable 00:08:04.423 11:38:34 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:08:04.423 11:38:34 event.cpu_locks -- event/cpu_locks.sh@169 -- # run_test locking_app_on_unlocked_coremask locking_app_on_unlocked_coremask 00:08:04.423 11:38:34 event.cpu_locks -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:08:04.423 11:38:34 event.cpu_locks -- common/autotest_common.sh@1111 -- # xtrace_disable 00:08:04.423 11:38:34 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:08:04.423 ************************************ 00:08:04.423 START TEST locking_app_on_unlocked_coremask 00:08:04.423 ************************************ 00:08:04.423 11:38:34 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@1129 -- # locking_app_on_unlocked_coremask 00:08:04.423 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:08:04.423 11:38:34 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@98 -- # spdk_tgt_pid=72942 00:08:04.423 11:38:34 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@99 -- # waitforlisten 72942 /var/tmp/spdk.sock 00:08:04.423 11:38:34 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@835 -- # '[' -z 72942 ']' 00:08:04.423 11:38:34 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:04.423 11:38:34 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@97 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 --disable-cpumask-locks 00:08:04.423 11:38:34 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@840 -- # local max_retries=100 00:08:04.423 11:38:34 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
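In outline, the non_locking_app_on_locked_coremask case that just completed does roughly the following (a sketch inferred from the trace; variable names follow cpu_locks.sh, and waitforlisten/locks_exist/killprocess are the harness helpers seen above):

  spdk_tgt=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt

  # first target claims core 0 (mask 0x1) and takes the spdk_cpu_lock file
  "$spdk_tgt" -m 0x1 &
  spdk_tgt_pid=$!
  waitforlisten "$spdk_tgt_pid" /var/tmp/spdk.sock

  # second target reuses the same mask but opts out of core locking,
  # so it starts cleanly even though core 0 is already locked
  "$spdk_tgt" -m 0x1 --disable-cpumask-locks -r /var/tmp/spdk2.sock &
  spdk_tgt_pid2=$!
  waitforlisten "$spdk_tgt_pid2" /var/tmp/spdk2.sock

  locks_exist "$spdk_tgt_pid"     # the lock is still owned by the first instance
  killprocess "$spdk_tgt_pid"
  killprocess "$spdk_tgt_pid2"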
00:08:04.423 11:38:34 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@844 -- # xtrace_disable 00:08:04.423 11:38:34 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@10 -- # set +x 00:08:04.683 [2024-11-28 11:38:34.567936] Starting SPDK v25.01-pre git sha1 35cd3e84d / DPDK 24.11.0-rc4 initialization... 00:08:04.683 [2024-11-28 11:38:34.568044] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid72942 ] 00:08:04.683 [2024-11-28 11:38:34.693986] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.11.0-rc4 is used. There is no support for it in SPDK. Enabled only for validation. 00:08:04.683 [2024-11-28 11:38:34.724834] app.c: 916:spdk_app_start: *NOTICE*: CPU core locks deactivated. 00:08:04.683 [2024-11-28 11:38:34.724888] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:04.683 [2024-11-28 11:38:34.777639] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:08:04.942 [2024-11-28 11:38:34.852244] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:08:05.508 11:38:35 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:08:05.508 11:38:35 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@868 -- # return 0 00:08:05.508 11:38:35 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@102 -- # spdk_tgt_pid2=72958 00:08:05.508 11:38:35 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@101 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 -r /var/tmp/spdk2.sock 00:08:05.508 11:38:35 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@103 -- # waitforlisten 72958 /var/tmp/spdk2.sock 00:08:05.508 11:38:35 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@835 -- # '[' -z 72958 ']' 00:08:05.508 11:38:35 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk2.sock 00:08:05.508 11:38:35 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@840 -- # local max_retries=100 00:08:05.508 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:08:05.508 11:38:35 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:08:05.508 11:38:35 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@844 -- # xtrace_disable 00:08:05.508 11:38:35 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@10 -- # set +x 00:08:05.767 [2024-11-28 11:38:35.650514] Starting SPDK v25.01-pre git sha1 35cd3e84d / DPDK 24.11.0-rc4 initialization... 00:08:05.767 [2024-11-28 11:38:35.650995] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid72958 ] 00:08:05.767 [2024-11-28 11:38:35.779626] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.11.0-rc4 is used. There is no support for it in SPDK. 
Enabled only for validation. 00:08:05.767 [2024-11-28 11:38:35.819301] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:06.026 [2024-11-28 11:38:35.932432] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:08:06.026 [2024-11-28 11:38:36.092961] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:08:06.964 11:38:36 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:08:06.964 11:38:36 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@868 -- # return 0 00:08:06.964 11:38:36 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@105 -- # locks_exist 72958 00:08:06.964 11:38:36 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@22 -- # lslocks -p 72958 00:08:06.964 11:38:36 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:08:07.532 11:38:37 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@107 -- # killprocess 72942 00:08:07.532 11:38:37 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@954 -- # '[' -z 72942 ']' 00:08:07.532 11:38:37 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@958 -- # kill -0 72942 00:08:07.532 11:38:37 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@959 -- # uname 00:08:07.532 11:38:37 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:08:07.532 11:38:37 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 72942 00:08:07.532 11:38:37 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:08:07.532 11:38:37 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:08:07.532 killing process with pid 72942 00:08:07.532 11:38:37 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@972 -- # echo 'killing process with pid 72942' 00:08:07.532 11:38:37 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@973 -- # kill 72942 00:08:07.532 11:38:37 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@978 -- # wait 72942 00:08:08.471 11:38:38 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@108 -- # killprocess 72958 00:08:08.471 11:38:38 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@954 -- # '[' -z 72958 ']' 00:08:08.471 11:38:38 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@958 -- # kill -0 72958 00:08:08.471 11:38:38 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@959 -- # uname 00:08:08.471 11:38:38 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:08:08.471 11:38:38 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 72958 00:08:08.471 killing process with pid 72958 00:08:08.471 11:38:38 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:08:08.471 11:38:38 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:08:08.471 11:38:38 
event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@972 -- # echo 'killing process with pid 72958' 00:08:08.471 11:38:38 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@973 -- # kill 72958 00:08:08.471 11:38:38 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@978 -- # wait 72958 00:08:08.730 00:08:08.730 real 0m4.232s 00:08:08.730 user 0m4.808s 00:08:08.730 sys 0m1.206s 00:08:08.730 11:38:38 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@1130 -- # xtrace_disable 00:08:08.730 ************************************ 00:08:08.730 END TEST locking_app_on_unlocked_coremask 00:08:08.730 ************************************ 00:08:08.730 11:38:38 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@10 -- # set +x 00:08:08.730 11:38:38 event.cpu_locks -- event/cpu_locks.sh@170 -- # run_test locking_app_on_locked_coremask locking_app_on_locked_coremask 00:08:08.730 11:38:38 event.cpu_locks -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:08:08.730 11:38:38 event.cpu_locks -- common/autotest_common.sh@1111 -- # xtrace_disable 00:08:08.730 11:38:38 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:08:08.730 ************************************ 00:08:08.730 START TEST locking_app_on_locked_coremask 00:08:08.730 ************************************ 00:08:08.730 11:38:38 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@1129 -- # locking_app_on_locked_coremask 00:08:08.730 11:38:38 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@115 -- # spdk_tgt_pid=73025 00:08:08.730 11:38:38 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@116 -- # waitforlisten 73025 /var/tmp/spdk.sock 00:08:08.730 11:38:38 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@114 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 00:08:08.730 11:38:38 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@835 -- # '[' -z 73025 ']' 00:08:08.730 11:38:38 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:08.730 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:08:08.730 11:38:38 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@840 -- # local max_retries=100 00:08:08.730 11:38:38 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:08:08.730 11:38:38 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@844 -- # xtrace_disable 00:08:08.730 11:38:38 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:08:08.989 [2024-11-28 11:38:38.865908] Starting SPDK v25.01-pre git sha1 35cd3e84d / DPDK 24.11.0-rc4 initialization... 00:08:08.989 [2024-11-28 11:38:38.866032] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid73025 ] 00:08:08.989 [2024-11-28 11:38:38.994326] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.11.0-rc4 is used. There is no support for it in SPDK. Enabled only for validation. 
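The locking_app_on_unlocked_coremask case that just finished inverts the roles: the first target disables locking, the second keeps it enabled and ends up owning the lock, which is why lslocks targets the second pid (72958) in the trace. A minimal sketch under the same assumptions as above:

  # first target runs with locking disabled, so core 0 stays unlocked
  "$spdk_tgt" -m 0x1 --disable-cpumask-locks &
  spdk_tgt_pid=$!
  waitforlisten "$spdk_tgt_pid" /var/tmp/spdk.sock

  # second target keeps locking enabled and claims core 0 itself
  "$spdk_tgt" -m 0x1 -r /var/tmp/spdk2.sock &
  spdk_tgt_pid2=$!
  waitforlisten "$spdk_tgt_pid2" /var/tmp/spdk2.sock

  locks_exist "$spdk_tgt_pid2"    # note: the lock check targets the second pid here
  killprocess "$spdk_tgt_pid"
  killprocess "$spdk_tgt_pid2"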
00:08:08.989 [2024-11-28 11:38:39.016909] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:08.989 [2024-11-28 11:38:39.068388] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:08:09.248 [2024-11-28 11:38:39.141784] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:08:09.248 11:38:39 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:08:09.248 11:38:39 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@868 -- # return 0 00:08:09.248 11:38:39 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@119 -- # spdk_tgt_pid2=73034 00:08:09.248 11:38:39 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@120 -- # NOT waitforlisten 73034 /var/tmp/spdk2.sock 00:08:09.248 11:38:39 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@652 -- # local es=0 00:08:09.249 11:38:39 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@118 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 -r /var/tmp/spdk2.sock 00:08:09.249 11:38:39 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@654 -- # valid_exec_arg waitforlisten 73034 /var/tmp/spdk2.sock 00:08:09.249 11:38:39 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@640 -- # local arg=waitforlisten 00:08:09.249 11:38:39 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:08:09.249 11:38:39 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@644 -- # type -t waitforlisten 00:08:09.249 11:38:39 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:08:09.249 11:38:39 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@655 -- # waitforlisten 73034 /var/tmp/spdk2.sock 00:08:09.249 11:38:39 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@835 -- # '[' -z 73034 ']' 00:08:09.249 11:38:39 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk2.sock 00:08:09.249 11:38:39 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@840 -- # local max_retries=100 00:08:09.249 11:38:39 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:08:09.249 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:08:09.249 11:38:39 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@844 -- # xtrace_disable 00:08:09.249 11:38:39 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:08:09.508 [2024-11-28 11:38:39.405447] Starting SPDK v25.01-pre git sha1 35cd3e84d / DPDK 24.11.0-rc4 initialization... 00:08:09.509 [2024-11-28 11:38:39.405820] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid73034 ] 00:08:09.509 [2024-11-28 11:38:39.527742] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.11.0-rc4 is used. There is no support for it in SPDK. Enabled only for validation. 
00:08:09.509 [2024-11-28 11:38:39.572561] app.c: 781:claim_cpu_cores: *ERROR*: Cannot create lock on core 0, probably process 73025 has claimed it. 00:08:09.509 [2024-11-28 11:38:39.572643] app.c: 912:spdk_app_start: *ERROR*: Unable to acquire lock on assigned core mask - exiting. 00:08:10.078 ERROR: process (pid: 73034) is no longer running 00:08:10.078 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 850: kill: (73034) - No such process 00:08:10.078 11:38:40 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:08:10.078 11:38:40 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@868 -- # return 1 00:08:10.078 11:38:40 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@655 -- # es=1 00:08:10.078 11:38:40 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:08:10.078 11:38:40 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:08:10.078 11:38:40 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:08:10.078 11:38:40 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@122 -- # locks_exist 73025 00:08:10.078 11:38:40 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # lslocks -p 73025 00:08:10.078 11:38:40 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:08:10.645 11:38:40 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@124 -- # killprocess 73025 00:08:10.645 11:38:40 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@954 -- # '[' -z 73025 ']' 00:08:10.645 11:38:40 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@958 -- # kill -0 73025 00:08:10.645 11:38:40 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@959 -- # uname 00:08:10.645 11:38:40 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:08:10.645 11:38:40 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 73025 00:08:10.645 killing process with pid 73025 00:08:10.645 11:38:40 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:08:10.645 11:38:40 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:08:10.645 11:38:40 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@972 -- # echo 'killing process with pid 73025' 00:08:10.645 11:38:40 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@973 -- # kill 73025 00:08:10.645 11:38:40 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@978 -- # wait 73025 00:08:10.905 00:08:10.905 real 0m2.101s 00:08:10.905 user 0m2.310s 00:08:10.905 sys 0m0.604s 00:08:10.905 ************************************ 00:08:10.905 END TEST locking_app_on_locked_coremask 00:08:10.905 ************************************ 00:08:10.905 11:38:40 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@1130 -- # xtrace_disable 00:08:10.905 11:38:40 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:08:10.905 11:38:40 event.cpu_locks -- event/cpu_locks.sh@171 -- # run_test locking_overlapped_coremask 
locking_overlapped_coremask 00:08:10.905 11:38:40 event.cpu_locks -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:08:10.905 11:38:40 event.cpu_locks -- common/autotest_common.sh@1111 -- # xtrace_disable 00:08:10.905 11:38:40 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:08:10.905 ************************************ 00:08:10.905 START TEST locking_overlapped_coremask 00:08:10.905 ************************************ 00:08:10.905 11:38:40 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@1129 -- # locking_overlapped_coremask 00:08:10.905 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:08:10.905 11:38:40 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@132 -- # spdk_tgt_pid=73079 00:08:10.905 11:38:40 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@131 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x7 00:08:10.905 11:38:40 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@133 -- # waitforlisten 73079 /var/tmp/spdk.sock 00:08:10.905 11:38:40 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@835 -- # '[' -z 73079 ']' 00:08:10.905 11:38:40 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:10.905 11:38:40 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@840 -- # local max_retries=100 00:08:10.905 11:38:40 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:08:10.905 11:38:40 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@844 -- # xtrace_disable 00:08:10.905 11:38:40 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@10 -- # set +x 00:08:10.905 [2024-11-28 11:38:41.002254] Starting SPDK v25.01-pre git sha1 35cd3e84d / DPDK 24.11.0-rc4 initialization... 00:08:10.905 [2024-11-28 11:38:41.002595] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid73079 ] 00:08:11.164 [2024-11-28 11:38:41.125164] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.11.0-rc4 is used. There is no support for it in SPDK. Enabled only for validation. 
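The locked-coremask case above relies on an expected-failure wrapper: the second target (pid 73034) must exit with "Cannot create lock on core 0", and the test only passes because the wrapper inverts that status. The exact body of the NOT helper is not visible in this trace; a rough sketch of the pattern it implements (the real helper also special-cases exits above 128 and an optional allow-list of statuses, per the es handling shown above):

  NOT() {
      local es=0
      "$@" || es=$?
      # succeed only if the wrapped command failed
      (( es != 0 ))
  }

  # passes because the second target exits instead of ever listening
  NOT waitforlisten "$spdk_tgt_pid2" /var/tmp/spdk2.sock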
00:08:11.164 [2024-11-28 11:38:41.150596] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:08:11.164 [2024-11-28 11:38:41.205397] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:08:11.164 [2024-11-28 11:38:41.205565] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:08:11.164 [2024-11-28 11:38:41.205566] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:08:11.164 [2024-11-28 11:38:41.273569] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:08:11.423 11:38:41 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:08:11.423 11:38:41 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@868 -- # return 0 00:08:11.423 11:38:41 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@136 -- # spdk_tgt_pid2=73090 00:08:11.423 11:38:41 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@135 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1c -r /var/tmp/spdk2.sock 00:08:11.423 11:38:41 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@137 -- # NOT waitforlisten 73090 /var/tmp/spdk2.sock 00:08:11.424 11:38:41 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@652 -- # local es=0 00:08:11.424 11:38:41 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@654 -- # valid_exec_arg waitforlisten 73090 /var/tmp/spdk2.sock 00:08:11.424 11:38:41 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@640 -- # local arg=waitforlisten 00:08:11.424 11:38:41 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:08:11.424 11:38:41 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@644 -- # type -t waitforlisten 00:08:11.424 11:38:41 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:08:11.424 11:38:41 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@655 -- # waitforlisten 73090 /var/tmp/spdk2.sock 00:08:11.424 11:38:41 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@835 -- # '[' -z 73090 ']' 00:08:11.424 11:38:41 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk2.sock 00:08:11.424 11:38:41 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@840 -- # local max_retries=100 00:08:11.424 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:08:11.424 11:38:41 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:08:11.424 11:38:41 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@844 -- # xtrace_disable 00:08:11.424 11:38:41 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@10 -- # set +x 00:08:11.424 [2024-11-28 11:38:41.544083] Starting SPDK v25.01-pre git sha1 35cd3e84d / DPDK 24.11.0-rc4 initialization... 
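Mask arithmetic behind the overlapped case being set up here: 0x7 covers cores 0-2 (held by pid 73079) and 0x1c covers cores 2-4, so the two requests collide on core 2, which is exactly the core named in the failure captured just below. A quick check:

  printf 'contested mask: 0x%x\n' $(( 0x7 & 0x1c ))   # -> 0x4, i.e. core 2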
00:08:11.424 [2024-11-28 11:38:41.544198] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1c --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid73090 ] 00:08:11.683 [2024-11-28 11:38:41.673751] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.11.0-rc4 is used. There is no support for it in SPDK. Enabled only for validation. 00:08:11.683 [2024-11-28 11:38:41.715851] app.c: 781:claim_cpu_cores: *ERROR*: Cannot create lock on core 2, probably process 73079 has claimed it. 00:08:11.683 [2024-11-28 11:38:41.715926] app.c: 912:spdk_app_start: *ERROR*: Unable to acquire lock on assigned core mask - exiting. 00:08:12.252 ERROR: process (pid: 73090) is no longer running 00:08:12.252 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 850: kill: (73090) - No such process 00:08:12.252 11:38:42 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:08:12.252 11:38:42 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@868 -- # return 1 00:08:12.252 11:38:42 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@655 -- # es=1 00:08:12.252 11:38:42 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:08:12.252 11:38:42 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:08:12.252 11:38:42 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:08:12.252 11:38:42 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@139 -- # check_remaining_locks 00:08:12.252 11:38:42 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@36 -- # locks=(/var/tmp/spdk_cpu_lock_*) 00:08:12.252 11:38:42 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@37 -- # locks_expected=(/var/tmp/spdk_cpu_lock_{000..002}) 00:08:12.252 11:38:42 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@38 -- # [[ /var/tmp/spdk_cpu_lock_000 /var/tmp/spdk_cpu_lock_001 /var/tmp/spdk_cpu_lock_002 == \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\0\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\1\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\2 ]] 00:08:12.252 11:38:42 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@141 -- # killprocess 73079 00:08:12.252 11:38:42 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@954 -- # '[' -z 73079 ']' 00:08:12.252 11:38:42 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@958 -- # kill -0 73079 00:08:12.252 11:38:42 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@959 -- # uname 00:08:12.252 11:38:42 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:08:12.252 11:38:42 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 73079 00:08:12.252 killing process with pid 73079 00:08:12.252 11:38:42 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:08:12.252 11:38:42 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:08:12.252 11:38:42 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@972 -- # echo 'killing process with 
pid 73079' 00:08:12.252 11:38:42 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@973 -- # kill 73079 00:08:12.252 11:38:42 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@978 -- # wait 73079 00:08:12.823 00:08:12.823 real 0m1.763s 00:08:12.823 user 0m4.812s 00:08:12.823 sys 0m0.443s 00:08:12.823 11:38:42 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@1130 -- # xtrace_disable 00:08:12.823 11:38:42 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@10 -- # set +x 00:08:12.823 ************************************ 00:08:12.823 END TEST locking_overlapped_coremask 00:08:12.823 ************************************ 00:08:12.823 11:38:42 event.cpu_locks -- event/cpu_locks.sh@172 -- # run_test locking_overlapped_coremask_via_rpc locking_overlapped_coremask_via_rpc 00:08:12.823 11:38:42 event.cpu_locks -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:08:12.823 11:38:42 event.cpu_locks -- common/autotest_common.sh@1111 -- # xtrace_disable 00:08:12.823 11:38:42 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:08:12.823 ************************************ 00:08:12.823 START TEST locking_overlapped_coremask_via_rpc 00:08:12.823 ************************************ 00:08:12.823 11:38:42 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@1129 -- # locking_overlapped_coremask_via_rpc 00:08:12.823 11:38:42 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@148 -- # spdk_tgt_pid=73135 00:08:12.823 11:38:42 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@149 -- # waitforlisten 73135 /var/tmp/spdk.sock 00:08:12.823 11:38:42 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@147 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x7 --disable-cpumask-locks 00:08:12.823 11:38:42 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@835 -- # '[' -z 73135 ']' 00:08:12.823 11:38:42 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:12.823 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:08:12.823 11:38:42 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@840 -- # local max_retries=100 00:08:12.823 11:38:42 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:08:12.823 11:38:42 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@844 -- # xtrace_disable 00:08:12.823 11:38:42 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:08:12.823 [2024-11-28 11:38:42.835890] Starting SPDK v25.01-pre git sha1 35cd3e84d / DPDK 24.11.0-rc4 initialization... 00:08:12.823 [2024-11-28 11:38:42.836011] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid73135 ] 00:08:13.081 [2024-11-28 11:38:42.964372] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.11.0-rc4 is used. There is no support for it in SPDK. Enabled only for validation. 
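Both overlapped-coremask tests end with the same on-disk assertion. Reconstructed from the cpu_locks.sh@36-38 lines in the trace, it amounts to checking that only the locks of the surviving -m 0x7 target (cores 0, 1 and 2) remain:

  check_remaining_locks() {
      local locks=(/var/tmp/spdk_cpu_lock_*)
      local locks_expected=(/var/tmp/spdk_cpu_lock_{000..002})
      # glob expansion must match exactly the three expected lock files
      [[ ${locks[*]} == "${locks_expected[*]}" ]]
  }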
00:08:13.081 [2024-11-28 11:38:42.990154] app.c: 916:spdk_app_start: *NOTICE*: CPU core locks deactivated. 00:08:13.081 [2024-11-28 11:38:42.990571] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:08:13.081 [2024-11-28 11:38:43.044217] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:08:13.081 [2024-11-28 11:38:43.044120] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:08:13.081 [2024-11-28 11:38:43.044209] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:08:13.081 [2024-11-28 11:38:43.116306] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:08:14.020 11:38:43 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:08:14.020 11:38:43 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@868 -- # return 0 00:08:14.020 11:38:43 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@152 -- # spdk_tgt_pid2=73153 00:08:14.020 11:38:43 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@151 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1c -r /var/tmp/spdk2.sock --disable-cpumask-locks 00:08:14.020 11:38:43 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@153 -- # waitforlisten 73153 /var/tmp/spdk2.sock 00:08:14.020 11:38:43 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@835 -- # '[' -z 73153 ']' 00:08:14.020 11:38:43 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk2.sock 00:08:14.020 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:08:14.020 11:38:43 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@840 -- # local max_retries=100 00:08:14.020 11:38:43 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:08:14.020 11:38:43 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@844 -- # xtrace_disable 00:08:14.020 11:38:43 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:08:14.020 [2024-11-28 11:38:43.885487] Starting SPDK v25.01-pre git sha1 35cd3e84d / DPDK 24.11.0-rc4 initialization... 00:08:14.020 [2024-11-28 11:38:43.885642] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1c --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid73153 ] 00:08:14.020 [2024-11-28 11:38:44.022393] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.11.0-rc4 is used. There is no support for it in SPDK. Enabled only for validation. 00:08:14.020 [2024-11-28 11:38:44.063183] app.c: 916:spdk_app_start: *NOTICE*: CPU core locks deactivated. 
00:08:14.020 [2024-11-28 11:38:44.063225] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:08:14.280 [2024-11-28 11:38:44.171774] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:08:14.280 [2024-11-28 11:38:44.175461] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 4 00:08:14.280 [2024-11-28 11:38:44.175463] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:08:14.280 [2024-11-28 11:38:44.363293] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:08:14.848 11:38:44 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:08:14.848 11:38:44 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@868 -- # return 0 00:08:14.848 11:38:44 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@155 -- # rpc_cmd framework_enable_cpumask_locks 00:08:14.848 11:38:44 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:14.848 11:38:44 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:08:14.848 11:38:44 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:14.848 11:38:44 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@156 -- # NOT rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks 00:08:14.848 11:38:44 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@652 -- # local es=0 00:08:14.848 11:38:44 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks 00:08:14.848 11:38:44 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:08:14.848 11:38:44 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:08:14.848 11:38:44 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:08:14.848 11:38:44 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:08:14.848 11:38:44 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@655 -- # rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks 00:08:14.848 11:38:44 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:14.848 11:38:44 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:08:14.848 [2024-11-28 11:38:44.880583] app.c: 781:claim_cpu_cores: *ERROR*: Cannot create lock on core 2, probably process 73135 has claimed it. 
00:08:14.848 request: 00:08:14.848 { 00:08:14.848 "method": "framework_enable_cpumask_locks", 00:08:14.848 "req_id": 1 00:08:14.848 } 00:08:14.848 Got JSON-RPC error response 00:08:14.848 response: 00:08:14.848 { 00:08:14.848 "code": -32603, 00:08:14.848 "message": "Failed to claim CPU core: 2" 00:08:14.848 } 00:08:14.848 11:38:44 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:08:14.848 11:38:44 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@655 -- # es=1 00:08:14.848 11:38:44 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:08:14.848 11:38:44 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:08:14.848 11:38:44 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:08:14.848 11:38:44 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@158 -- # waitforlisten 73135 /var/tmp/spdk.sock 00:08:14.848 11:38:44 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@835 -- # '[' -z 73135 ']' 00:08:14.848 11:38:44 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:14.848 11:38:44 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@840 -- # local max_retries=100 00:08:14.848 11:38:44 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:08:14.848 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:08:14.848 11:38:44 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@844 -- # xtrace_disable 00:08:14.848 11:38:44 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:08:15.111 11:38:45 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:08:15.111 11:38:45 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@868 -- # return 0 00:08:15.111 11:38:45 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@159 -- # waitforlisten 73153 /var/tmp/spdk2.sock 00:08:15.111 11:38:45 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@835 -- # '[' -z 73153 ']' 00:08:15.111 11:38:45 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk2.sock 00:08:15.111 11:38:45 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@840 -- # local max_retries=100 00:08:15.111 11:38:45 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:08:15.111 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 
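Outside the harness, the same toggle can be driven with scripts/rpc.py; a hedged example matching the two rpc_cmd calls above (socket paths as in the trace, and the second call is expected to fail exactly as captured):

  # first target (default socket): claims its cores at runtime and succeeds
  scripts/rpc.py framework_enable_cpumask_locks

  # second target: core 2 is already claimed by pid 73135, so the RPC returns
  # JSON-RPC error -32603 "Failed to claim CPU core: 2"
  scripts/rpc.py -s /var/tmp/spdk2.sock framework_enable_cpumask_locks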
00:08:15.111 11:38:45 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@844 -- # xtrace_disable 00:08:15.111 11:38:45 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:08:15.679 11:38:45 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:08:15.679 ************************************ 00:08:15.679 END TEST locking_overlapped_coremask_via_rpc 00:08:15.679 ************************************ 00:08:15.679 11:38:45 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@868 -- # return 0 00:08:15.679 11:38:45 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@161 -- # check_remaining_locks 00:08:15.679 11:38:45 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@36 -- # locks=(/var/tmp/spdk_cpu_lock_*) 00:08:15.679 11:38:45 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@37 -- # locks_expected=(/var/tmp/spdk_cpu_lock_{000..002}) 00:08:15.679 11:38:45 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@38 -- # [[ /var/tmp/spdk_cpu_lock_000 /var/tmp/spdk_cpu_lock_001 /var/tmp/spdk_cpu_lock_002 == \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\0\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\1\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\2 ]] 00:08:15.679 00:08:15.679 real 0m2.759s 00:08:15.679 user 0m1.483s 00:08:15.679 sys 0m0.213s 00:08:15.679 11:38:45 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@1130 -- # xtrace_disable 00:08:15.679 11:38:45 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:08:15.679 11:38:45 event.cpu_locks -- event/cpu_locks.sh@174 -- # cleanup 00:08:15.679 11:38:45 event.cpu_locks -- event/cpu_locks.sh@15 -- # [[ -z 73135 ]] 00:08:15.679 11:38:45 event.cpu_locks -- event/cpu_locks.sh@15 -- # killprocess 73135 00:08:15.679 11:38:45 event.cpu_locks -- common/autotest_common.sh@954 -- # '[' -z 73135 ']' 00:08:15.679 11:38:45 event.cpu_locks -- common/autotest_common.sh@958 -- # kill -0 73135 00:08:15.679 11:38:45 event.cpu_locks -- common/autotest_common.sh@959 -- # uname 00:08:15.679 11:38:45 event.cpu_locks -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:08:15.679 11:38:45 event.cpu_locks -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 73135 00:08:15.679 killing process with pid 73135 00:08:15.679 11:38:45 event.cpu_locks -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:08:15.679 11:38:45 event.cpu_locks -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:08:15.679 11:38:45 event.cpu_locks -- common/autotest_common.sh@972 -- # echo 'killing process with pid 73135' 00:08:15.679 11:38:45 event.cpu_locks -- common/autotest_common.sh@973 -- # kill 73135 00:08:15.679 11:38:45 event.cpu_locks -- common/autotest_common.sh@978 -- # wait 73135 00:08:15.939 11:38:46 event.cpu_locks -- event/cpu_locks.sh@16 -- # [[ -z 73153 ]] 00:08:15.939 11:38:46 event.cpu_locks -- event/cpu_locks.sh@16 -- # killprocess 73153 00:08:15.939 11:38:46 event.cpu_locks -- common/autotest_common.sh@954 -- # '[' -z 73153 ']' 00:08:15.939 11:38:46 event.cpu_locks -- common/autotest_common.sh@958 -- # kill -0 73153 00:08:15.939 11:38:46 event.cpu_locks -- common/autotest_common.sh@959 -- # uname 00:08:15.939 11:38:46 event.cpu_locks -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:08:15.939 
11:38:46 event.cpu_locks -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 73153 00:08:15.939 killing process with pid 73153 00:08:15.940 11:38:46 event.cpu_locks -- common/autotest_common.sh@960 -- # process_name=reactor_2 00:08:15.940 11:38:46 event.cpu_locks -- common/autotest_common.sh@964 -- # '[' reactor_2 = sudo ']' 00:08:15.940 11:38:46 event.cpu_locks -- common/autotest_common.sh@972 -- # echo 'killing process with pid 73153' 00:08:15.940 11:38:46 event.cpu_locks -- common/autotest_common.sh@973 -- # kill 73153 00:08:15.940 11:38:46 event.cpu_locks -- common/autotest_common.sh@978 -- # wait 73153 00:08:16.508 11:38:46 event.cpu_locks -- event/cpu_locks.sh@18 -- # rm -f 00:08:16.508 Process with pid 73135 is not found 00:08:16.508 Process with pid 73153 is not found 00:08:16.508 11:38:46 event.cpu_locks -- event/cpu_locks.sh@1 -- # cleanup 00:08:16.508 11:38:46 event.cpu_locks -- event/cpu_locks.sh@15 -- # [[ -z 73135 ]] 00:08:16.508 11:38:46 event.cpu_locks -- event/cpu_locks.sh@15 -- # killprocess 73135 00:08:16.508 11:38:46 event.cpu_locks -- common/autotest_common.sh@954 -- # '[' -z 73135 ']' 00:08:16.508 11:38:46 event.cpu_locks -- common/autotest_common.sh@958 -- # kill -0 73135 00:08:16.508 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 958: kill: (73135) - No such process 00:08:16.508 11:38:46 event.cpu_locks -- common/autotest_common.sh@981 -- # echo 'Process with pid 73135 is not found' 00:08:16.508 11:38:46 event.cpu_locks -- event/cpu_locks.sh@16 -- # [[ -z 73153 ]] 00:08:16.508 11:38:46 event.cpu_locks -- event/cpu_locks.sh@16 -- # killprocess 73153 00:08:16.508 11:38:46 event.cpu_locks -- common/autotest_common.sh@954 -- # '[' -z 73153 ']' 00:08:16.508 11:38:46 event.cpu_locks -- common/autotest_common.sh@958 -- # kill -0 73153 00:08:16.508 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 958: kill: (73153) - No such process 00:08:16.508 11:38:46 event.cpu_locks -- common/autotest_common.sh@981 -- # echo 'Process with pid 73153 is not found' 00:08:16.508 11:38:46 event.cpu_locks -- event/cpu_locks.sh@18 -- # rm -f 00:08:16.508 ************************************ 00:08:16.508 END TEST cpu_locks 00:08:16.508 ************************************ 00:08:16.508 00:08:16.508 real 0m20.530s 00:08:16.508 user 0m36.172s 00:08:16.508 sys 0m5.902s 00:08:16.508 11:38:46 event.cpu_locks -- common/autotest_common.sh@1130 -- # xtrace_disable 00:08:16.508 11:38:46 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:08:16.767 ************************************ 00:08:16.767 END TEST event 00:08:16.767 ************************************ 00:08:16.767 00:08:16.767 real 0m47.169s 00:08:16.767 user 1m30.666s 00:08:16.767 sys 0m9.606s 00:08:16.767 11:38:46 event -- common/autotest_common.sh@1130 -- # xtrace_disable 00:08:16.767 11:38:46 event -- common/autotest_common.sh@10 -- # set +x 00:08:16.767 11:38:46 -- spdk/autotest.sh@169 -- # run_test thread /home/vagrant/spdk_repo/spdk/test/thread/thread.sh 00:08:16.767 11:38:46 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:08:16.767 11:38:46 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:08:16.767 11:38:46 -- common/autotest_common.sh@10 -- # set +x 00:08:16.767 ************************************ 00:08:16.767 START TEST thread 00:08:16.767 ************************************ 00:08:16.767 11:38:46 thread -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/thread/thread.sh 00:08:16.767 * Looking for test storage... 
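The cleanup seen above (both pids already gone, hence the "is not found" messages) appears to boil down to a best-effort kill plus removal of the lock files; a sketch only, since neither the rm -f arguments nor the exact error handling are visible in the trace:

  cleanup() {
      [[ -z $spdk_tgt_pid  ]] || killprocess "$spdk_tgt_pid"  || echo "Process with pid $spdk_tgt_pid is not found"
      [[ -z $spdk_tgt_pid2 ]] || killprocess "$spdk_tgt_pid2" || echo "Process with pid $spdk_tgt_pid2 is not found"
      rm -f /var/tmp/spdk_cpu_lock*     # assumed target of the rm -f in cpu_locks.sh@18
  }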
00:08:16.767 * Found test storage at /home/vagrant/spdk_repo/spdk/test/thread 00:08:16.767 11:38:46 thread -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:08:16.767 11:38:46 thread -- common/autotest_common.sh@1693 -- # lcov --version 00:08:16.767 11:38:46 thread -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:08:16.767 11:38:46 thread -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:08:16.767 11:38:46 thread -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:08:16.767 11:38:46 thread -- scripts/common.sh@333 -- # local ver1 ver1_l 00:08:16.767 11:38:46 thread -- scripts/common.sh@334 -- # local ver2 ver2_l 00:08:16.767 11:38:46 thread -- scripts/common.sh@336 -- # IFS=.-: 00:08:16.767 11:38:46 thread -- scripts/common.sh@336 -- # read -ra ver1 00:08:16.767 11:38:46 thread -- scripts/common.sh@337 -- # IFS=.-: 00:08:16.767 11:38:46 thread -- scripts/common.sh@337 -- # read -ra ver2 00:08:16.767 11:38:46 thread -- scripts/common.sh@338 -- # local 'op=<' 00:08:16.767 11:38:46 thread -- scripts/common.sh@340 -- # ver1_l=2 00:08:16.767 11:38:46 thread -- scripts/common.sh@341 -- # ver2_l=1 00:08:16.767 11:38:46 thread -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:08:16.767 11:38:46 thread -- scripts/common.sh@344 -- # case "$op" in 00:08:16.767 11:38:46 thread -- scripts/common.sh@345 -- # : 1 00:08:16.767 11:38:46 thread -- scripts/common.sh@364 -- # (( v = 0 )) 00:08:16.767 11:38:46 thread -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:08:16.767 11:38:46 thread -- scripts/common.sh@365 -- # decimal 1 00:08:16.767 11:38:46 thread -- scripts/common.sh@353 -- # local d=1 00:08:16.767 11:38:46 thread -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:08:16.767 11:38:46 thread -- scripts/common.sh@355 -- # echo 1 00:08:16.767 11:38:46 thread -- scripts/common.sh@365 -- # ver1[v]=1 00:08:17.026 11:38:46 thread -- scripts/common.sh@366 -- # decimal 2 00:08:17.026 11:38:46 thread -- scripts/common.sh@353 -- # local d=2 00:08:17.026 11:38:46 thread -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:08:17.026 11:38:46 thread -- scripts/common.sh@355 -- # echo 2 00:08:17.026 11:38:46 thread -- scripts/common.sh@366 -- # ver2[v]=2 00:08:17.026 11:38:46 thread -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:08:17.026 11:38:46 thread -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:08:17.026 11:38:46 thread -- scripts/common.sh@368 -- # return 0 00:08:17.026 11:38:46 thread -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:08:17.026 11:38:46 thread -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:08:17.026 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:17.026 --rc genhtml_branch_coverage=1 00:08:17.026 --rc genhtml_function_coverage=1 00:08:17.026 --rc genhtml_legend=1 00:08:17.026 --rc geninfo_all_blocks=1 00:08:17.026 --rc geninfo_unexecuted_blocks=1 00:08:17.026 00:08:17.026 ' 00:08:17.026 11:38:46 thread -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:08:17.026 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:17.026 --rc genhtml_branch_coverage=1 00:08:17.026 --rc genhtml_function_coverage=1 00:08:17.026 --rc genhtml_legend=1 00:08:17.026 --rc geninfo_all_blocks=1 00:08:17.026 --rc geninfo_unexecuted_blocks=1 00:08:17.026 00:08:17.026 ' 00:08:17.026 11:38:46 thread -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:08:17.026 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 
00:08:17.026 --rc genhtml_branch_coverage=1 00:08:17.026 --rc genhtml_function_coverage=1 00:08:17.026 --rc genhtml_legend=1 00:08:17.026 --rc geninfo_all_blocks=1 00:08:17.026 --rc geninfo_unexecuted_blocks=1 00:08:17.026 00:08:17.026 ' 00:08:17.026 11:38:46 thread -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:08:17.026 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:17.026 --rc genhtml_branch_coverage=1 00:08:17.026 --rc genhtml_function_coverage=1 00:08:17.026 --rc genhtml_legend=1 00:08:17.026 --rc geninfo_all_blocks=1 00:08:17.026 --rc geninfo_unexecuted_blocks=1 00:08:17.026 00:08:17.026 ' 00:08:17.026 11:38:46 thread -- thread/thread.sh@11 -- # run_test thread_poller_perf /home/vagrant/spdk_repo/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 1 -t 1 00:08:17.026 11:38:46 thread -- common/autotest_common.sh@1105 -- # '[' 8 -le 1 ']' 00:08:17.027 11:38:46 thread -- common/autotest_common.sh@1111 -- # xtrace_disable 00:08:17.027 11:38:46 thread -- common/autotest_common.sh@10 -- # set +x 00:08:17.027 ************************************ 00:08:17.027 START TEST thread_poller_perf 00:08:17.027 ************************************ 00:08:17.027 11:38:46 thread.thread_poller_perf -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 1 -t 1 00:08:17.027 [2024-11-28 11:38:46.929692] Starting SPDK v25.01-pre git sha1 35cd3e84d / DPDK 24.11.0-rc4 initialization... 00:08:17.027 [2024-11-28 11:38:46.929964] [ DPDK EAL parameters: poller_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid73289 ] 00:08:17.027 [2024-11-28 11:38:47.049228] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.11.0-rc4 is used. There is no support for it in SPDK. Enabled only for validation. 00:08:17.027 [2024-11-28 11:38:47.078672] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:17.027 [2024-11-28 11:38:47.127228] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:08:17.027 Running 1000 pollers for 1 seconds with 1 microseconds period. 
00:08:18.403 [2024-11-28T11:38:48.529Z] ====================================== 00:08:18.403 [2024-11-28T11:38:48.529Z] busy:2211239812 (cyc) 00:08:18.403 [2024-11-28T11:38:48.529Z] total_run_count: 300000 00:08:18.403 [2024-11-28T11:38:48.529Z] tsc_hz: 2200000000 (cyc) 00:08:18.403 [2024-11-28T11:38:48.529Z] ====================================== 00:08:18.403 [2024-11-28T11:38:48.529Z] poller_cost: 7370 (cyc), 3350 (nsec) 00:08:18.403 ************************************ 00:08:18.403 END TEST thread_poller_perf 00:08:18.403 ************************************ 00:08:18.403 00:08:18.403 real 0m1.266s 00:08:18.403 user 0m1.114s 00:08:18.403 sys 0m0.044s 00:08:18.403 11:38:48 thread.thread_poller_perf -- common/autotest_common.sh@1130 -- # xtrace_disable 00:08:18.403 11:38:48 thread.thread_poller_perf -- common/autotest_common.sh@10 -- # set +x 00:08:18.403 11:38:48 thread -- thread/thread.sh@12 -- # run_test thread_poller_perf /home/vagrant/spdk_repo/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 0 -t 1 00:08:18.403 11:38:48 thread -- common/autotest_common.sh@1105 -- # '[' 8 -le 1 ']' 00:08:18.403 11:38:48 thread -- common/autotest_common.sh@1111 -- # xtrace_disable 00:08:18.403 11:38:48 thread -- common/autotest_common.sh@10 -- # set +x 00:08:18.403 ************************************ 00:08:18.403 START TEST thread_poller_perf 00:08:18.404 ************************************ 00:08:18.404 11:38:48 thread.thread_poller_perf -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 0 -t 1 00:08:18.404 [2024-11-28 11:38:48.253092] Starting SPDK v25.01-pre git sha1 35cd3e84d / DPDK 24.11.0-rc4 initialization... 00:08:18.404 [2024-11-28 11:38:48.253389] [ DPDK EAL parameters: poller_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid73324 ] 00:08:18.404 [2024-11-28 11:38:48.371249] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.11.0-rc4 is used. There is no support for it in SPDK. Enabled only for validation. 00:08:18.404 [2024-11-28 11:38:48.398327] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:18.404 [2024-11-28 11:38:48.440516] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:08:18.404 Running 1000 pollers for 1 seconds with 0 microseconds period. 
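The poller_cost line is derived from the other three counters: busy cycles divided by total_run_count, with the nanosecond figure scaled by tsc_hz. Re-deriving it from the first run's values printed above:

busy=2211239812; runs=300000; tsc_hz=2200000000
awk -v b="$busy" -v r="$runs" -v hz="$tsc_hz" \
    'BEGIN { cyc = b / r; printf "poller_cost: %d (cyc), %d (nsec)\n", cyc, cyc * 1e9 / hz }'
# prints: poller_cost: 7370 (cyc), 3350 (nsec) -- matching the report above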
00:08:19.781 [2024-11-28T11:38:49.907Z] ====================================== 00:08:19.781 [2024-11-28T11:38:49.907Z] busy:2202101654 (cyc) 00:08:19.781 [2024-11-28T11:38:49.907Z] total_run_count: 4347000 00:08:19.781 [2024-11-28T11:38:49.907Z] tsc_hz: 2200000000 (cyc) 00:08:19.781 [2024-11-28T11:38:49.907Z] ====================================== 00:08:19.781 [2024-11-28T11:38:49.907Z] poller_cost: 506 (cyc), 230 (nsec) 00:08:19.781 ************************************ 00:08:19.781 END TEST thread_poller_perf 00:08:19.781 ************************************ 00:08:19.781 00:08:19.781 real 0m1.248s 00:08:19.781 user 0m1.096s 00:08:19.781 sys 0m0.044s 00:08:19.781 11:38:49 thread.thread_poller_perf -- common/autotest_common.sh@1130 -- # xtrace_disable 00:08:19.781 11:38:49 thread.thread_poller_perf -- common/autotest_common.sh@10 -- # set +x 00:08:19.781 11:38:49 thread -- thread/thread.sh@17 -- # [[ y != \y ]] 00:08:19.781 ************************************ 00:08:19.781 END TEST thread 00:08:19.781 ************************************ 00:08:19.781 00:08:19.781 real 0m2.820s 00:08:19.781 user 0m2.363s 00:08:19.781 sys 0m0.238s 00:08:19.781 11:38:49 thread -- common/autotest_common.sh@1130 -- # xtrace_disable 00:08:19.781 11:38:49 thread -- common/autotest_common.sh@10 -- # set +x 00:08:19.781 11:38:49 -- spdk/autotest.sh@171 -- # [[ 0 -eq 1 ]] 00:08:19.781 11:38:49 -- spdk/autotest.sh@176 -- # run_test app_cmdline /home/vagrant/spdk_repo/spdk/test/app/cmdline.sh 00:08:19.781 11:38:49 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:08:19.781 11:38:49 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:08:19.781 11:38:49 -- common/autotest_common.sh@10 -- # set +x 00:08:19.782 ************************************ 00:08:19.782 START TEST app_cmdline 00:08:19.782 ************************************ 00:08:19.782 11:38:49 app_cmdline -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/app/cmdline.sh 00:08:19.782 * Looking for test storage... 
00:08:19.782 * Found test storage at /home/vagrant/spdk_repo/spdk/test/app 00:08:19.782 11:38:49 app_cmdline -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:08:19.782 11:38:49 app_cmdline -- common/autotest_common.sh@1693 -- # lcov --version 00:08:19.782 11:38:49 app_cmdline -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:08:19.782 11:38:49 app_cmdline -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:08:19.782 11:38:49 app_cmdline -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:08:19.782 11:38:49 app_cmdline -- scripts/common.sh@333 -- # local ver1 ver1_l 00:08:19.782 11:38:49 app_cmdline -- scripts/common.sh@334 -- # local ver2 ver2_l 00:08:19.782 11:38:49 app_cmdline -- scripts/common.sh@336 -- # IFS=.-: 00:08:19.782 11:38:49 app_cmdline -- scripts/common.sh@336 -- # read -ra ver1 00:08:19.782 11:38:49 app_cmdline -- scripts/common.sh@337 -- # IFS=.-: 00:08:19.782 11:38:49 app_cmdline -- scripts/common.sh@337 -- # read -ra ver2 00:08:19.782 11:38:49 app_cmdline -- scripts/common.sh@338 -- # local 'op=<' 00:08:19.782 11:38:49 app_cmdline -- scripts/common.sh@340 -- # ver1_l=2 00:08:19.782 11:38:49 app_cmdline -- scripts/common.sh@341 -- # ver2_l=1 00:08:19.782 11:38:49 app_cmdline -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:08:19.782 11:38:49 app_cmdline -- scripts/common.sh@344 -- # case "$op" in 00:08:19.782 11:38:49 app_cmdline -- scripts/common.sh@345 -- # : 1 00:08:19.782 11:38:49 app_cmdline -- scripts/common.sh@364 -- # (( v = 0 )) 00:08:19.782 11:38:49 app_cmdline -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:08:19.782 11:38:49 app_cmdline -- scripts/common.sh@365 -- # decimal 1 00:08:19.782 11:38:49 app_cmdline -- scripts/common.sh@353 -- # local d=1 00:08:19.782 11:38:49 app_cmdline -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:08:19.782 11:38:49 app_cmdline -- scripts/common.sh@355 -- # echo 1 00:08:19.782 11:38:49 app_cmdline -- scripts/common.sh@365 -- # ver1[v]=1 00:08:19.782 11:38:49 app_cmdline -- scripts/common.sh@366 -- # decimal 2 00:08:19.782 11:38:49 app_cmdline -- scripts/common.sh@353 -- # local d=2 00:08:19.782 11:38:49 app_cmdline -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:08:19.782 11:38:49 app_cmdline -- scripts/common.sh@355 -- # echo 2 00:08:19.782 11:38:49 app_cmdline -- scripts/common.sh@366 -- # ver2[v]=2 00:08:19.782 11:38:49 app_cmdline -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:08:19.782 11:38:49 app_cmdline -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:08:19.782 11:38:49 app_cmdline -- scripts/common.sh@368 -- # return 0 00:08:19.782 11:38:49 app_cmdline -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:08:19.782 11:38:49 app_cmdline -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:08:19.782 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:19.782 --rc genhtml_branch_coverage=1 00:08:19.782 --rc genhtml_function_coverage=1 00:08:19.782 --rc genhtml_legend=1 00:08:19.782 --rc geninfo_all_blocks=1 00:08:19.782 --rc geninfo_unexecuted_blocks=1 00:08:19.782 00:08:19.782 ' 00:08:19.782 11:38:49 app_cmdline -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:08:19.782 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:19.782 --rc genhtml_branch_coverage=1 00:08:19.782 --rc genhtml_function_coverage=1 00:08:19.782 --rc genhtml_legend=1 00:08:19.782 --rc geninfo_all_blocks=1 00:08:19.782 --rc geninfo_unexecuted_blocks=1 00:08:19.782 
00:08:19.782 ' 00:08:19.782 11:38:49 app_cmdline -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:08:19.782 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:19.782 --rc genhtml_branch_coverage=1 00:08:19.782 --rc genhtml_function_coverage=1 00:08:19.782 --rc genhtml_legend=1 00:08:19.782 --rc geninfo_all_blocks=1 00:08:19.782 --rc geninfo_unexecuted_blocks=1 00:08:19.782 00:08:19.782 ' 00:08:19.782 11:38:49 app_cmdline -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:08:19.782 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:19.782 --rc genhtml_branch_coverage=1 00:08:19.782 --rc genhtml_function_coverage=1 00:08:19.782 --rc genhtml_legend=1 00:08:19.782 --rc geninfo_all_blocks=1 00:08:19.782 --rc geninfo_unexecuted_blocks=1 00:08:19.782 00:08:19.782 ' 00:08:19.782 11:38:49 app_cmdline -- app/cmdline.sh@14 -- # trap 'killprocess $spdk_tgt_pid' EXIT 00:08:19.782 11:38:49 app_cmdline -- app/cmdline.sh@17 -- # spdk_tgt_pid=73402 00:08:19.782 11:38:49 app_cmdline -- app/cmdline.sh@16 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --rpcs-allowed spdk_get_version,rpc_get_methods 00:08:19.782 11:38:49 app_cmdline -- app/cmdline.sh@18 -- # waitforlisten 73402 00:08:19.782 11:38:49 app_cmdline -- common/autotest_common.sh@835 -- # '[' -z 73402 ']' 00:08:19.782 11:38:49 app_cmdline -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:19.782 11:38:49 app_cmdline -- common/autotest_common.sh@840 -- # local max_retries=100 00:08:19.782 11:38:49 app_cmdline -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:08:19.782 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:08:19.782 11:38:49 app_cmdline -- common/autotest_common.sh@844 -- # xtrace_disable 00:08:19.782 11:38:49 app_cmdline -- common/autotest_common.sh@10 -- # set +x 00:08:19.782 [2024-11-28 11:38:49.860539] Starting SPDK v25.01-pre git sha1 35cd3e84d / DPDK 24.11.0-rc4 initialization... 00:08:19.782 [2024-11-28 11:38:49.860903] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid73402 ] 00:08:20.041 [2024-11-28 11:38:49.992057] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.11.0-rc4 is used. There is no support for it in SPDK. Enabled only for validation. 
00:08:20.041 [2024-11-28 11:38:50.020292] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:20.041 [2024-11-28 11:38:50.065858] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:08:20.041 [2024-11-28 11:38:50.135967] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:08:20.356 11:38:50 app_cmdline -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:08:20.356 11:38:50 app_cmdline -- common/autotest_common.sh@868 -- # return 0 00:08:20.356 11:38:50 app_cmdline -- app/cmdline.sh@20 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py spdk_get_version 00:08:20.622 { 00:08:20.622 "version": "SPDK v25.01-pre git sha1 35cd3e84d", 00:08:20.622 "fields": { 00:08:20.622 "major": 25, 00:08:20.622 "minor": 1, 00:08:20.622 "patch": 0, 00:08:20.622 "suffix": "-pre", 00:08:20.622 "commit": "35cd3e84d" 00:08:20.622 } 00:08:20.622 } 00:08:20.622 11:38:50 app_cmdline -- app/cmdline.sh@22 -- # expected_methods=() 00:08:20.622 11:38:50 app_cmdline -- app/cmdline.sh@23 -- # expected_methods+=("rpc_get_methods") 00:08:20.622 11:38:50 app_cmdline -- app/cmdline.sh@24 -- # expected_methods+=("spdk_get_version") 00:08:20.622 11:38:50 app_cmdline -- app/cmdline.sh@26 -- # methods=($(rpc_cmd rpc_get_methods | jq -r ".[]" | sort)) 00:08:20.622 11:38:50 app_cmdline -- app/cmdline.sh@26 -- # jq -r '.[]' 00:08:20.622 11:38:50 app_cmdline -- app/cmdline.sh@26 -- # rpc_cmd rpc_get_methods 00:08:20.622 11:38:50 app_cmdline -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:20.622 11:38:50 app_cmdline -- common/autotest_common.sh@10 -- # set +x 00:08:20.622 11:38:50 app_cmdline -- app/cmdline.sh@26 -- # sort 00:08:20.622 11:38:50 app_cmdline -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:20.622 11:38:50 app_cmdline -- app/cmdline.sh@27 -- # (( 2 == 2 )) 00:08:20.622 11:38:50 app_cmdline -- app/cmdline.sh@28 -- # [[ rpc_get_methods spdk_get_version == \r\p\c\_\g\e\t\_\m\e\t\h\o\d\s\ \s\p\d\k\_\g\e\t\_\v\e\r\s\i\o\n ]] 00:08:20.622 11:38:50 app_cmdline -- app/cmdline.sh@30 -- # NOT /home/vagrant/spdk_repo/spdk/scripts/rpc.py env_dpdk_get_mem_stats 00:08:20.622 11:38:50 app_cmdline -- common/autotest_common.sh@652 -- # local es=0 00:08:20.622 11:38:50 app_cmdline -- common/autotest_common.sh@654 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/scripts/rpc.py env_dpdk_get_mem_stats 00:08:20.622 11:38:50 app_cmdline -- common/autotest_common.sh@640 -- # local arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:08:20.622 11:38:50 app_cmdline -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:08:20.622 11:38:50 app_cmdline -- common/autotest_common.sh@644 -- # type -t /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:08:20.622 11:38:50 app_cmdline -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:08:20.622 11:38:50 app_cmdline -- common/autotest_common.sh@646 -- # type -P /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:08:20.622 11:38:50 app_cmdline -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:08:20.622 11:38:50 app_cmdline -- common/autotest_common.sh@646 -- # arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:08:20.622 11:38:50 app_cmdline -- common/autotest_common.sh@646 -- # [[ -x /home/vagrant/spdk_repo/spdk/scripts/rpc.py ]] 00:08:20.622 11:38:50 app_cmdline -- common/autotest_common.sh@655 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py env_dpdk_get_mem_stats 00:08:20.880 request: 00:08:20.880 { 00:08:20.880 "method": "env_dpdk_get_mem_stats", 00:08:20.880 "req_id": 1 
00:08:20.880 } 00:08:20.880 Got JSON-RPC error response 00:08:20.880 response: 00:08:20.880 { 00:08:20.880 "code": -32601, 00:08:20.880 "message": "Method not found" 00:08:20.880 } 00:08:20.880 11:38:50 app_cmdline -- common/autotest_common.sh@655 -- # es=1 00:08:20.880 11:38:50 app_cmdline -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:08:20.880 11:38:50 app_cmdline -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:08:20.880 11:38:50 app_cmdline -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:08:20.880 11:38:50 app_cmdline -- app/cmdline.sh@1 -- # killprocess 73402 00:08:20.880 11:38:50 app_cmdline -- common/autotest_common.sh@954 -- # '[' -z 73402 ']' 00:08:20.880 11:38:50 app_cmdline -- common/autotest_common.sh@958 -- # kill -0 73402 00:08:20.880 11:38:50 app_cmdline -- common/autotest_common.sh@959 -- # uname 00:08:20.880 11:38:51 app_cmdline -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:08:21.139 11:38:51 app_cmdline -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 73402 00:08:21.139 killing process with pid 73402 00:08:21.139 11:38:51 app_cmdline -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:08:21.139 11:38:51 app_cmdline -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:08:21.139 11:38:51 app_cmdline -- common/autotest_common.sh@972 -- # echo 'killing process with pid 73402' 00:08:21.139 11:38:51 app_cmdline -- common/autotest_common.sh@973 -- # kill 73402 00:08:21.139 11:38:51 app_cmdline -- common/autotest_common.sh@978 -- # wait 73402 00:08:21.398 00:08:21.398 real 0m1.836s 00:08:21.398 user 0m2.247s 00:08:21.398 sys 0m0.491s 00:08:21.398 11:38:51 app_cmdline -- common/autotest_common.sh@1130 -- # xtrace_disable 00:08:21.398 ************************************ 00:08:21.398 END TEST app_cmdline 00:08:21.398 ************************************ 00:08:21.398 11:38:51 app_cmdline -- common/autotest_common.sh@10 -- # set +x 00:08:21.398 11:38:51 -- spdk/autotest.sh@177 -- # run_test version /home/vagrant/spdk_repo/spdk/test/app/version.sh 00:08:21.398 11:38:51 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:08:21.398 11:38:51 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:08:21.398 11:38:51 -- common/autotest_common.sh@10 -- # set +x 00:08:21.398 ************************************ 00:08:21.398 START TEST version 00:08:21.398 ************************************ 00:08:21.398 11:38:51 version -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/app/version.sh 00:08:21.657 * Looking for test storage... 
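For context on the JSON-RPC error above: cmdline.sh starts spdk_tgt with an explicit --rpcs-allowed list, so only the two whitelisted methods are reachable over /var/tmp/spdk.sock and anything else comes back as error -32601. The same interaction, condensed (paths as used in this run):

/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --rpcs-allowed spdk_get_version,rpc_get_methods &
/home/vagrant/spdk_repo/spdk/scripts/rpc.py rpc_get_methods           # allowed: returns exactly the two permitted methods
/home/vagrant/spdk_repo/spdk/scripts/rpc.py spdk_get_version          # allowed: returns the version object shown above
/home/vagrant/spdk_repo/spdk/scripts/rpc.py env_dpdk_get_mem_stats    # not on the list: "Method not found" (-32601)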
00:08:21.657 * Found test storage at /home/vagrant/spdk_repo/spdk/test/app 00:08:21.657 11:38:51 version -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:08:21.657 11:38:51 version -- common/autotest_common.sh@1693 -- # lcov --version 00:08:21.657 11:38:51 version -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:08:21.657 11:38:51 version -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:08:21.657 11:38:51 version -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:08:21.657 11:38:51 version -- scripts/common.sh@333 -- # local ver1 ver1_l 00:08:21.657 11:38:51 version -- scripts/common.sh@334 -- # local ver2 ver2_l 00:08:21.657 11:38:51 version -- scripts/common.sh@336 -- # IFS=.-: 00:08:21.657 11:38:51 version -- scripts/common.sh@336 -- # read -ra ver1 00:08:21.657 11:38:51 version -- scripts/common.sh@337 -- # IFS=.-: 00:08:21.657 11:38:51 version -- scripts/common.sh@337 -- # read -ra ver2 00:08:21.657 11:38:51 version -- scripts/common.sh@338 -- # local 'op=<' 00:08:21.657 11:38:51 version -- scripts/common.sh@340 -- # ver1_l=2 00:08:21.657 11:38:51 version -- scripts/common.sh@341 -- # ver2_l=1 00:08:21.657 11:38:51 version -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:08:21.658 11:38:51 version -- scripts/common.sh@344 -- # case "$op" in 00:08:21.658 11:38:51 version -- scripts/common.sh@345 -- # : 1 00:08:21.658 11:38:51 version -- scripts/common.sh@364 -- # (( v = 0 )) 00:08:21.658 11:38:51 version -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:08:21.658 11:38:51 version -- scripts/common.sh@365 -- # decimal 1 00:08:21.658 11:38:51 version -- scripts/common.sh@353 -- # local d=1 00:08:21.658 11:38:51 version -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:08:21.658 11:38:51 version -- scripts/common.sh@355 -- # echo 1 00:08:21.658 11:38:51 version -- scripts/common.sh@365 -- # ver1[v]=1 00:08:21.658 11:38:51 version -- scripts/common.sh@366 -- # decimal 2 00:08:21.658 11:38:51 version -- scripts/common.sh@353 -- # local d=2 00:08:21.658 11:38:51 version -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:08:21.658 11:38:51 version -- scripts/common.sh@355 -- # echo 2 00:08:21.658 11:38:51 version -- scripts/common.sh@366 -- # ver2[v]=2 00:08:21.658 11:38:51 version -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:08:21.658 11:38:51 version -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:08:21.658 11:38:51 version -- scripts/common.sh@368 -- # return 0 00:08:21.658 11:38:51 version -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:08:21.658 11:38:51 version -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:08:21.658 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:21.658 --rc genhtml_branch_coverage=1 00:08:21.658 --rc genhtml_function_coverage=1 00:08:21.658 --rc genhtml_legend=1 00:08:21.658 --rc geninfo_all_blocks=1 00:08:21.658 --rc geninfo_unexecuted_blocks=1 00:08:21.658 00:08:21.658 ' 00:08:21.658 11:38:51 version -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:08:21.658 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:21.658 --rc genhtml_branch_coverage=1 00:08:21.658 --rc genhtml_function_coverage=1 00:08:21.658 --rc genhtml_legend=1 00:08:21.658 --rc geninfo_all_blocks=1 00:08:21.658 --rc geninfo_unexecuted_blocks=1 00:08:21.658 00:08:21.658 ' 00:08:21.658 11:38:51 version -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:08:21.658 --rc lcov_branch_coverage=1 --rc 
lcov_function_coverage=1 00:08:21.658 --rc genhtml_branch_coverage=1 00:08:21.658 --rc genhtml_function_coverage=1 00:08:21.658 --rc genhtml_legend=1 00:08:21.658 --rc geninfo_all_blocks=1 00:08:21.658 --rc geninfo_unexecuted_blocks=1 00:08:21.658 00:08:21.658 ' 00:08:21.658 11:38:51 version -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:08:21.658 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:21.658 --rc genhtml_branch_coverage=1 00:08:21.658 --rc genhtml_function_coverage=1 00:08:21.658 --rc genhtml_legend=1 00:08:21.658 --rc geninfo_all_blocks=1 00:08:21.658 --rc geninfo_unexecuted_blocks=1 00:08:21.658 00:08:21.658 ' 00:08:21.658 11:38:51 version -- app/version.sh@17 -- # get_header_version major 00:08:21.658 11:38:51 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_MAJOR[[:space:]]+' /home/vagrant/spdk_repo/spdk/include/spdk/version.h 00:08:21.658 11:38:51 version -- app/version.sh@14 -- # cut -f2 00:08:21.658 11:38:51 version -- app/version.sh@14 -- # tr -d '"' 00:08:21.658 11:38:51 version -- app/version.sh@17 -- # major=25 00:08:21.658 11:38:51 version -- app/version.sh@18 -- # get_header_version minor 00:08:21.658 11:38:51 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_MINOR[[:space:]]+' /home/vagrant/spdk_repo/spdk/include/spdk/version.h 00:08:21.658 11:38:51 version -- app/version.sh@14 -- # cut -f2 00:08:21.658 11:38:51 version -- app/version.sh@14 -- # tr -d '"' 00:08:21.658 11:38:51 version -- app/version.sh@18 -- # minor=1 00:08:21.658 11:38:51 version -- app/version.sh@19 -- # get_header_version patch 00:08:21.658 11:38:51 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_PATCH[[:space:]]+' /home/vagrant/spdk_repo/spdk/include/spdk/version.h 00:08:21.658 11:38:51 version -- app/version.sh@14 -- # cut -f2 00:08:21.658 11:38:51 version -- app/version.sh@14 -- # tr -d '"' 00:08:21.658 11:38:51 version -- app/version.sh@19 -- # patch=0 00:08:21.658 11:38:51 version -- app/version.sh@20 -- # get_header_version suffix 00:08:21.658 11:38:51 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_SUFFIX[[:space:]]+' /home/vagrant/spdk_repo/spdk/include/spdk/version.h 00:08:21.658 11:38:51 version -- app/version.sh@14 -- # tr -d '"' 00:08:21.658 11:38:51 version -- app/version.sh@14 -- # cut -f2 00:08:21.658 11:38:51 version -- app/version.sh@20 -- # suffix=-pre 00:08:21.658 11:38:51 version -- app/version.sh@22 -- # version=25.1 00:08:21.658 11:38:51 version -- app/version.sh@25 -- # (( patch != 0 )) 00:08:21.658 11:38:51 version -- app/version.sh@28 -- # version=25.1rc0 00:08:21.658 11:38:51 version -- app/version.sh@30 -- # PYTHONPATH=:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python 00:08:21.658 11:38:51 version -- app/version.sh@30 -- # python3 -c 'import spdk; print(spdk.__version__)' 00:08:21.658 11:38:51 version -- app/version.sh@30 -- # py_version=25.1rc0 00:08:21.658 11:38:51 version -- app/version.sh@31 -- # [[ 25.1rc0 == \2\5\.\1\r\c\0 ]] 00:08:21.658 ************************************ 00:08:21.658 END TEST version 00:08:21.658 ************************************ 00:08:21.658 00:08:21.658 real 0m0.275s 00:08:21.658 user 0m0.175s 00:08:21.658 sys 0m0.135s 00:08:21.658 11:38:51 version -- common/autotest_common.sh@1130 -- # xtrace_disable 00:08:21.658 11:38:51 version -- common/autotest_common.sh@10 -- # set +x 00:08:21.917 11:38:51 -- 
spdk/autotest.sh@179 -- # '[' 0 -eq 1 ']' 00:08:21.917 11:38:51 -- spdk/autotest.sh@188 -- # [[ 0 -eq 1 ]] 00:08:21.917 11:38:51 -- spdk/autotest.sh@194 -- # uname -s 00:08:21.917 11:38:51 -- spdk/autotest.sh@194 -- # [[ Linux == Linux ]] 00:08:21.917 11:38:51 -- spdk/autotest.sh@195 -- # [[ 0 -eq 1 ]] 00:08:21.917 11:38:51 -- spdk/autotest.sh@195 -- # [[ 1 -eq 1 ]] 00:08:21.917 11:38:51 -- spdk/autotest.sh@201 -- # [[ 0 -eq 0 ]] 00:08:21.917 11:38:51 -- spdk/autotest.sh@202 -- # run_test spdk_dd /home/vagrant/spdk_repo/spdk/test/dd/dd.sh 00:08:21.917 11:38:51 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:08:21.917 11:38:51 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:08:21.917 11:38:51 -- common/autotest_common.sh@10 -- # set +x 00:08:21.917 ************************************ 00:08:21.917 START TEST spdk_dd 00:08:21.917 ************************************ 00:08:21.917 11:38:51 spdk_dd -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/dd/dd.sh 00:08:21.917 * Looking for test storage... 00:08:21.917 * Found test storage at /home/vagrant/spdk_repo/spdk/test/dd 00:08:21.917 11:38:51 spdk_dd -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:08:21.917 11:38:51 spdk_dd -- common/autotest_common.sh@1693 -- # lcov --version 00:08:21.917 11:38:51 spdk_dd -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:08:21.917 11:38:51 spdk_dd -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:08:21.917 11:38:51 spdk_dd -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:08:21.917 11:38:51 spdk_dd -- scripts/common.sh@333 -- # local ver1 ver1_l 00:08:21.917 11:38:51 spdk_dd -- scripts/common.sh@334 -- # local ver2 ver2_l 00:08:21.917 11:38:51 spdk_dd -- scripts/common.sh@336 -- # IFS=.-: 00:08:21.917 11:38:51 spdk_dd -- scripts/common.sh@336 -- # read -ra ver1 00:08:21.917 11:38:51 spdk_dd -- scripts/common.sh@337 -- # IFS=.-: 00:08:21.917 11:38:51 spdk_dd -- scripts/common.sh@337 -- # read -ra ver2 00:08:21.917 11:38:51 spdk_dd -- scripts/common.sh@338 -- # local 'op=<' 00:08:21.917 11:38:51 spdk_dd -- scripts/common.sh@340 -- # ver1_l=2 00:08:21.917 11:38:51 spdk_dd -- scripts/common.sh@341 -- # ver2_l=1 00:08:21.917 11:38:51 spdk_dd -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:08:21.917 11:38:51 spdk_dd -- scripts/common.sh@344 -- # case "$op" in 00:08:21.917 11:38:51 spdk_dd -- scripts/common.sh@345 -- # : 1 00:08:21.917 11:38:51 spdk_dd -- scripts/common.sh@364 -- # (( v = 0 )) 00:08:21.917 11:38:51 spdk_dd -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:08:21.917 11:38:51 spdk_dd -- scripts/common.sh@365 -- # decimal 1 00:08:21.917 11:38:51 spdk_dd -- scripts/common.sh@353 -- # local d=1 00:08:21.917 11:38:51 spdk_dd -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:08:21.918 11:38:51 spdk_dd -- scripts/common.sh@355 -- # echo 1 00:08:21.918 11:38:51 spdk_dd -- scripts/common.sh@365 -- # ver1[v]=1 00:08:21.918 11:38:51 spdk_dd -- scripts/common.sh@366 -- # decimal 2 00:08:21.918 11:38:51 spdk_dd -- scripts/common.sh@353 -- # local d=2 00:08:21.918 11:38:51 spdk_dd -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:08:21.918 11:38:51 spdk_dd -- scripts/common.sh@355 -- # echo 2 00:08:21.918 11:38:51 spdk_dd -- scripts/common.sh@366 -- # ver2[v]=2 00:08:21.918 11:38:51 spdk_dd -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:08:21.918 11:38:51 spdk_dd -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:08:21.918 11:38:51 spdk_dd -- scripts/common.sh@368 -- # return 0 00:08:21.918 11:38:51 spdk_dd -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:08:21.918 11:38:51 spdk_dd -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:08:21.918 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:21.918 --rc genhtml_branch_coverage=1 00:08:21.918 --rc genhtml_function_coverage=1 00:08:21.918 --rc genhtml_legend=1 00:08:21.918 --rc geninfo_all_blocks=1 00:08:21.918 --rc geninfo_unexecuted_blocks=1 00:08:21.918 00:08:21.918 ' 00:08:21.918 11:38:51 spdk_dd -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:08:21.918 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:21.918 --rc genhtml_branch_coverage=1 00:08:21.918 --rc genhtml_function_coverage=1 00:08:21.918 --rc genhtml_legend=1 00:08:21.918 --rc geninfo_all_blocks=1 00:08:21.918 --rc geninfo_unexecuted_blocks=1 00:08:21.918 00:08:21.918 ' 00:08:21.918 11:38:51 spdk_dd -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:08:21.918 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:21.918 --rc genhtml_branch_coverage=1 00:08:21.918 --rc genhtml_function_coverage=1 00:08:21.918 --rc genhtml_legend=1 00:08:21.918 --rc geninfo_all_blocks=1 00:08:21.918 --rc geninfo_unexecuted_blocks=1 00:08:21.918 00:08:21.918 ' 00:08:21.918 11:38:51 spdk_dd -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:08:21.918 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:21.918 --rc genhtml_branch_coverage=1 00:08:21.918 --rc genhtml_function_coverage=1 00:08:21.918 --rc genhtml_legend=1 00:08:21.918 --rc geninfo_all_blocks=1 00:08:21.918 --rc geninfo_unexecuted_blocks=1 00:08:21.918 00:08:21.918 ' 00:08:21.918 11:38:51 spdk_dd -- dd/common.sh@7 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:08:21.918 11:38:51 spdk_dd -- scripts/common.sh@15 -- # shopt -s extglob 00:08:21.918 11:38:52 spdk_dd -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:08:21.918 11:38:52 spdk_dd -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:08:21.918 11:38:52 spdk_dd -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:08:21.918 11:38:52 spdk_dd -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:21.918 11:38:52 spdk_dd -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:21.918 11:38:52 spdk_dd -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:21.918 11:38:52 spdk_dd -- paths/export.sh@5 -- # export PATH 00:08:21.918 11:38:52 spdk_dd -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:21.918 11:38:52 spdk_dd -- dd/dd.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:08:22.488 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:08:22.488 0000:00:11.0 (1b36 0010): Already using the uio_pci_generic driver 00:08:22.488 0000:00:10.0 (1b36 0010): Already using the uio_pci_generic driver 00:08:22.488 11:38:52 spdk_dd -- dd/dd.sh@11 -- # nvmes=($(nvme_in_userspace)) 00:08:22.488 11:38:52 spdk_dd -- dd/dd.sh@11 -- # nvme_in_userspace 00:08:22.488 11:38:52 spdk_dd -- scripts/common.sh@312 -- # local bdf bdfs 00:08:22.488 11:38:52 spdk_dd -- scripts/common.sh@313 -- # local nvmes 00:08:22.488 11:38:52 spdk_dd -- scripts/common.sh@315 -- # [[ -n '' ]] 00:08:22.488 11:38:52 spdk_dd -- scripts/common.sh@318 -- # nvmes=($(iter_pci_class_code 01 08 02)) 00:08:22.488 11:38:52 spdk_dd -- scripts/common.sh@318 -- # iter_pci_class_code 01 08 02 00:08:22.488 11:38:52 spdk_dd -- scripts/common.sh@298 -- # local bdf= 00:08:22.488 11:38:52 spdk_dd -- scripts/common.sh@300 -- # iter_all_pci_class_code 01 08 02 00:08:22.488 11:38:52 spdk_dd -- scripts/common.sh@233 -- # local class 00:08:22.488 11:38:52 spdk_dd -- scripts/common.sh@234 -- # local subclass 00:08:22.488 11:38:52 spdk_dd -- scripts/common.sh@235 -- # local progif 00:08:22.488 11:38:52 spdk_dd -- scripts/common.sh@236 -- # printf %02x 1 00:08:22.488 11:38:52 spdk_dd -- scripts/common.sh@236 -- # class=01 00:08:22.488 11:38:52 spdk_dd -- scripts/common.sh@237 -- # printf %02x 8 00:08:22.488 11:38:52 spdk_dd -- scripts/common.sh@237 -- # subclass=08 00:08:22.488 11:38:52 spdk_dd -- scripts/common.sh@238 -- # printf %02x 2 00:08:22.488 11:38:52 spdk_dd -- 
scripts/common.sh@238 -- # progif=02 00:08:22.488 11:38:52 spdk_dd -- scripts/common.sh@240 -- # hash lspci 00:08:22.488 11:38:52 spdk_dd -- scripts/common.sh@241 -- # '[' 02 '!=' 00 ']' 00:08:22.488 11:38:52 spdk_dd -- scripts/common.sh@243 -- # grep -i -- -p02 00:08:22.488 11:38:52 spdk_dd -- scripts/common.sh@242 -- # lspci -mm -n -D 00:08:22.488 11:38:52 spdk_dd -- scripts/common.sh@244 -- # awk -v 'cc="0108"' -F ' ' '{if (cc ~ $2) print $1}' 00:08:22.488 11:38:52 spdk_dd -- scripts/common.sh@245 -- # tr -d '"' 00:08:22.488 11:38:52 spdk_dd -- scripts/common.sh@300 -- # for bdf in $(iter_all_pci_class_code "$@") 00:08:22.488 11:38:52 spdk_dd -- scripts/common.sh@301 -- # pci_can_use 0000:00:10.0 00:08:22.488 11:38:52 spdk_dd -- scripts/common.sh@18 -- # local i 00:08:22.488 11:38:52 spdk_dd -- scripts/common.sh@21 -- # [[ =~ 0000:00:10.0 ]] 00:08:22.488 11:38:52 spdk_dd -- scripts/common.sh@25 -- # [[ -z '' ]] 00:08:22.488 11:38:52 spdk_dd -- scripts/common.sh@27 -- # return 0 00:08:22.488 11:38:52 spdk_dd -- scripts/common.sh@302 -- # echo 0000:00:10.0 00:08:22.488 11:38:52 spdk_dd -- scripts/common.sh@300 -- # for bdf in $(iter_all_pci_class_code "$@") 00:08:22.488 11:38:52 spdk_dd -- scripts/common.sh@301 -- # pci_can_use 0000:00:11.0 00:08:22.488 11:38:52 spdk_dd -- scripts/common.sh@18 -- # local i 00:08:22.488 11:38:52 spdk_dd -- scripts/common.sh@21 -- # [[ =~ 0000:00:11.0 ]] 00:08:22.488 11:38:52 spdk_dd -- scripts/common.sh@25 -- # [[ -z '' ]] 00:08:22.488 11:38:52 spdk_dd -- scripts/common.sh@27 -- # return 0 00:08:22.488 11:38:52 spdk_dd -- scripts/common.sh@302 -- # echo 0000:00:11.0 00:08:22.488 11:38:52 spdk_dd -- scripts/common.sh@321 -- # for bdf in "${nvmes[@]}" 00:08:22.488 11:38:52 spdk_dd -- scripts/common.sh@322 -- # [[ -e /sys/bus/pci/drivers/nvme/0000:00:10.0 ]] 00:08:22.488 11:38:52 spdk_dd -- scripts/common.sh@323 -- # uname -s 00:08:22.488 11:38:52 spdk_dd -- scripts/common.sh@323 -- # [[ Linux == FreeBSD ]] 00:08:22.489 11:38:52 spdk_dd -- scripts/common.sh@326 -- # bdfs+=("$bdf") 00:08:22.489 11:38:52 spdk_dd -- scripts/common.sh@321 -- # for bdf in "${nvmes[@]}" 00:08:22.489 11:38:52 spdk_dd -- scripts/common.sh@322 -- # [[ -e /sys/bus/pci/drivers/nvme/0000:00:11.0 ]] 00:08:22.489 11:38:52 spdk_dd -- scripts/common.sh@323 -- # uname -s 00:08:22.489 11:38:52 spdk_dd -- scripts/common.sh@323 -- # [[ Linux == FreeBSD ]] 00:08:22.489 11:38:52 spdk_dd -- scripts/common.sh@326 -- # bdfs+=("$bdf") 00:08:22.489 11:38:52 spdk_dd -- scripts/common.sh@328 -- # (( 2 )) 00:08:22.489 11:38:52 spdk_dd -- scripts/common.sh@329 -- # printf '%s\n' 0000:00:10.0 0000:00:11.0 00:08:22.489 11:38:52 spdk_dd -- dd/dd.sh@13 -- # check_liburing 00:08:22.489 11:38:52 spdk_dd -- dd/common.sh@139 -- # local lib 00:08:22.489 11:38:52 spdk_dd -- dd/common.sh@140 -- # local -g liburing_in_use=0 00:08:22.489 11:38:52 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:08:22.489 11:38:52 spdk_dd -- dd/common.sh@137 -- # objdump -p /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:08:22.489 11:38:52 spdk_dd -- dd/common.sh@137 -- # grep NEEDED 00:08:22.489 11:38:52 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_bdev_malloc.so.6.0 == liburing.so.* ]] 00:08:22.489 11:38:52 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:08:22.489 11:38:52 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_bdev_null.so.6.0 == liburing.so.* ]] 00:08:22.489 11:38:52 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:08:22.489 11:38:52 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_bdev_nvme.so.7.1 == liburing.so.* ]] 
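The nvme_in_userspace walk above is a filter over lspci for class 01 / subclass 08 / progif 02 (NVMe); the traced pipeline, joined into a single line, is roughly:

lspci -mm -n -D | grep -i -- -p02 | awk -v 'cc="0108"' -F ' ' '{if (cc ~ $2) print $1}' | tr -d '"'
# on this host it yields the two controllers that spdk_dd is run against:
# 0000:00:10.0
# 0000:00:11.0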
00:08:22.489 11:38:52 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:08:22.489 11:38:52 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_bdev_passthru.so.6.0 == liburing.so.* ]] 00:08:22.489 11:38:52 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:08:22.489 11:38:52 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_bdev_lvol.so.6.0 == liburing.so.* ]] 00:08:22.489 11:38:52 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:08:22.489 11:38:52 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_bdev_raid.so.6.0 == liburing.so.* ]] 00:08:22.489 11:38:52 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:08:22.489 11:38:52 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_bdev_error.so.6.0 == liburing.so.* ]] 00:08:22.489 11:38:52 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:08:22.489 11:38:52 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_bdev_gpt.so.6.0 == liburing.so.* ]] 00:08:22.489 11:38:52 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:08:22.489 11:38:52 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_bdev_split.so.6.0 == liburing.so.* ]] 00:08:22.489 11:38:52 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:08:22.489 11:38:52 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_bdev_delay.so.6.0 == liburing.so.* ]] 00:08:22.489 11:38:52 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:08:22.489 11:38:52 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_bdev_zone_block.so.6.0 == liburing.so.* ]] 00:08:22.489 11:38:52 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:08:22.489 11:38:52 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_blobfs_bdev.so.6.0 == liburing.so.* ]] 00:08:22.489 11:38:52 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:08:22.489 11:38:52 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_blobfs.so.11.0 == liburing.so.* ]] 00:08:22.489 11:38:52 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:08:22.489 11:38:52 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_blob_bdev.so.12.0 == liburing.so.* ]] 00:08:22.489 11:38:52 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:08:22.489 11:38:52 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_lvol.so.11.0 == liburing.so.* ]] 00:08:22.489 11:38:52 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:08:22.489 11:38:52 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_blob.so.12.0 == liburing.so.* ]] 00:08:22.489 11:38:52 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:08:22.489 11:38:52 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_nvme.so.15.0 == liburing.so.* ]] 00:08:22.489 11:38:52 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:08:22.489 11:38:52 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_rdma_provider.so.7.0 == liburing.so.* ]] 00:08:22.489 11:38:52 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:08:22.489 11:38:52 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_rdma_utils.so.1.0 == liburing.so.* ]] 00:08:22.489 11:38:52 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:08:22.489 11:38:52 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_bdev_aio.so.6.0 == liburing.so.* ]] 00:08:22.489 11:38:52 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:08:22.489 11:38:52 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_bdev_ftl.so.6.0 == liburing.so.* ]] 00:08:22.489 11:38:52 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:08:22.489 11:38:52 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_ftl.so.9.0 == liburing.so.* ]] 00:08:22.489 11:38:52 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:08:22.489 11:38:52 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_bdev_virtio.so.6.0 == liburing.so.* ]] 00:08:22.489 11:38:52 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 
00:08:22.489 11:38:52 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_virtio.so.7.0 == liburing.so.* ]] 00:08:22.489 11:38:52 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:08:22.489 11:38:52 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_vfio_user.so.5.0 == liburing.so.* ]] 00:08:22.489 11:38:52 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:08:22.489 11:38:52 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_bdev_iscsi.so.6.0 == liburing.so.* ]] 00:08:22.489 11:38:52 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:08:22.489 11:38:52 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_bdev_uring.so.6.0 == liburing.so.* ]] 00:08:22.489 11:38:52 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:08:22.489 11:38:52 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_accel_error.so.2.0 == liburing.so.* ]] 00:08:22.489 11:38:52 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:08:22.489 11:38:52 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_accel_ioat.so.6.0 == liburing.so.* ]] 00:08:22.489 11:38:52 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:08:22.489 11:38:52 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_ioat.so.7.0 == liburing.so.* ]] 00:08:22.489 11:38:52 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:08:22.489 11:38:52 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_accel_dsa.so.5.0 == liburing.so.* ]] 00:08:22.489 11:38:52 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:08:22.489 11:38:52 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_accel_iaa.so.3.0 == liburing.so.* ]] 00:08:22.489 11:38:52 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:08:22.489 11:38:52 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_idxd.so.12.1 == liburing.so.* ]] 00:08:22.489 11:38:52 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:08:22.489 11:38:52 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_scheduler_dynamic.so.4.0 == liburing.so.* ]] 00:08:22.489 11:38:52 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:08:22.489 11:38:52 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_env_dpdk.so.15.1 == liburing.so.* ]] 00:08:22.489 11:38:52 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:08:22.489 11:38:52 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_scheduler_dpdk_governor.so.4.0 == liburing.so.* ]] 00:08:22.489 11:38:52 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:08:22.489 11:38:52 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_scheduler_gscheduler.so.4.0 == liburing.so.* ]] 00:08:22.489 11:38:52 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:08:22.489 11:38:52 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_sock_posix.so.6.0 == liburing.so.* ]] 00:08:22.489 11:38:52 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:08:22.489 11:38:52 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_sock_uring.so.5.0 == liburing.so.* ]] 00:08:22.489 11:38:52 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:08:22.489 11:38:52 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_keyring_file.so.2.0 == liburing.so.* ]] 00:08:22.489 11:38:52 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:08:22.489 11:38:52 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_keyring_linux.so.1.0 == liburing.so.* ]] 00:08:22.489 11:38:52 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:08:22.489 11:38:52 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_fsdev_aio.so.1.0 == liburing.so.* ]] 00:08:22.489 11:38:52 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:08:22.489 11:38:52 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_fsdev.so.2.0 == liburing.so.* ]] 00:08:22.489 11:38:52 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:08:22.489 11:38:52 spdk_dd -- 
dd/common.sh@143 -- # [[ libspdk_event.so.14.0 == liburing.so.* ]] 00:08:22.489 11:38:52 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:08:22.489 11:38:52 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_event_bdev.so.6.0 == liburing.so.* ]] 00:08:22.489 11:38:52 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:08:22.489 11:38:52 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_bdev.so.17.0 == liburing.so.* ]] 00:08:22.489 11:38:52 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:08:22.489 11:38:52 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_notify.so.6.0 == liburing.so.* ]] 00:08:22.489 11:38:52 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:08:22.489 11:38:52 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_event_accel.so.6.0 == liburing.so.* ]] 00:08:22.489 11:38:52 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:08:22.489 11:38:52 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_accel.so.16.0 == liburing.so.* ]] 00:08:22.489 11:38:52 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:08:22.489 11:38:52 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_dma.so.5.0 == liburing.so.* ]] 00:08:22.489 11:38:52 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:08:22.489 11:38:52 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_event_vmd.so.6.0 == liburing.so.* ]] 00:08:22.489 11:38:52 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:08:22.489 11:38:52 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_vmd.so.6.0 == liburing.so.* ]] 00:08:22.489 11:38:52 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:08:22.489 11:38:52 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_event_sock.so.5.0 == liburing.so.* ]] 00:08:22.489 11:38:52 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:08:22.489 11:38:52 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_sock.so.10.0 == liburing.so.* ]] 00:08:22.489 11:38:52 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:08:22.489 11:38:52 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_event_iobuf.so.3.0 == liburing.so.* ]] 00:08:22.489 11:38:52 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:08:22.489 11:38:52 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_event_keyring.so.1.0 == liburing.so.* ]] 00:08:22.489 11:38:52 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:08:22.489 11:38:52 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_init.so.6.0 == liburing.so.* ]] 00:08:22.489 11:38:52 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:08:22.489 11:38:52 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_thread.so.11.0 == liburing.so.* ]] 00:08:22.489 11:38:52 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:08:22.489 11:38:52 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_trace.so.11.0 == liburing.so.* ]] 00:08:22.489 11:38:52 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:08:22.489 11:38:52 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_keyring.so.2.0 == liburing.so.* ]] 00:08:22.489 11:38:52 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:08:22.489 11:38:52 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_rpc.so.6.0 == liburing.so.* ]] 00:08:22.489 11:38:52 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:08:22.489 11:38:52 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_jsonrpc.so.6.0 == liburing.so.* ]] 00:08:22.489 11:38:52 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:08:22.489 11:38:52 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_json.so.6.0 == liburing.so.* ]] 00:08:22.489 11:38:52 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:08:22.489 11:38:52 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_util.so.10.1 == liburing.so.* ]] 00:08:22.489 11:38:52 spdk_dd -- dd/common.sh@142 -- 
# read -r _ lib _ 00:08:22.490 11:38:52 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_log.so.7.1 == liburing.so.* ]] 00:08:22.490 11:38:52 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:08:22.490 11:38:52 spdk_dd -- dd/common.sh@143 -- # [[ librte_bus_pci.so.25 == liburing.so.* ]] 00:08:22.490 11:38:52 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:08:22.490 11:38:52 spdk_dd -- dd/common.sh@143 -- # [[ librte_cryptodev.so.25 == liburing.so.* ]] 00:08:22.490 11:38:52 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:08:22.490 11:38:52 spdk_dd -- dd/common.sh@143 -- # [[ librte_dmadev.so.25 == liburing.so.* ]] 00:08:22.490 11:38:52 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:08:22.490 11:38:52 spdk_dd -- dd/common.sh@143 -- # [[ librte_eal.so.25 == liburing.so.* ]] 00:08:22.490 11:38:52 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:08:22.490 11:38:52 spdk_dd -- dd/common.sh@143 -- # [[ librte_ethdev.so.25 == liburing.so.* ]] 00:08:22.490 11:38:52 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:08:22.490 11:38:52 spdk_dd -- dd/common.sh@143 -- # [[ librte_hash.so.25 == liburing.so.* ]] 00:08:22.490 11:38:52 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:08:22.490 11:38:52 spdk_dd -- dd/common.sh@143 -- # [[ librte_kvargs.so.25 == liburing.so.* ]] 00:08:22.490 11:38:52 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:08:22.490 11:38:52 spdk_dd -- dd/common.sh@143 -- # [[ librte_log.so.25 == liburing.so.* ]] 00:08:22.490 11:38:52 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:08:22.490 11:38:52 spdk_dd -- dd/common.sh@143 -- # [[ librte_mbuf.so.25 == liburing.so.* ]] 00:08:22.490 11:38:52 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:08:22.490 11:38:52 spdk_dd -- dd/common.sh@143 -- # [[ librte_mempool.so.25 == liburing.so.* ]] 00:08:22.490 11:38:52 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:08:22.490 11:38:52 spdk_dd -- dd/common.sh@143 -- # [[ librte_mempool_ring.so.25 == liburing.so.* ]] 00:08:22.490 11:38:52 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:08:22.490 11:38:52 spdk_dd -- dd/common.sh@143 -- # [[ librte_net.so.25 == liburing.so.* ]] 00:08:22.490 11:38:52 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:08:22.490 11:38:52 spdk_dd -- dd/common.sh@143 -- # [[ librte_pci.so.25 == liburing.so.* ]] 00:08:22.490 11:38:52 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:08:22.490 11:38:52 spdk_dd -- dd/common.sh@143 -- # [[ librte_power.so.25 == liburing.so.* ]] 00:08:22.490 11:38:52 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:08:22.490 11:38:52 spdk_dd -- dd/common.sh@143 -- # [[ librte_power_acpi.so.25 == liburing.so.* ]] 00:08:22.490 11:38:52 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:08:22.490 11:38:52 spdk_dd -- dd/common.sh@143 -- # [[ librte_power_amd_pstate.so.25 == liburing.so.* ]] 00:08:22.490 11:38:52 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:08:22.490 11:38:52 spdk_dd -- dd/common.sh@143 -- # [[ librte_power_cppc.so.25 == liburing.so.* ]] 00:08:22.490 11:38:52 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:08:22.490 11:38:52 spdk_dd -- dd/common.sh@143 -- # [[ librte_power_intel_pstate.so.25 == liburing.so.* ]] 00:08:22.490 11:38:52 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:08:22.490 11:38:52 spdk_dd -- dd/common.sh@143 -- # [[ librte_power_intel_uncore.so.25 == liburing.so.* ]] 00:08:22.490 11:38:52 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:08:22.490 11:38:52 spdk_dd -- dd/common.sh@143 -- # [[ librte_power_kvm_vm.so.25 == liburing.so.* ]] 00:08:22.490 
11:38:52 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:08:22.490 11:38:52 spdk_dd -- dd/common.sh@143 -- # [[ librte_rcu.so.25 == liburing.so.* ]] 00:08:22.490 11:38:52 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:08:22.490 11:38:52 spdk_dd -- dd/common.sh@143 -- # [[ librte_ring.so.25 == liburing.so.* ]] 00:08:22.490 11:38:52 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:08:22.490 11:38:52 spdk_dd -- dd/common.sh@143 -- # [[ librte_telemetry.so.25 == liburing.so.* ]] 00:08:22.490 11:38:52 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:08:22.490 11:38:52 spdk_dd -- dd/common.sh@143 -- # [[ librte_vhost.so.25 == liburing.so.* ]] 00:08:22.490 11:38:52 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:08:22.490 11:38:52 spdk_dd -- dd/common.sh@143 -- # [[ liburing.so.2 == liburing.so.* ]] 00:08:22.490 11:38:52 spdk_dd -- dd/common.sh@144 -- # printf '* spdk_dd linked to liburing\n' 00:08:22.490 * spdk_dd linked to liburing 00:08:22.490 11:38:52 spdk_dd -- dd/common.sh@146 -- # [[ -e /home/vagrant/spdk_repo/spdk/test/common/build_config.sh ]] 00:08:22.490 11:38:52 spdk_dd -- dd/common.sh@147 -- # source /home/vagrant/spdk_repo/spdk/test/common/build_config.sh 00:08:22.490 11:38:52 spdk_dd -- common/build_config.sh@1 -- # CONFIG_WPDK_DIR= 00:08:22.490 11:38:52 spdk_dd -- common/build_config.sh@2 -- # CONFIG_ASAN=n 00:08:22.490 11:38:52 spdk_dd -- common/build_config.sh@3 -- # CONFIG_VBDEV_COMPRESS=n 00:08:22.490 11:38:52 spdk_dd -- common/build_config.sh@4 -- # CONFIG_HAVE_EXECINFO_H=y 00:08:22.490 11:38:52 spdk_dd -- common/build_config.sh@5 -- # CONFIG_USDT=y 00:08:22.490 11:38:52 spdk_dd -- common/build_config.sh@6 -- # CONFIG_CUSTOMOCF=n 00:08:22.490 11:38:52 spdk_dd -- common/build_config.sh@7 -- # CONFIG_PREFIX=/usr/local 00:08:22.490 11:38:52 spdk_dd -- common/build_config.sh@8 -- # CONFIG_RBD=n 00:08:22.490 11:38:52 spdk_dd -- common/build_config.sh@9 -- # CONFIG_LIBDIR= 00:08:22.490 11:38:52 spdk_dd -- common/build_config.sh@10 -- # CONFIG_IDXD=y 00:08:22.490 11:38:52 spdk_dd -- common/build_config.sh@11 -- # CONFIG_NVME_CUSE=y 00:08:22.490 11:38:52 spdk_dd -- common/build_config.sh@12 -- # CONFIG_SMA=n 00:08:22.490 11:38:52 spdk_dd -- common/build_config.sh@13 -- # CONFIG_VTUNE=n 00:08:22.490 11:38:52 spdk_dd -- common/build_config.sh@14 -- # CONFIG_TSAN=n 00:08:22.490 11:38:52 spdk_dd -- common/build_config.sh@15 -- # CONFIG_RDMA_SEND_WITH_INVAL=y 00:08:22.490 11:38:52 spdk_dd -- common/build_config.sh@16 -- # CONFIG_VFIO_USER_DIR= 00:08:22.490 11:38:52 spdk_dd -- common/build_config.sh@17 -- # CONFIG_MAX_NUMA_NODES=1 00:08:22.490 11:38:52 spdk_dd -- common/build_config.sh@18 -- # CONFIG_PGO_CAPTURE=n 00:08:22.490 11:38:52 spdk_dd -- common/build_config.sh@19 -- # CONFIG_HAVE_UUID_GENERATE_SHA1=y 00:08:22.490 11:38:52 spdk_dd -- common/build_config.sh@20 -- # CONFIG_ENV=/home/vagrant/spdk_repo/spdk/lib/env_dpdk 00:08:22.490 11:38:52 spdk_dd -- common/build_config.sh@21 -- # CONFIG_LTO=n 00:08:22.490 11:38:52 spdk_dd -- common/build_config.sh@22 -- # CONFIG_ISCSI_INITIATOR=y 00:08:22.490 11:38:52 spdk_dd -- common/build_config.sh@23 -- # CONFIG_CET=n 00:08:22.490 11:38:52 spdk_dd -- common/build_config.sh@24 -- # CONFIG_VBDEV_COMPRESS_MLX5=n 00:08:22.490 11:38:52 spdk_dd -- common/build_config.sh@25 -- # CONFIG_OCF_PATH= 00:08:22.490 11:38:52 spdk_dd -- common/build_config.sh@26 -- # CONFIG_RDMA_SET_TOS=y 00:08:22.490 11:38:52 spdk_dd -- common/build_config.sh@27 -- # CONFIG_AIO_FSDEV=y 00:08:22.490 11:38:52 spdk_dd -- common/build_config.sh@28 -- # 
CONFIG_HAVE_ARC4RANDOM=y 00:08:22.490 11:38:52 spdk_dd -- common/build_config.sh@29 -- # CONFIG_HAVE_LIBARCHIVE=n 00:08:22.490 11:38:52 spdk_dd -- common/build_config.sh@30 -- # CONFIG_UBLK=y 00:08:22.490 11:38:52 spdk_dd -- common/build_config.sh@31 -- # CONFIG_ISAL_CRYPTO=y 00:08:22.490 11:38:52 spdk_dd -- common/build_config.sh@32 -- # CONFIG_OPENSSL_PATH= 00:08:22.490 11:38:52 spdk_dd -- common/build_config.sh@33 -- # CONFIG_OCF=n 00:08:22.490 11:38:52 spdk_dd -- common/build_config.sh@34 -- # CONFIG_FUSE=n 00:08:22.490 11:38:52 spdk_dd -- common/build_config.sh@35 -- # CONFIG_VTUNE_DIR= 00:08:22.490 11:38:52 spdk_dd -- common/build_config.sh@36 -- # CONFIG_FUZZER_LIB= 00:08:22.490 11:38:52 spdk_dd -- common/build_config.sh@37 -- # CONFIG_FUZZER=n 00:08:22.490 11:38:52 spdk_dd -- common/build_config.sh@38 -- # CONFIG_FSDEV=y 00:08:22.490 11:38:52 spdk_dd -- common/build_config.sh@39 -- # CONFIG_DPDK_DIR=/home/vagrant/spdk_repo/dpdk/build 00:08:22.490 11:38:52 spdk_dd -- common/build_config.sh@40 -- # CONFIG_CRYPTO=n 00:08:22.490 11:38:52 spdk_dd -- common/build_config.sh@41 -- # CONFIG_PGO_USE=n 00:08:22.490 11:38:52 spdk_dd -- common/build_config.sh@42 -- # CONFIG_VHOST=y 00:08:22.490 11:38:52 spdk_dd -- common/build_config.sh@43 -- # CONFIG_DAOS=n 00:08:22.490 11:38:52 spdk_dd -- common/build_config.sh@44 -- # CONFIG_DPDK_INC_DIR=//home/vagrant/spdk_repo/dpdk/build/include 00:08:22.490 11:38:52 spdk_dd -- common/build_config.sh@45 -- # CONFIG_DAOS_DIR= 00:08:22.490 11:38:52 spdk_dd -- common/build_config.sh@46 -- # CONFIG_UNIT_TESTS=n 00:08:22.490 11:38:52 spdk_dd -- common/build_config.sh@47 -- # CONFIG_RDMA_SET_ACK_TIMEOUT=y 00:08:22.490 11:38:52 spdk_dd -- common/build_config.sh@48 -- # CONFIG_VIRTIO=y 00:08:22.490 11:38:52 spdk_dd -- common/build_config.sh@49 -- # CONFIG_DPDK_UADK=n 00:08:22.490 11:38:52 spdk_dd -- common/build_config.sh@50 -- # CONFIG_COVERAGE=y 00:08:22.490 11:38:52 spdk_dd -- common/build_config.sh@51 -- # CONFIG_RDMA=y 00:08:22.490 11:38:52 spdk_dd -- common/build_config.sh@52 -- # CONFIG_HAVE_STRUCT_STAT_ST_ATIM=y 00:08:22.490 11:38:52 spdk_dd -- common/build_config.sh@53 -- # CONFIG_HAVE_LZ4=n 00:08:22.490 11:38:52 spdk_dd -- common/build_config.sh@54 -- # CONFIG_FIO_SOURCE_DIR=/usr/src/fio 00:08:22.490 11:38:52 spdk_dd -- common/build_config.sh@55 -- # CONFIG_URING_PATH= 00:08:22.490 11:38:52 spdk_dd -- common/build_config.sh@56 -- # CONFIG_XNVME=n 00:08:22.490 11:38:52 spdk_dd -- common/build_config.sh@57 -- # CONFIG_VFIO_USER=n 00:08:22.490 11:38:52 spdk_dd -- common/build_config.sh@58 -- # CONFIG_ARCH=native 00:08:22.491 11:38:52 spdk_dd -- common/build_config.sh@59 -- # CONFIG_HAVE_EVP_MAC=y 00:08:22.491 11:38:52 spdk_dd -- common/build_config.sh@60 -- # CONFIG_URING_ZNS=y 00:08:22.491 11:38:52 spdk_dd -- common/build_config.sh@61 -- # CONFIG_WERROR=y 00:08:22.491 11:38:52 spdk_dd -- common/build_config.sh@62 -- # CONFIG_HAVE_LIBBSD=n 00:08:22.491 11:38:52 spdk_dd -- common/build_config.sh@63 -- # CONFIG_UBSAN=y 00:08:22.491 11:38:52 spdk_dd -- common/build_config.sh@64 -- # CONFIG_HAVE_STRUCT_STAT_ST_ATIMESPEC=n 00:08:22.491 11:38:52 spdk_dd -- common/build_config.sh@65 -- # CONFIG_IPSEC_MB_DIR= 00:08:22.491 11:38:52 spdk_dd -- common/build_config.sh@66 -- # CONFIG_GOLANG=n 00:08:22.491 11:38:52 spdk_dd -- common/build_config.sh@67 -- # CONFIG_ISAL=y 00:08:22.491 11:38:52 spdk_dd -- common/build_config.sh@68 -- # CONFIG_IDXD_KERNEL=y 00:08:22.491 11:38:52 spdk_dd -- common/build_config.sh@69 -- # 
CONFIG_DPDK_LIB_DIR=/home/vagrant/spdk_repo/dpdk/build/lib 00:08:22.491 11:38:52 spdk_dd -- common/build_config.sh@70 -- # CONFIG_RDMA_PROV=verbs 00:08:22.491 11:38:52 spdk_dd -- common/build_config.sh@71 -- # CONFIG_APPS=y 00:08:22.491 11:38:52 spdk_dd -- common/build_config.sh@72 -- # CONFIG_SHARED=y 00:08:22.491 11:38:52 spdk_dd -- common/build_config.sh@73 -- # CONFIG_HAVE_KEYUTILS=y 00:08:22.491 11:38:52 spdk_dd -- common/build_config.sh@74 -- # CONFIG_FC_PATH= 00:08:22.491 11:38:52 spdk_dd -- common/build_config.sh@75 -- # CONFIG_DPDK_PKG_CONFIG=n 00:08:22.491 11:38:52 spdk_dd -- common/build_config.sh@76 -- # CONFIG_FC=n 00:08:22.491 11:38:52 spdk_dd -- common/build_config.sh@77 -- # CONFIG_AVAHI=n 00:08:22.491 11:38:52 spdk_dd -- common/build_config.sh@78 -- # CONFIG_FIO_PLUGIN=y 00:08:22.491 11:38:52 spdk_dd -- common/build_config.sh@79 -- # CONFIG_RAID5F=n 00:08:22.491 11:38:52 spdk_dd -- common/build_config.sh@80 -- # CONFIG_EXAMPLES=y 00:08:22.491 11:38:52 spdk_dd -- common/build_config.sh@81 -- # CONFIG_TESTS=y 00:08:22.491 11:38:52 spdk_dd -- common/build_config.sh@82 -- # CONFIG_CRYPTO_MLX5=n 00:08:22.491 11:38:52 spdk_dd -- common/build_config.sh@83 -- # CONFIG_MAX_LCORES=128 00:08:22.491 11:38:52 spdk_dd -- common/build_config.sh@84 -- # CONFIG_IPSEC_MB=n 00:08:22.491 11:38:52 spdk_dd -- common/build_config.sh@85 -- # CONFIG_PGO_DIR= 00:08:22.491 11:38:52 spdk_dd -- common/build_config.sh@86 -- # CONFIG_DEBUG=y 00:08:22.491 11:38:52 spdk_dd -- common/build_config.sh@87 -- # CONFIG_DPDK_COMPRESSDEV=n 00:08:22.491 11:38:52 spdk_dd -- common/build_config.sh@88 -- # CONFIG_CROSS_PREFIX= 00:08:22.491 11:38:52 spdk_dd -- common/build_config.sh@89 -- # CONFIG_COPY_FILE_RANGE=y 00:08:22.491 11:38:52 spdk_dd -- common/build_config.sh@90 -- # CONFIG_URING=y 00:08:22.491 11:38:52 spdk_dd -- dd/common.sh@149 -- # [[ y != y ]] 00:08:22.491 11:38:52 spdk_dd -- dd/common.sh@152 -- # export liburing_in_use=1 00:08:22.491 11:38:52 spdk_dd -- dd/common.sh@152 -- # liburing_in_use=1 00:08:22.491 11:38:52 spdk_dd -- dd/common.sh@153 -- # return 0 00:08:22.491 11:38:52 spdk_dd -- dd/dd.sh@15 -- # (( liburing_in_use == 0 && SPDK_TEST_URING == 1 )) 00:08:22.491 11:38:52 spdk_dd -- dd/dd.sh@20 -- # run_test spdk_dd_basic_rw /home/vagrant/spdk_repo/spdk/test/dd/basic_rw.sh 0000:00:10.0 0000:00:11.0 00:08:22.491 11:38:52 spdk_dd -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:08:22.491 11:38:52 spdk_dd -- common/autotest_common.sh@1111 -- # xtrace_disable 00:08:22.491 11:38:52 spdk_dd -- common/autotest_common.sh@10 -- # set +x 00:08:22.491 ************************************ 00:08:22.491 START TEST spdk_dd_basic_rw 00:08:22.491 ************************************ 00:08:22.491 11:38:52 spdk_dd.spdk_dd_basic_rw -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/dd/basic_rw.sh 0000:00:10.0 0000:00:11.0 00:08:22.751 * Looking for test storage... 
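A minimal sketch of the liburing detection traced above (dd/common.sh lines 142-153), reconstructed from the xtrace rather than copied from the source; the dependency listing it reads from is an assumption, since the trace only shows the `read -r _ lib _` / `[[ $lib == liburing.so.* ]]` pattern with liburing.so.2 matching on the final iteration:

    liburing_in_use=0
    while read -r _ lib _; do
        if [[ $lib == liburing.so.* ]]; then
            printf '* spdk_dd linked to liburing\n'
            liburing_in_use=1
        fi
    done < <(objdump -p /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd | grep NEEDED)  # hypothetical input source

    # dd.sh line 15 then evaluates this guard before run_test spdk_dd_basic_rw;
    # in this run it is false (liburing is linked), so the suite proceeds.
    (( liburing_in_use == 0 && SPDK_TEST_URING == 1 ))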
00:08:22.751 * Found test storage at /home/vagrant/spdk_repo/spdk/test/dd 00:08:22.751 11:38:52 spdk_dd.spdk_dd_basic_rw -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:08:22.751 11:38:52 spdk_dd.spdk_dd_basic_rw -- common/autotest_common.sh@1693 -- # lcov --version 00:08:22.751 11:38:52 spdk_dd.spdk_dd_basic_rw -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:08:22.751 11:38:52 spdk_dd.spdk_dd_basic_rw -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:08:22.751 11:38:52 spdk_dd.spdk_dd_basic_rw -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:08:22.751 11:38:52 spdk_dd.spdk_dd_basic_rw -- scripts/common.sh@333 -- # local ver1 ver1_l 00:08:22.751 11:38:52 spdk_dd.spdk_dd_basic_rw -- scripts/common.sh@334 -- # local ver2 ver2_l 00:08:22.751 11:38:52 spdk_dd.spdk_dd_basic_rw -- scripts/common.sh@336 -- # IFS=.-: 00:08:22.751 11:38:52 spdk_dd.spdk_dd_basic_rw -- scripts/common.sh@336 -- # read -ra ver1 00:08:22.751 11:38:52 spdk_dd.spdk_dd_basic_rw -- scripts/common.sh@337 -- # IFS=.-: 00:08:22.751 11:38:52 spdk_dd.spdk_dd_basic_rw -- scripts/common.sh@337 -- # read -ra ver2 00:08:22.751 11:38:52 spdk_dd.spdk_dd_basic_rw -- scripts/common.sh@338 -- # local 'op=<' 00:08:22.751 11:38:52 spdk_dd.spdk_dd_basic_rw -- scripts/common.sh@340 -- # ver1_l=2 00:08:22.751 11:38:52 spdk_dd.spdk_dd_basic_rw -- scripts/common.sh@341 -- # ver2_l=1 00:08:22.751 11:38:52 spdk_dd.spdk_dd_basic_rw -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:08:22.751 11:38:52 spdk_dd.spdk_dd_basic_rw -- scripts/common.sh@344 -- # case "$op" in 00:08:22.751 11:38:52 spdk_dd.spdk_dd_basic_rw -- scripts/common.sh@345 -- # : 1 00:08:22.751 11:38:52 spdk_dd.spdk_dd_basic_rw -- scripts/common.sh@364 -- # (( v = 0 )) 00:08:22.751 11:38:52 spdk_dd.spdk_dd_basic_rw -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:08:22.751 11:38:52 spdk_dd.spdk_dd_basic_rw -- scripts/common.sh@365 -- # decimal 1 00:08:22.751 11:38:52 spdk_dd.spdk_dd_basic_rw -- scripts/common.sh@353 -- # local d=1 00:08:22.751 11:38:52 spdk_dd.spdk_dd_basic_rw -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:08:22.751 11:38:52 spdk_dd.spdk_dd_basic_rw -- scripts/common.sh@355 -- # echo 1 00:08:22.751 11:38:52 spdk_dd.spdk_dd_basic_rw -- scripts/common.sh@365 -- # ver1[v]=1 00:08:22.751 11:38:52 spdk_dd.spdk_dd_basic_rw -- scripts/common.sh@366 -- # decimal 2 00:08:22.751 11:38:52 spdk_dd.spdk_dd_basic_rw -- scripts/common.sh@353 -- # local d=2 00:08:22.751 11:38:52 spdk_dd.spdk_dd_basic_rw -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:08:22.751 11:38:52 spdk_dd.spdk_dd_basic_rw -- scripts/common.sh@355 -- # echo 2 00:08:22.751 11:38:52 spdk_dd.spdk_dd_basic_rw -- scripts/common.sh@366 -- # ver2[v]=2 00:08:22.751 11:38:52 spdk_dd.spdk_dd_basic_rw -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:08:22.751 11:38:52 spdk_dd.spdk_dd_basic_rw -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:08:22.751 11:38:52 spdk_dd.spdk_dd_basic_rw -- scripts/common.sh@368 -- # return 0 00:08:22.751 11:38:52 spdk_dd.spdk_dd_basic_rw -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:08:22.751 11:38:52 spdk_dd.spdk_dd_basic_rw -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:08:22.751 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:22.751 --rc genhtml_branch_coverage=1 00:08:22.751 --rc genhtml_function_coverage=1 00:08:22.751 --rc genhtml_legend=1 00:08:22.751 --rc geninfo_all_blocks=1 00:08:22.751 --rc geninfo_unexecuted_blocks=1 00:08:22.751 00:08:22.751 ' 00:08:22.751 11:38:52 spdk_dd.spdk_dd_basic_rw -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:08:22.751 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:22.751 --rc genhtml_branch_coverage=1 00:08:22.751 --rc genhtml_function_coverage=1 00:08:22.751 --rc genhtml_legend=1 00:08:22.751 --rc geninfo_all_blocks=1 00:08:22.751 --rc geninfo_unexecuted_blocks=1 00:08:22.751 00:08:22.751 ' 00:08:22.751 11:38:52 spdk_dd.spdk_dd_basic_rw -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:08:22.751 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:22.751 --rc genhtml_branch_coverage=1 00:08:22.751 --rc genhtml_function_coverage=1 00:08:22.751 --rc genhtml_legend=1 00:08:22.751 --rc geninfo_all_blocks=1 00:08:22.751 --rc geninfo_unexecuted_blocks=1 00:08:22.751 00:08:22.751 ' 00:08:22.751 11:38:52 spdk_dd.spdk_dd_basic_rw -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:08:22.751 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:22.751 --rc genhtml_branch_coverage=1 00:08:22.751 --rc genhtml_function_coverage=1 00:08:22.751 --rc genhtml_legend=1 00:08:22.751 --rc geninfo_all_blocks=1 00:08:22.751 --rc geninfo_unexecuted_blocks=1 00:08:22.751 00:08:22.751 ' 00:08:22.751 11:38:52 spdk_dd.spdk_dd_basic_rw -- dd/common.sh@7 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:08:22.751 11:38:52 spdk_dd.spdk_dd_basic_rw -- scripts/common.sh@15 -- # shopt -s extglob 00:08:22.751 11:38:52 spdk_dd.spdk_dd_basic_rw -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:08:22.751 11:38:52 spdk_dd.spdk_dd_basic_rw -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:08:22.751 11:38:52 spdk_dd.spdk_dd_basic_rw -- scripts/common.sh@553 -- # source 
/etc/opt/spdk-pkgdep/paths/export.sh 00:08:22.751 11:38:52 spdk_dd.spdk_dd_basic_rw -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:22.751 11:38:52 spdk_dd.spdk_dd_basic_rw -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:22.751 11:38:52 spdk_dd.spdk_dd_basic_rw -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:22.751 11:38:52 spdk_dd.spdk_dd_basic_rw -- paths/export.sh@5 -- # export PATH 00:08:22.751 11:38:52 spdk_dd.spdk_dd_basic_rw -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:22.752 11:38:52 spdk_dd.spdk_dd_basic_rw -- dd/basic_rw.sh@80 -- # trap cleanup EXIT 00:08:22.752 11:38:52 spdk_dd.spdk_dd_basic_rw -- dd/basic_rw.sh@82 -- # nvmes=("$@") 00:08:22.752 11:38:52 spdk_dd.spdk_dd_basic_rw -- dd/basic_rw.sh@83 -- # nvme0=Nvme0 00:08:22.752 11:38:52 spdk_dd.spdk_dd_basic_rw -- dd/basic_rw.sh@83 -- # nvme0_pci=0000:00:10.0 00:08:22.752 11:38:52 spdk_dd.spdk_dd_basic_rw -- dd/basic_rw.sh@83 -- # bdev0=Nvme0n1 00:08:22.752 11:38:52 spdk_dd.spdk_dd_basic_rw -- dd/basic_rw.sh@85 -- # method_bdev_nvme_attach_controller_0=(['name']='Nvme0' ['traddr']='0000:00:10.0' ['trtype']='pcie') 00:08:22.752 11:38:52 spdk_dd.spdk_dd_basic_rw -- dd/basic_rw.sh@85 -- # declare -A method_bdev_nvme_attach_controller_0 00:08:22.752 11:38:52 spdk_dd.spdk_dd_basic_rw -- dd/basic_rw.sh@91 -- # test_file0=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 00:08:22.752 11:38:52 spdk_dd.spdk_dd_basic_rw -- dd/basic_rw.sh@92 -- # test_file1=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:08:22.752 11:38:52 
spdk_dd.spdk_dd_basic_rw -- dd/basic_rw.sh@93 -- # get_native_nvme_bs 0000:00:10.0 00:08:22.752 11:38:52 spdk_dd.spdk_dd_basic_rw -- dd/common.sh@124 -- # local pci=0000:00:10.0 lbaf id 00:08:22.752 11:38:52 spdk_dd.spdk_dd_basic_rw -- dd/common.sh@126 -- # mapfile -t id 00:08:22.752 11:38:52 spdk_dd.spdk_dd_basic_rw -- dd/common.sh@126 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r 'trtype:pcie traddr:0000:00:10.0' 00:08:23.013 11:38:52 spdk_dd.spdk_dd_basic_rw -- dd/common.sh@129 -- # [[ ===================================================== NVMe Controller at 0000:00:10.0 [1b36:0010] ===================================================== Controller Capabilities/Features ================================ Vendor ID: 1b36 Subsystem Vendor ID: 1af4 Serial Number: 12340 Model Number: QEMU NVMe Ctrl Firmware Version: 8.0.0 Recommended Arb Burst: 6 IEEE OUI Identifier: 00 54 52 Multi-path I/O May have multiple subsystem ports: No May have multiple controllers: No Associated with SR-IOV VF: No Max Data Transfer Size: 524288 Max Number of Namespaces: 256 Max Number of I/O Queues: 64 NVMe Specification Version (VS): 1.4 NVMe Specification Version (Identify): 1.4 Maximum Queue Entries: 2048 Contiguous Queues Required: Yes Arbitration Mechanisms Supported Weighted Round Robin: Not Supported Vendor Specific: Not Supported Reset Timeout: 7500 ms Doorbell Stride: 4 bytes NVM Subsystem Reset: Not Supported Command Sets Supported NVM Command Set: Supported Boot Partition: Not Supported Memory Page Size Minimum: 4096 bytes Memory Page Size Maximum: 65536 bytes Persistent Memory Region: Not Supported Optional Asynchronous Events Supported Namespace Attribute Notices: Supported Firmware Activation Notices: Not Supported ANA Change Notices: Not Supported PLE Aggregate Log Change Notices: Not Supported LBA Status Info Alert Notices: Not Supported EGE Aggregate Log Change Notices: Not Supported Normal NVM Subsystem Shutdown event: Not Supported Zone Descriptor Change Notices: Not Supported Discovery Log Change Notices: Not Supported Controller Attributes 128-bit Host Identifier: Not Supported Non-Operational Permissive Mode: Not Supported NVM Sets: Not Supported Read Recovery Levels: Not Supported Endurance Groups: Not Supported Predictable Latency Mode: Not Supported Traffic Based Keep ALive: Not Supported Namespace Granularity: Not Supported SQ Associations: Not Supported UUID List: Not Supported Multi-Domain Subsystem: Not Supported Fixed Capacity Management: Not Supported Variable Capacity Management: Not Supported Delete Endurance Group: Not Supported Delete NVM Set: Not Supported Extended LBA Formats Supported: Supported Flexible Data Placement Supported: Not Supported Controller Memory Buffer Support ================================ Supported: No Persistent Memory Region Support ================================ Supported: No Admin Command Set Attributes ============================ Security Send/Receive: Not Supported Format NVM: Supported Firmware Activate/Download: Not Supported Namespace Management: Supported Device Self-Test: Not Supported Directives: Supported NVMe-MI: Not Supported Virtualization Management: Not Supported Doorbell Buffer Config: Supported Get LBA Status Capability: Not Supported Command & Feature Lockdown Capability: Not Supported Abort Command Limit: 4 Async Event Request Limit: 4 Number of Firmware Slots: N/A Firmware Slot 1 Read-Only: N/A Firmware Activation Without Reset: N/A Multiple Update Detection Support: N/A Firmware Update Granularity: No Information 
Provided Per-Namespace SMART Log: Yes Asymmetric Namespace Access Log Page: Not Supported Subsystem NQN: nqn.2019-08.org.qemu:12340 Command Effects Log Page: Supported Get Log Page Extended Data: Supported Telemetry Log Pages: Not Supported Persistent Event Log Pages: Not Supported Supported Log Pages Log Page: May Support Commands Supported & Effects Log Page: Not Supported Feature Identifiers & Effects Log Page:May Support NVMe-MI Commands & Effects Log Page: May Support Data Area 4 for Telemetry Log: Not Supported Error Log Page Entries Supported: 1 Keep Alive: Not Supported NVM Command Set Attributes ========================== Submission Queue Entry Size Max: 64 Min: 64 Completion Queue Entry Size Max: 16 Min: 16 Number of Namespaces: 256 Compare Command: Supported Write Uncorrectable Command: Not Supported Dataset Management Command: Supported Write Zeroes Command: Supported Set Features Save Field: Supported Reservations: Not Supported Timestamp: Supported Copy: Supported Volatile Write Cache: Present Atomic Write Unit (Normal): 1 Atomic Write Unit (PFail): 1 Atomic Compare & Write Unit: 1 Fused Compare & Write: Not Supported Scatter-Gather List SGL Command Set: Supported SGL Keyed: Not Supported SGL Bit Bucket Descriptor: Not Supported SGL Metadata Pointer: Not Supported Oversized SGL: Not Supported SGL Metadata Address: Not Supported SGL Offset: Not Supported Transport SGL Data Block: Not Supported Replay Protected Memory Block: Not Supported Firmware Slot Information ========================= Active slot: 1 Slot 1 Firmware Revision: 1.0 Commands Supported and Effects ============================== Admin Commands -------------- Delete I/O Submission Queue (00h): Supported Create I/O Submission Queue (01h): Supported Get Log Page (02h): Supported Delete I/O Completion Queue (04h): Supported Create I/O Completion Queue (05h): Supported Identify (06h): Supported Abort (08h): Supported Set Features (09h): Supported Get Features (0Ah): Supported Asynchronous Event Request (0Ch): Supported Namespace Attachment (15h): Supported NS-Inventory-Change Directive Send (19h): Supported Directive Receive (1Ah): Supported Virtualization Management (1Ch): Supported Doorbell Buffer Config (7Ch): Supported Format NVM (80h): Supported LBA-Change I/O Commands ------------ Flush (00h): Supported LBA-Change Write (01h): Supported LBA-Change Read (02h): Supported Compare (05h): Supported Write Zeroes (08h): Supported LBA-Change Dataset Management (09h): Supported LBA-Change Unknown (0Ch): Supported Unknown (12h): Supported Copy (19h): Supported LBA-Change Unknown (1Dh): Supported LBA-Change Error Log ========= Arbitration =========== Arbitration Burst: no limit Power Management ================ Number of Power States: 1 Current Power State: Power State #0 Power State #0: Max Power: 25.00 W Non-Operational State: Operational Entry Latency: 16 microseconds Exit Latency: 4 microseconds Relative Read Throughput: 0 Relative Read Latency: 0 Relative Write Throughput: 0 Relative Write Latency: 0 Idle Power: Not Reported Active Power: Not Reported Non-Operational Permissive Mode: Not Supported Health Information ================== Critical Warnings: Available Spare Space: OK Temperature: OK Device Reliability: OK Read Only: No Volatile Memory Backup: OK Current Temperature: 323 Kelvin (50 Celsius) Temperature Threshold: 343 Kelvin (70 Celsius) Available Spare: 0% Available Spare Threshold: 0% Life Percentage Used: 0% Data Units Read: 22 Data Units Written: 3 Host Read Commands: 496 Host Write Commands: 2 
Controller Busy Time: 0 minutes Power Cycles: 0 Power On Hours: 0 hours Unsafe Shutdowns: 0 Unrecoverable Media Errors: 0 Lifetime Error Log Entries: 0 Warning Temperature Time: 0 minutes Critical Temperature Time: 0 minutes Number of Queues ================ Number of I/O Submission Queues: 64 Number of I/O Completion Queues: 64 ZNS Specific Controller Data ============================ Zone Append Size Limit: 0 Active Namespaces ================= Namespace ID:1 Error Recovery Timeout: Unlimited Command Set Identifier: NVM (00h) Deallocate: Supported Deallocated/Unwritten Error: Supported Deallocated Read Value: All 0x00 Deallocate in Write Zeroes: Not Supported Deallocated Guard Field: 0xFFFF Flush: Supported Reservation: Not Supported Namespace Sharing Capabilities: Private Size (in LBAs): 1310720 (5GiB) Capacity (in LBAs): 1310720 (5GiB) Utilization (in LBAs): 1310720 (5GiB) Thin Provisioning: Not Supported Per-NS Atomic Units: No Maximum Single Source Range Length: 128 Maximum Copy Length: 128 Maximum Source Range Count: 128 NGUID/EUI64 Never Reused: No Namespace Write Protected: No Number of LBA Formats: 8 Current LBA Format: LBA Format #04 LBA Format #00: Data Size: 512 Metadata Size: 0 LBA Format #01: Data Size: 512 Metadata Size: 8 LBA Format #02: Data Size: 512 Metadata Size: 16 LBA Format #03: Data Size: 512 Metadata Size: 64 LBA Format #04: Data Size: 4096 Metadata Size: 0 LBA Format #05: Data Size: 4096 Metadata Size: 8 LBA Format #06: Data Size: 4096 Metadata Size: 16 LBA Format #07: Data Size: 4096 Metadata Size: 64 NVM Specific Namespace Data =========================== Logical Block Storage Tag Mask: 0 Protection Information Capabilities: 16b Guard Protection Information Storage Tag Support: No 16b Guard Protection Information Storage Tag Mask: Any bit in LBSTM can be 0 Storage Tag Check Read Support: No Extended LBA Format #00: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI Extended LBA Format #01: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI Extended LBA Format #02: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI Extended LBA Format #03: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI Extended LBA Format #04: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI Extended LBA Format #05: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI Extended LBA Format #06: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI Extended LBA Format #07: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI =~ Current LBA Format: *LBA Format #([0-9]+) ]] 00:08:23.013 11:38:52 spdk_dd.spdk_dd_basic_rw -- dd/common.sh@130 -- # lbaf=04 00:08:23.013 11:38:52 spdk_dd.spdk_dd_basic_rw -- dd/common.sh@131 -- # [[ ===================================================== NVMe Controller at 0000:00:10.0 [1b36:0010] ===================================================== Controller Capabilities/Features ================================ Vendor ID: 1b36 Subsystem Vendor ID: 1af4 Serial Number: 12340 Model Number: QEMU NVMe Ctrl Firmware Version: 8.0.0 Recommended Arb Burst: 6 IEEE OUI Identifier: 00 54 52 Multi-path I/O May have multiple subsystem ports: No May have multiple controllers: No Associated with SR-IOV VF: No Max Data Transfer Size: 524288 Max Number of Namespaces: 256 Max Number of I/O Queues: 64 NVMe Specification Version (VS): 1.4 NVMe Specification Version (Identify): 1.4 Maximum Queue Entries: 2048 Contiguous Queues Required: Yes Arbitration Mechanisms Supported 
Weighted Round Robin: Not Supported Vendor Specific: Not Supported Reset Timeout: 7500 ms Doorbell Stride: 4 bytes NVM Subsystem Reset: Not Supported Command Sets Supported NVM Command Set: Supported Boot Partition: Not Supported Memory Page Size Minimum: 4096 bytes Memory Page Size Maximum: 65536 bytes Persistent Memory Region: Not Supported Optional Asynchronous Events Supported Namespace Attribute Notices: Supported Firmware Activation Notices: Not Supported ANA Change Notices: Not Supported PLE Aggregate Log Change Notices: Not Supported LBA Status Info Alert Notices: Not Supported EGE Aggregate Log Change Notices: Not Supported Normal NVM Subsystem Shutdown event: Not Supported Zone Descriptor Change Notices: Not Supported Discovery Log Change Notices: Not Supported Controller Attributes 128-bit Host Identifier: Not Supported Non-Operational Permissive Mode: Not Supported NVM Sets: Not Supported Read Recovery Levels: Not Supported Endurance Groups: Not Supported Predictable Latency Mode: Not Supported Traffic Based Keep ALive: Not Supported Namespace Granularity: Not Supported SQ Associations: Not Supported UUID List: Not Supported Multi-Domain Subsystem: Not Supported Fixed Capacity Management: Not Supported Variable Capacity Management: Not Supported Delete Endurance Group: Not Supported Delete NVM Set: Not Supported Extended LBA Formats Supported: Supported Flexible Data Placement Supported: Not Supported Controller Memory Buffer Support ================================ Supported: No Persistent Memory Region Support ================================ Supported: No Admin Command Set Attributes ============================ Security Send/Receive: Not Supported Format NVM: Supported Firmware Activate/Download: Not Supported Namespace Management: Supported Device Self-Test: Not Supported Directives: Supported NVMe-MI: Not Supported Virtualization Management: Not Supported Doorbell Buffer Config: Supported Get LBA Status Capability: Not Supported Command & Feature Lockdown Capability: Not Supported Abort Command Limit: 4 Async Event Request Limit: 4 Number of Firmware Slots: N/A Firmware Slot 1 Read-Only: N/A Firmware Activation Without Reset: N/A Multiple Update Detection Support: N/A Firmware Update Granularity: No Information Provided Per-Namespace SMART Log: Yes Asymmetric Namespace Access Log Page: Not Supported Subsystem NQN: nqn.2019-08.org.qemu:12340 Command Effects Log Page: Supported Get Log Page Extended Data: Supported Telemetry Log Pages: Not Supported Persistent Event Log Pages: Not Supported Supported Log Pages Log Page: May Support Commands Supported & Effects Log Page: Not Supported Feature Identifiers & Effects Log Page:May Support NVMe-MI Commands & Effects Log Page: May Support Data Area 4 for Telemetry Log: Not Supported Error Log Page Entries Supported: 1 Keep Alive: Not Supported NVM Command Set Attributes ========================== Submission Queue Entry Size Max: 64 Min: 64 Completion Queue Entry Size Max: 16 Min: 16 Number of Namespaces: 256 Compare Command: Supported Write Uncorrectable Command: Not Supported Dataset Management Command: Supported Write Zeroes Command: Supported Set Features Save Field: Supported Reservations: Not Supported Timestamp: Supported Copy: Supported Volatile Write Cache: Present Atomic Write Unit (Normal): 1 Atomic Write Unit (PFail): 1 Atomic Compare & Write Unit: 1 Fused Compare & Write: Not Supported Scatter-Gather List SGL Command Set: Supported SGL Keyed: Not Supported SGL Bit Bucket Descriptor: Not Supported SGL Metadata Pointer: 
Not Supported Oversized SGL: Not Supported SGL Metadata Address: Not Supported SGL Offset: Not Supported Transport SGL Data Block: Not Supported Replay Protected Memory Block: Not Supported Firmware Slot Information ========================= Active slot: 1 Slot 1 Firmware Revision: 1.0 Commands Supported and Effects ============================== Admin Commands -------------- Delete I/O Submission Queue (00h): Supported Create I/O Submission Queue (01h): Supported Get Log Page (02h): Supported Delete I/O Completion Queue (04h): Supported Create I/O Completion Queue (05h): Supported Identify (06h): Supported Abort (08h): Supported Set Features (09h): Supported Get Features (0Ah): Supported Asynchronous Event Request (0Ch): Supported Namespace Attachment (15h): Supported NS-Inventory-Change Directive Send (19h): Supported Directive Receive (1Ah): Supported Virtualization Management (1Ch): Supported Doorbell Buffer Config (7Ch): Supported Format NVM (80h): Supported LBA-Change I/O Commands ------------ Flush (00h): Supported LBA-Change Write (01h): Supported LBA-Change Read (02h): Supported Compare (05h): Supported Write Zeroes (08h): Supported LBA-Change Dataset Management (09h): Supported LBA-Change Unknown (0Ch): Supported Unknown (12h): Supported Copy (19h): Supported LBA-Change Unknown (1Dh): Supported LBA-Change Error Log ========= Arbitration =========== Arbitration Burst: no limit Power Management ================ Number of Power States: 1 Current Power State: Power State #0 Power State #0: Max Power: 25.00 W Non-Operational State: Operational Entry Latency: 16 microseconds Exit Latency: 4 microseconds Relative Read Throughput: 0 Relative Read Latency: 0 Relative Write Throughput: 0 Relative Write Latency: 0 Idle Power: Not Reported Active Power: Not Reported Non-Operational Permissive Mode: Not Supported Health Information ================== Critical Warnings: Available Spare Space: OK Temperature: OK Device Reliability: OK Read Only: No Volatile Memory Backup: OK Current Temperature: 323 Kelvin (50 Celsius) Temperature Threshold: 343 Kelvin (70 Celsius) Available Spare: 0% Available Spare Threshold: 0% Life Percentage Used: 0% Data Units Read: 22 Data Units Written: 3 Host Read Commands: 496 Host Write Commands: 2 Controller Busy Time: 0 minutes Power Cycles: 0 Power On Hours: 0 hours Unsafe Shutdowns: 0 Unrecoverable Media Errors: 0 Lifetime Error Log Entries: 0 Warning Temperature Time: 0 minutes Critical Temperature Time: 0 minutes Number of Queues ================ Number of I/O Submission Queues: 64 Number of I/O Completion Queues: 64 ZNS Specific Controller Data ============================ Zone Append Size Limit: 0 Active Namespaces ================= Namespace ID:1 Error Recovery Timeout: Unlimited Command Set Identifier: NVM (00h) Deallocate: Supported Deallocated/Unwritten Error: Supported Deallocated Read Value: All 0x00 Deallocate in Write Zeroes: Not Supported Deallocated Guard Field: 0xFFFF Flush: Supported Reservation: Not Supported Namespace Sharing Capabilities: Private Size (in LBAs): 1310720 (5GiB) Capacity (in LBAs): 1310720 (5GiB) Utilization (in LBAs): 1310720 (5GiB) Thin Provisioning: Not Supported Per-NS Atomic Units: No Maximum Single Source Range Length: 128 Maximum Copy Length: 128 Maximum Source Range Count: 128 NGUID/EUI64 Never Reused: No Namespace Write Protected: No Number of LBA Formats: 8 Current LBA Format: LBA Format #04 LBA Format #00: Data Size: 512 Metadata Size: 0 LBA Format #01: Data Size: 512 Metadata Size: 8 LBA Format #02: Data Size: 512 
Metadata Size: 16 LBA Format #03: Data Size: 512 Metadata Size: 64 LBA Format #04: Data Size: 4096 Metadata Size: 0 LBA Format #05: Data Size: 4096 Metadata Size: 8 LBA Format #06: Data Size: 4096 Metadata Size: 16 LBA Format #07: Data Size: 4096 Metadata Size: 64 NVM Specific Namespace Data =========================== Logical Block Storage Tag Mask: 0 Protection Information Capabilities: 16b Guard Protection Information Storage Tag Support: No 16b Guard Protection Information Storage Tag Mask: Any bit in LBSTM can be 0 Storage Tag Check Read Support: No Extended LBA Format #00: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI Extended LBA Format #01: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI Extended LBA Format #02: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI Extended LBA Format #03: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI Extended LBA Format #04: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI Extended LBA Format #05: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI Extended LBA Format #06: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI Extended LBA Format #07: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI =~ LBA Format #04: Data Size: *([0-9]+) ]] 00:08:23.014 11:38:52 spdk_dd.spdk_dd_basic_rw -- dd/common.sh@132 -- # lbaf=4096 00:08:23.014 11:38:52 spdk_dd.spdk_dd_basic_rw -- dd/common.sh@134 -- # echo 4096 00:08:23.014 11:38:52 spdk_dd.spdk_dd_basic_rw -- dd/basic_rw.sh@93 -- # native_bs=4096 00:08:23.014 11:38:52 spdk_dd.spdk_dd_basic_rw -- dd/basic_rw.sh@96 -- # : 00:08:23.014 11:38:52 spdk_dd.spdk_dd_basic_rw -- dd/basic_rw.sh@96 -- # run_test dd_bs_lt_native_bs NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/dev/fd/62 --ob=Nvme0n1 --bs=2048 --json /dev/fd/61 00:08:23.014 11:38:52 spdk_dd.spdk_dd_basic_rw -- common/autotest_common.sh@1105 -- # '[' 8 -le 1 ']' 00:08:23.014 11:38:52 spdk_dd.spdk_dd_basic_rw -- common/autotest_common.sh@1111 -- # xtrace_disable 00:08:23.014 11:38:52 spdk_dd.spdk_dd_basic_rw -- dd/basic_rw.sh@96 -- # gen_conf 00:08:23.014 11:38:52 spdk_dd.spdk_dd_basic_rw -- common/autotest_common.sh@10 -- # set +x 00:08:23.014 11:38:52 spdk_dd.spdk_dd_basic_rw -- dd/common.sh@31 -- # xtrace_disable 00:08:23.014 11:38:52 spdk_dd.spdk_dd_basic_rw -- common/autotest_common.sh@10 -- # set +x 00:08:23.014 ************************************ 00:08:23.014 START TEST dd_bs_lt_native_bs 00:08:23.014 ************************************ 00:08:23.014 11:38:52 spdk_dd.spdk_dd_basic_rw.dd_bs_lt_native_bs -- common/autotest_common.sh@1129 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/dev/fd/62 --ob=Nvme0n1 --bs=2048 --json /dev/fd/61 00:08:23.014 11:38:52 spdk_dd.spdk_dd_basic_rw.dd_bs_lt_native_bs -- common/autotest_common.sh@652 -- # local es=0 00:08:23.014 11:38:52 spdk_dd.spdk_dd_basic_rw.dd_bs_lt_native_bs -- common/autotest_common.sh@654 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/dev/fd/62 --ob=Nvme0n1 --bs=2048 --json /dev/fd/61 00:08:23.014 11:38:52 spdk_dd.spdk_dd_basic_rw.dd_bs_lt_native_bs -- common/autotest_common.sh@640 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:08:23.014 11:38:52 spdk_dd.spdk_dd_basic_rw.dd_bs_lt_native_bs -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:08:23.014 11:38:52 spdk_dd.spdk_dd_basic_rw.dd_bs_lt_native_bs -- common/autotest_common.sh@644 -- # type -t 
/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:08:23.014 11:38:52 spdk_dd.spdk_dd_basic_rw.dd_bs_lt_native_bs -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:08:23.014 11:38:52 spdk_dd.spdk_dd_basic_rw.dd_bs_lt_native_bs -- common/autotest_common.sh@646 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:08:23.014 11:38:52 spdk_dd.spdk_dd_basic_rw.dd_bs_lt_native_bs -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:08:23.014 11:38:52 spdk_dd.spdk_dd_basic_rw.dd_bs_lt_native_bs -- common/autotest_common.sh@646 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:08:23.014 11:38:52 spdk_dd.spdk_dd_basic_rw.dd_bs_lt_native_bs -- common/autotest_common.sh@646 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:08:23.014 11:38:52 spdk_dd.spdk_dd_basic_rw.dd_bs_lt_native_bs -- common/autotest_common.sh@655 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/dev/fd/62 --ob=Nvme0n1 --bs=2048 --json /dev/fd/61 00:08:23.014 { 00:08:23.014 "subsystems": [ 00:08:23.014 { 00:08:23.014 "subsystem": "bdev", 00:08:23.014 "config": [ 00:08:23.014 { 00:08:23.014 "params": { 00:08:23.014 "trtype": "pcie", 00:08:23.014 "traddr": "0000:00:10.0", 00:08:23.014 "name": "Nvme0" 00:08:23.014 }, 00:08:23.014 "method": "bdev_nvme_attach_controller" 00:08:23.014 }, 00:08:23.014 { 00:08:23.014 "method": "bdev_wait_for_examine" 00:08:23.014 } 00:08:23.014 ] 00:08:23.014 } 00:08:23.014 ] 00:08:23.014 } 00:08:23.014 [2024-11-28 11:38:53.036763] Starting SPDK v25.01-pre git sha1 35cd3e84d / DPDK 24.11.0-rc4 initialization... 00:08:23.014 [2024-11-28 11:38:53.036864] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid73746 ] 00:08:23.272 [2024-11-28 11:38:53.163407] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.11.0-rc4 is used. There is no support for it in SPDK. Enabled only for validation. 
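A reconstruction of the native-block-size probe traced a few records above (dd/common.sh get_native_nvme_bs, lines 124-134): it captures spdk_nvme_identify output, finds the current LBA format index, then reads that format's data size. The command and regex patterns mirror the trace; the function wrapper itself is an editorial sketch, not the original source.

    get_native_nvme_bs() {
        local pci=$1 lbaf id
        mapfile -t id < <(/home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r "trtype:pcie traddr:$pci")
        local cur='Current LBA Format: *LBA Format #([0-9]+)'
        [[ ${id[*]} =~ $cur ]] && lbaf=${BASH_REMATCH[1]}
        local ds="LBA Format #${lbaf}: Data Size: *([0-9]+)"
        [[ ${id[*]} =~ $ds ]] && echo "${BASH_REMATCH[1]}"
    }
    # In this run the matches resolve to lbaf=04 and a data size of 4096,
    # which basic_rw.sh stores as native_bs=4096.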
00:08:23.272 [2024-11-28 11:38:53.195375] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:23.272 [2024-11-28 11:38:53.239340] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:08:23.272 [2024-11-28 11:38:53.296925] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:08:23.530 [2024-11-28 11:38:53.406802] spdk_dd.c:1161:dd_run: *ERROR*: --bs value cannot be less than input (1) neither output (4096) native block size 00:08:23.530 [2024-11-28 11:38:53.406872] app.c:1064:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:08:23.530 [2024-11-28 11:38:53.535177] spdk_dd.c:1536:main: *ERROR*: Error occurred while performing copy 00:08:23.530 11:38:53 spdk_dd.spdk_dd_basic_rw.dd_bs_lt_native_bs -- common/autotest_common.sh@655 -- # es=234 00:08:23.530 11:38:53 spdk_dd.spdk_dd_basic_rw.dd_bs_lt_native_bs -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:08:23.530 11:38:53 spdk_dd.spdk_dd_basic_rw.dd_bs_lt_native_bs -- common/autotest_common.sh@664 -- # es=106 00:08:23.530 11:38:53 spdk_dd.spdk_dd_basic_rw.dd_bs_lt_native_bs -- common/autotest_common.sh@665 -- # case "$es" in 00:08:23.530 11:38:53 spdk_dd.spdk_dd_basic_rw.dd_bs_lt_native_bs -- common/autotest_common.sh@672 -- # es=1 00:08:23.530 11:38:53 spdk_dd.spdk_dd_basic_rw.dd_bs_lt_native_bs -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:08:23.530 ************************************ 00:08:23.530 END TEST dd_bs_lt_native_bs 00:08:23.530 ************************************ 00:08:23.530 00:08:23.530 real 0m0.616s 00:08:23.530 user 0m0.415s 00:08:23.530 sys 0m0.157s 00:08:23.530 11:38:53 spdk_dd.spdk_dd_basic_rw.dd_bs_lt_native_bs -- common/autotest_common.sh@1130 -- # xtrace_disable 00:08:23.530 11:38:53 spdk_dd.spdk_dd_basic_rw.dd_bs_lt_native_bs -- common/autotest_common.sh@10 -- # set +x 00:08:23.530 11:38:53 spdk_dd.spdk_dd_basic_rw -- dd/basic_rw.sh@103 -- # run_test dd_rw basic_rw 4096 00:08:23.530 11:38:53 spdk_dd.spdk_dd_basic_rw -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:08:23.530 11:38:53 spdk_dd.spdk_dd_basic_rw -- common/autotest_common.sh@1111 -- # xtrace_disable 00:08:23.531 11:38:53 spdk_dd.spdk_dd_basic_rw -- common/autotest_common.sh@10 -- # set +x 00:08:23.531 ************************************ 00:08:23.531 START TEST dd_rw 00:08:23.531 ************************************ 00:08:23.531 11:38:53 spdk_dd.spdk_dd_basic_rw.dd_rw -- common/autotest_common.sh@1129 -- # basic_rw 4096 00:08:23.531 11:38:53 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@11 -- # local native_bs=4096 00:08:23.531 11:38:53 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@12 -- # local count size 00:08:23.531 11:38:53 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@13 -- # local qds bss 00:08:23.531 11:38:53 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@15 -- # qds=(1 64) 00:08:23.531 11:38:53 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@17 -- # for bs in {0..2} 00:08:23.531 11:38:53 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@18 -- # bss+=($((native_bs << bs))) 00:08:23.531 11:38:53 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@17 -- # for bs in {0..2} 00:08:23.531 11:38:53 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@18 -- # bss+=($((native_bs << bs))) 00:08:23.531 11:38:53 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@17 -- # for bs in {0..2} 00:08:23.531 11:38:53 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@18 -- # bss+=($((native_bs << bs))) 00:08:23.531 11:38:53 spdk_dd.spdk_dd_basic_rw.dd_rw -- 
dd/basic_rw.sh@21 -- # for bs in "${bss[@]}" 00:08:23.531 11:38:53 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@22 -- # for qd in "${qds[@]}" 00:08:23.531 11:38:53 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@23 -- # count=15 00:08:23.531 11:38:53 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@24 -- # count=15 00:08:23.531 11:38:53 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@25 -- # size=61440 00:08:23.531 11:38:53 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@27 -- # gen_bytes 61440 00:08:23.531 11:38:53 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@98 -- # xtrace_disable 00:08:23.531 11:38:53 spdk_dd.spdk_dd_basic_rw.dd_rw -- common/autotest_common.sh@10 -- # set +x 00:08:24.468 11:38:54 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@30 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --ob=Nvme0n1 --bs=4096 --qd=1 --json /dev/fd/62 00:08:24.468 11:38:54 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@30 -- # gen_conf 00:08:24.468 11:38:54 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@31 -- # xtrace_disable 00:08:24.468 11:38:54 spdk_dd.spdk_dd_basic_rw.dd_rw -- common/autotest_common.sh@10 -- # set +x 00:08:24.468 { 00:08:24.468 "subsystems": [ 00:08:24.468 { 00:08:24.468 "subsystem": "bdev", 00:08:24.468 "config": [ 00:08:24.468 { 00:08:24.468 "params": { 00:08:24.468 "trtype": "pcie", 00:08:24.468 "traddr": "0000:00:10.0", 00:08:24.468 "name": "Nvme0" 00:08:24.468 }, 00:08:24.468 "method": "bdev_nvme_attach_controller" 00:08:24.468 }, 00:08:24.468 { 00:08:24.468 "method": "bdev_wait_for_examine" 00:08:24.468 } 00:08:24.468 ] 00:08:24.468 } 00:08:24.468 ] 00:08:24.468 } 00:08:24.468 [2024-11-28 11:38:54.292924] Starting SPDK v25.01-pre git sha1 35cd3e84d / DPDK 24.11.0-rc4 initialization... 00:08:24.468 [2024-11-28 11:38:54.293498] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid73782 ] 00:08:24.468 [2024-11-28 11:38:54.420819] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.11.0-rc4 is used. There is no support for it in SPDK. Enabled only for validation. 
00:08:24.468 [2024-11-28 11:38:54.449417] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:24.468 [2024-11-28 11:38:54.496158] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:08:24.468 [2024-11-28 11:38:54.551625] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:08:24.728  [2024-11-28T11:38:54.854Z] Copying: 60/60 [kB] (average 19 MBps) 00:08:24.728 00:08:24.986 11:38:54 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@37 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=Nvme0n1 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --bs=4096 --qd=1 --count=15 --json /dev/fd/62 00:08:24.986 11:38:54 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@37 -- # gen_conf 00:08:24.986 11:38:54 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@31 -- # xtrace_disable 00:08:24.986 11:38:54 spdk_dd.spdk_dd_basic_rw.dd_rw -- common/autotest_common.sh@10 -- # set +x 00:08:24.986 { 00:08:24.986 "subsystems": [ 00:08:24.986 { 00:08:24.986 "subsystem": "bdev", 00:08:24.986 "config": [ 00:08:24.986 { 00:08:24.986 "params": { 00:08:24.986 "trtype": "pcie", 00:08:24.986 "traddr": "0000:00:10.0", 00:08:24.986 "name": "Nvme0" 00:08:24.986 }, 00:08:24.986 "method": "bdev_nvme_attach_controller" 00:08:24.986 }, 00:08:24.986 { 00:08:24.986 "method": "bdev_wait_for_examine" 00:08:24.986 } 00:08:24.986 ] 00:08:24.986 } 00:08:24.986 ] 00:08:24.986 } 00:08:24.986 [2024-11-28 11:38:54.915930] Starting SPDK v25.01-pre git sha1 35cd3e84d / DPDK 24.11.0-rc4 initialization... 00:08:24.986 [2024-11-28 11:38:54.916983] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid73796 ] 00:08:24.986 [2024-11-28 11:38:55.043763] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.11.0-rc4 is used. There is no support for it in SPDK. Enabled only for validation. 
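A sketch of the dd_rw sweep whose first pass is traced above (basic_rw.sh): block sizes are the native size shifted left 0..2 times, queue depths are 1 and 64, and each pass writes a generated file to the bdev, reads it back, and diffs the two dumps. The loop body is reconstructed from the trace; how count is derived per block size is not shown (it is 15 at bs=4096 here and 7 at bs=8192 later in the log), and the gen_bytes redirection is assumed.

    native_bs=4096
    test_file0=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0
    test_file1=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1
    qds=(1 64)
    bss=()
    for bs in {0..2}; do
        bss+=($((native_bs << bs)))      # 4096 8192 16384
    done
    for bs in "${bss[@]}"; do
        for qd in "${qds[@]}"; do
            count=15                     # trace value for bs=4096; later passes use 7
            size=$((count * bs))         # 15 * 4096 = 61440 bytes on the first pass
            gen_bytes "$size" > "$test_file0"   # assumed redirection
            /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd \
                --if="$test_file0" --ob=Nvme0n1 --bs="$bs" --qd="$qd" --json <(gen_conf)
            /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd \
                --ib=Nvme0n1 --of="$test_file1" --bs="$bs" --qd="$qd" --count="$count" --json <(gen_conf)
            diff -q "$test_file0" "$test_file1"
            clear_nvme Nvme0n1 '' "$size"
        done
    done
    # Process substitution here stands in for the /dev/fd/62 plumbing seen in the trace.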
00:08:24.986 [2024-11-28 11:38:55.069879] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:24.986 [2024-11-28 11:38:55.109239] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:08:25.243 [2024-11-28 11:38:55.170236] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:08:25.243  [2024-11-28T11:38:55.627Z] Copying: 60/60 [kB] (average 14 MBps) 00:08:25.501 00:08:25.501 11:38:55 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@44 -- # diff -q /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:08:25.501 11:38:55 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@45 -- # clear_nvme Nvme0n1 '' 61440 00:08:25.501 11:38:55 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@10 -- # local bdev=Nvme0n1 00:08:25.501 11:38:55 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@11 -- # local nvme_ref= 00:08:25.501 11:38:55 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@12 -- # local size=61440 00:08:25.501 11:38:55 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@14 -- # local bs=1048576 00:08:25.501 11:38:55 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@15 -- # local count=1 00:08:25.501 11:38:55 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@18 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/dev/zero --bs=1048576 --ob=Nvme0n1 --count=1 --json /dev/fd/62 00:08:25.501 11:38:55 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@18 -- # gen_conf 00:08:25.501 11:38:55 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@31 -- # xtrace_disable 00:08:25.501 11:38:55 spdk_dd.spdk_dd_basic_rw.dd_rw -- common/autotest_common.sh@10 -- # set +x 00:08:25.501 [2024-11-28 11:38:55.529809] Starting SPDK v25.01-pre git sha1 35cd3e84d / DPDK 24.11.0-rc4 initialization... 00:08:25.501 [2024-11-28 11:38:55.529912] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid73817 ] 00:08:25.501 { 00:08:25.501 "subsystems": [ 00:08:25.501 { 00:08:25.501 "subsystem": "bdev", 00:08:25.501 "config": [ 00:08:25.501 { 00:08:25.501 "params": { 00:08:25.501 "trtype": "pcie", 00:08:25.501 "traddr": "0000:00:10.0", 00:08:25.501 "name": "Nvme0" 00:08:25.501 }, 00:08:25.501 "method": "bdev_nvme_attach_controller" 00:08:25.501 }, 00:08:25.501 { 00:08:25.501 "method": "bdev_wait_for_examine" 00:08:25.501 } 00:08:25.501 ] 00:08:25.501 } 00:08:25.501 ] 00:08:25.501 } 00:08:25.759 [2024-11-28 11:38:55.651037] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.11.0-rc4 is used. There is no support for it in SPDK. Enabled only for validation. 
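A sketch of the cleanup helper traced just above (dd/common.sh clear_nvme): between passes the target bdev is overwritten from /dev/zero in 1 MiB blocks so the next pass starts from known content. The count derivation is an assumption; the trace only shows bs=1048576 and count=1 for the 61440-byte region.

    clear_nvme() {
        local bdev=$1 nvme_ref=$2 size=$3
        local bs=1048576
        local count=$(( (size + bs - 1) / bs ))   # assumed; 61440 bytes -> 1 block
        /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd \
            --if=/dev/zero --bs="$bs" --ob="$bdev" --count="$count" --json <(gen_conf)
    }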
00:08:25.759 [2024-11-28 11:38:55.676594] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:25.759 [2024-11-28 11:38:55.717561] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:08:25.759 [2024-11-28 11:38:55.772602] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:08:26.017  [2024-11-28T11:38:56.143Z] Copying: 1024/1024 [kB] (average 1000 MBps) 00:08:26.017 00:08:26.017 11:38:56 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@22 -- # for qd in "${qds[@]}" 00:08:26.017 11:38:56 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@23 -- # count=15 00:08:26.017 11:38:56 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@24 -- # count=15 00:08:26.017 11:38:56 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@25 -- # size=61440 00:08:26.017 11:38:56 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@27 -- # gen_bytes 61440 00:08:26.017 11:38:56 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@98 -- # xtrace_disable 00:08:26.017 11:38:56 spdk_dd.spdk_dd_basic_rw.dd_rw -- common/autotest_common.sh@10 -- # set +x 00:08:26.583 11:38:56 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@30 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --ob=Nvme0n1 --bs=4096 --qd=64 --json /dev/fd/62 00:08:26.583 11:38:56 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@30 -- # gen_conf 00:08:26.583 11:38:56 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@31 -- # xtrace_disable 00:08:26.583 11:38:56 spdk_dd.spdk_dd_basic_rw.dd_rw -- common/autotest_common.sh@10 -- # set +x 00:08:26.841 { 00:08:26.841 "subsystems": [ 00:08:26.841 { 00:08:26.841 "subsystem": "bdev", 00:08:26.841 "config": [ 00:08:26.841 { 00:08:26.841 "params": { 00:08:26.841 "trtype": "pcie", 00:08:26.841 "traddr": "0000:00:10.0", 00:08:26.841 "name": "Nvme0" 00:08:26.841 }, 00:08:26.841 "method": "bdev_nvme_attach_controller" 00:08:26.841 }, 00:08:26.841 { 00:08:26.841 "method": "bdev_wait_for_examine" 00:08:26.841 } 00:08:26.841 ] 00:08:26.841 } 00:08:26.841 ] 00:08:26.841 } 00:08:26.841 [2024-11-28 11:38:56.727374] Starting SPDK v25.01-pre git sha1 35cd3e84d / DPDK 24.11.0-rc4 initialization... 00:08:26.841 [2024-11-28 11:38:56.727475] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid73836 ] 00:08:26.841 [2024-11-28 11:38:56.852381] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.11.0-rc4 is used. There is no support for it in SPDK. Enabled only for validation. 
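The compact JSON that gen_conf writes to the spare descriptor (--json /dev/fd/62 in the spdk_dd calls above) is interleaved with timestamps in the trace; a minimal stand-in that emits the same document for the Nvme0 controller, presumably what dd/common.sh assembles from the method_bdev_nvme_attach_controller_0 array declared earlier:

    gen_conf() {
        cat <<'JSON'
    {
      "subsystems": [
        {
          "subsystem": "bdev",
          "config": [
            {
              "params": { "trtype": "pcie", "traddr": "0000:00:10.0", "name": "Nvme0" },
              "method": "bdev_nvme_attach_controller"
            },
            { "method": "bdev_wait_for_examine" }
          ]
        }
      ]
    }
    JSON
    }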
00:08:26.841 [2024-11-28 11:38:56.881091] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:26.841 [2024-11-28 11:38:56.935414] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:08:27.099 [2024-11-28 11:38:56.998283] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:08:27.099  [2024-11-28T11:38:57.483Z] Copying: 60/60 [kB] (average 58 MBps) 00:08:27.357 00:08:27.357 11:38:57 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@37 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=Nvme0n1 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --bs=4096 --qd=64 --count=15 --json /dev/fd/62 00:08:27.357 11:38:57 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@37 -- # gen_conf 00:08:27.357 11:38:57 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@31 -- # xtrace_disable 00:08:27.357 11:38:57 spdk_dd.spdk_dd_basic_rw.dd_rw -- common/autotest_common.sh@10 -- # set +x 00:08:27.357 { 00:08:27.357 "subsystems": [ 00:08:27.357 { 00:08:27.357 "subsystem": "bdev", 00:08:27.357 "config": [ 00:08:27.357 { 00:08:27.357 "params": { 00:08:27.357 "trtype": "pcie", 00:08:27.357 "traddr": "0000:00:10.0", 00:08:27.357 "name": "Nvme0" 00:08:27.357 }, 00:08:27.357 "method": "bdev_nvme_attach_controller" 00:08:27.357 }, 00:08:27.357 { 00:08:27.357 "method": "bdev_wait_for_examine" 00:08:27.357 } 00:08:27.357 ] 00:08:27.357 } 00:08:27.357 ] 00:08:27.357 } 00:08:27.357 [2024-11-28 11:38:57.352345] Starting SPDK v25.01-pre git sha1 35cd3e84d / DPDK 24.11.0-rc4 initialization... 00:08:27.357 [2024-11-28 11:38:57.352439] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid73849 ] 00:08:27.357 [2024-11-28 11:38:57.478406] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.11.0-rc4 is used. There is no support for it in SPDK. Enabled only for validation. 
00:08:27.616 [2024-11-28 11:38:57.505403] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:27.616 [2024-11-28 11:38:57.557294] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:08:27.616 [2024-11-28 11:38:57.610986] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:08:27.616  [2024-11-28T11:38:58.001Z] Copying: 60/60 [kB] (average 58 MBps) 00:08:27.875 00:08:27.875 11:38:57 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@44 -- # diff -q /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:08:27.875 11:38:57 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@45 -- # clear_nvme Nvme0n1 '' 61440 00:08:27.875 11:38:57 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@10 -- # local bdev=Nvme0n1 00:08:27.875 11:38:57 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@11 -- # local nvme_ref= 00:08:27.875 11:38:57 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@12 -- # local size=61440 00:08:27.875 11:38:57 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@14 -- # local bs=1048576 00:08:27.875 11:38:57 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@15 -- # local count=1 00:08:27.875 11:38:57 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@18 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/dev/zero --bs=1048576 --ob=Nvme0n1 --count=1 --json /dev/fd/62 00:08:27.875 11:38:57 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@18 -- # gen_conf 00:08:27.875 11:38:57 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@31 -- # xtrace_disable 00:08:27.875 11:38:57 spdk_dd.spdk_dd_basic_rw.dd_rw -- common/autotest_common.sh@10 -- # set +x 00:08:27.875 { 00:08:27.875 "subsystems": [ 00:08:27.875 { 00:08:27.875 "subsystem": "bdev", 00:08:27.875 "config": [ 00:08:27.875 { 00:08:27.875 "params": { 00:08:27.875 "trtype": "pcie", 00:08:27.875 "traddr": "0000:00:10.0", 00:08:27.875 "name": "Nvme0" 00:08:27.875 }, 00:08:27.875 "method": "bdev_nvme_attach_controller" 00:08:27.875 }, 00:08:27.875 { 00:08:27.875 "method": "bdev_wait_for_examine" 00:08:27.875 } 00:08:27.875 ] 00:08:27.875 } 00:08:27.875 ] 00:08:27.875 } 00:08:27.875 [2024-11-28 11:38:57.975884] Starting SPDK v25.01-pre git sha1 35cd3e84d / DPDK 24.11.0-rc4 initialization... 00:08:27.875 [2024-11-28 11:38:57.976001] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid73865 ] 00:08:28.135 [2024-11-28 11:38:58.100978] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.11.0-rc4 is used. There is no support for it in SPDK. Enabled only for validation. 
00:08:28.135 [2024-11-28 11:38:58.127223] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:28.135 [2024-11-28 11:38:58.166358] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:08:28.135 [2024-11-28 11:38:58.224084] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:08:28.395  [2024-11-28T11:38:58.521Z] Copying: 1024/1024 [kB] (average 500 MBps) 00:08:28.395 00:08:28.655 11:38:58 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@21 -- # for bs in "${bss[@]}" 00:08:28.655 11:38:58 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@22 -- # for qd in "${qds[@]}" 00:08:28.655 11:38:58 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@23 -- # count=7 00:08:28.655 11:38:58 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@24 -- # count=7 00:08:28.655 11:38:58 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@25 -- # size=57344 00:08:28.655 11:38:58 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@27 -- # gen_bytes 57344 00:08:28.655 11:38:58 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@98 -- # xtrace_disable 00:08:28.655 11:38:58 spdk_dd.spdk_dd_basic_rw.dd_rw -- common/autotest_common.sh@10 -- # set +x 00:08:29.226 11:38:59 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@30 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --ob=Nvme0n1 --bs=8192 --qd=1 --json /dev/fd/62 00:08:29.226 11:38:59 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@30 -- # gen_conf 00:08:29.226 11:38:59 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@31 -- # xtrace_disable 00:08:29.226 11:38:59 spdk_dd.spdk_dd_basic_rw.dd_rw -- common/autotest_common.sh@10 -- # set +x 00:08:29.226 [2024-11-28 11:38:59.139798] Starting SPDK v25.01-pre git sha1 35cd3e84d / DPDK 24.11.0-rc4 initialization... 00:08:29.226 [2024-11-28 11:38:59.139921] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid73884 ] 00:08:29.226 { 00:08:29.226 "subsystems": [ 00:08:29.226 { 00:08:29.226 "subsystem": "bdev", 00:08:29.226 "config": [ 00:08:29.226 { 00:08:29.226 "params": { 00:08:29.226 "trtype": "pcie", 00:08:29.226 "traddr": "0000:00:10.0", 00:08:29.226 "name": "Nvme0" 00:08:29.226 }, 00:08:29.226 "method": "bdev_nvme_attach_controller" 00:08:29.226 }, 00:08:29.226 { 00:08:29.226 "method": "bdev_wait_for_examine" 00:08:29.226 } 00:08:29.226 ] 00:08:29.226 } 00:08:29.226 ] 00:08:29.226 } 00:08:29.226 [2024-11-28 11:38:59.266491] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.11.0-rc4 is used. There is no support for it in SPDK. Enabled only for validation. 
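A quick check of the sweep arithmetic at this point in the trace: the block size has advanced to 8192 and count drops to 7, so this pass moves 7 * 8192 = 57344 bytes (the gen_bytes 57344 call above), compared with 15 * 4096 = 61440 bytes for the earlier bs=4096 passes.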
00:08:29.226 [2024-11-28 11:38:59.292119] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:29.226 [2024-11-28 11:38:59.342949] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:08:29.485 [2024-11-28 11:38:59.397747] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:08:29.485  [2024-11-28T11:38:59.870Z] Copying: 56/56 [kB] (average 27 MBps) 00:08:29.744 00:08:29.744 11:38:59 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@37 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=Nvme0n1 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --bs=8192 --qd=1 --count=7 --json /dev/fd/62 00:08:29.744 11:38:59 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@37 -- # gen_conf 00:08:29.744 11:38:59 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@31 -- # xtrace_disable 00:08:29.744 11:38:59 spdk_dd.spdk_dd_basic_rw.dd_rw -- common/autotest_common.sh@10 -- # set +x 00:08:29.744 [2024-11-28 11:38:59.755601] Starting SPDK v25.01-pre git sha1 35cd3e84d / DPDK 24.11.0-rc4 initialization... 00:08:29.744 [2024-11-28 11:38:59.755712] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid73903 ] 00:08:29.744 { 00:08:29.744 "subsystems": [ 00:08:29.744 { 00:08:29.744 "subsystem": "bdev", 00:08:29.744 "config": [ 00:08:29.744 { 00:08:29.744 "params": { 00:08:29.744 "trtype": "pcie", 00:08:29.744 "traddr": "0000:00:10.0", 00:08:29.744 "name": "Nvme0" 00:08:29.744 }, 00:08:29.744 "method": "bdev_nvme_attach_controller" 00:08:29.744 }, 00:08:29.744 { 00:08:29.745 "method": "bdev_wait_for_examine" 00:08:29.745 } 00:08:29.745 ] 00:08:29.745 } 00:08:29.745 ] 00:08:29.745 } 00:08:30.003 [2024-11-28 11:38:59.882388] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.11.0-rc4 is used. There is no support for it in SPDK. Enabled only for validation. 
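Every spdk_dd invocation in this run is handed its bdev configuration as JSON on an inherited file descriptor (--json /dev/fd/62, produced by the test's gen_conf helper); the block printed between the braces above is that configuration, which attaches the NVMe controller at PCIe address 0000:00:10.0 as bdev "Nvme0" and then waits for bdev examine to complete. A rough standalone equivalent is sketched below, using a temporary file in place of gen_conf; the variable names are illustrative and not part of the test itself.

# Sketch: recreate the bdev configuration shown in the log as a standalone JSON
# file and hand it to spdk_dd, instead of the test's gen_conf /dev/fd/62 plumbing.
SPDK_DD=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd   # binary path as printed in the trace
CONF=$(mktemp)
cat > "$CONF" <<'EOF'
{
  "subsystems": [
    {
      "subsystem": "bdev",
      "config": [
        {
          "params": { "trtype": "pcie", "traddr": "0000:00:10.0", "name": "Nvme0" },
          "method": "bdev_nvme_attach_controller"
        },
        { "method": "bdev_wait_for_examine" }
      ]
    }
  ]
}
EOF
# Example use: the clear_nvme step seen throughout this run (zero the first 1 MiB of Nvme0n1).
"$SPDK_DD" --if=/dev/zero --ob=Nvme0n1 --bs=1048576 --count=1 --json "$CONF"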
00:08:30.003 [2024-11-28 11:38:59.915168] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:30.003 [2024-11-28 11:38:59.971514] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:08:30.003 [2024-11-28 11:39:00.031909] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:08:30.263  [2024-11-28T11:39:00.389Z] Copying: 56/56 [kB] (average 54 MBps) 00:08:30.263 00:08:30.263 11:39:00 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@44 -- # diff -q /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:08:30.263 11:39:00 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@45 -- # clear_nvme Nvme0n1 '' 57344 00:08:30.263 11:39:00 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@10 -- # local bdev=Nvme0n1 00:08:30.263 11:39:00 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@11 -- # local nvme_ref= 00:08:30.263 11:39:00 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@12 -- # local size=57344 00:08:30.263 11:39:00 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@14 -- # local bs=1048576 00:08:30.263 11:39:00 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@15 -- # local count=1 00:08:30.263 11:39:00 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@18 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/dev/zero --bs=1048576 --ob=Nvme0n1 --count=1 --json /dev/fd/62 00:08:30.263 11:39:00 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@18 -- # gen_conf 00:08:30.263 11:39:00 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@31 -- # xtrace_disable 00:08:30.263 11:39:00 spdk_dd.spdk_dd_basic_rw.dd_rw -- common/autotest_common.sh@10 -- # set +x 00:08:30.522 [2024-11-28 11:39:00.404957] Starting SPDK v25.01-pre git sha1 35cd3e84d / DPDK 24.11.0-rc4 initialization... 00:08:30.522 [2024-11-28 11:39:00.405087] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid73917 ] 00:08:30.522 { 00:08:30.522 "subsystems": [ 00:08:30.522 { 00:08:30.522 "subsystem": "bdev", 00:08:30.522 "config": [ 00:08:30.522 { 00:08:30.522 "params": { 00:08:30.522 "trtype": "pcie", 00:08:30.522 "traddr": "0000:00:10.0", 00:08:30.522 "name": "Nvme0" 00:08:30.522 }, 00:08:30.522 "method": "bdev_nvme_attach_controller" 00:08:30.522 }, 00:08:30.522 { 00:08:30.522 "method": "bdev_wait_for_examine" 00:08:30.522 } 00:08:30.522 ] 00:08:30.522 } 00:08:30.522 ] 00:08:30.522 } 00:08:30.522 [2024-11-28 11:39:00.531375] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.11.0-rc4 is used. There is no support for it in SPDK. Enabled only for validation. 
00:08:30.522 [2024-11-28 11:39:00.556822] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:30.522 [2024-11-28 11:39:00.610585] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:08:30.781 [2024-11-28 11:39:00.669690] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:08:30.781  [2024-11-28T11:39:01.178Z] Copying: 1024/1024 [kB] (average 1000 MBps) 00:08:31.052 00:08:31.052 11:39:00 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@22 -- # for qd in "${qds[@]}" 00:08:31.052 11:39:00 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@23 -- # count=7 00:08:31.053 11:39:00 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@24 -- # count=7 00:08:31.053 11:39:00 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@25 -- # size=57344 00:08:31.053 11:39:00 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@27 -- # gen_bytes 57344 00:08:31.053 11:39:00 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@98 -- # xtrace_disable 00:08:31.053 11:39:00 spdk_dd.spdk_dd_basic_rw.dd_rw -- common/autotest_common.sh@10 -- # set +x 00:08:31.730 11:39:01 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@30 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --ob=Nvme0n1 --bs=8192 --qd=64 --json /dev/fd/62 00:08:31.730 11:39:01 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@30 -- # gen_conf 00:08:31.730 11:39:01 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@31 -- # xtrace_disable 00:08:31.730 11:39:01 spdk_dd.spdk_dd_basic_rw.dd_rw -- common/autotest_common.sh@10 -- # set +x 00:08:31.730 [2024-11-28 11:39:01.606067] Starting SPDK v25.01-pre git sha1 35cd3e84d / DPDK 24.11.0-rc4 initialization... 00:08:31.730 [2024-11-28 11:39:01.606194] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid73938 ] 00:08:31.730 { 00:08:31.730 "subsystems": [ 00:08:31.730 { 00:08:31.730 "subsystem": "bdev", 00:08:31.730 "config": [ 00:08:31.730 { 00:08:31.730 "params": { 00:08:31.730 "trtype": "pcie", 00:08:31.730 "traddr": "0000:00:10.0", 00:08:31.730 "name": "Nvme0" 00:08:31.730 }, 00:08:31.730 "method": "bdev_nvme_attach_controller" 00:08:31.730 }, 00:08:31.730 { 00:08:31.730 "method": "bdev_wait_for_examine" 00:08:31.730 } 00:08:31.730 ] 00:08:31.730 } 00:08:31.730 ] 00:08:31.730 } 00:08:31.730 [2024-11-28 11:39:01.735603] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.11.0-rc4 is used. There is no support for it in SPDK. Enabled only for validation. 
00:08:31.730 [2024-11-28 11:39:01.761628] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:31.730 [2024-11-28 11:39:01.814128] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:08:31.989 [2024-11-28 11:39:01.871273] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:08:31.989  [2024-11-28T11:39:02.374Z] Copying: 56/56 [kB] (average 54 MBps) 00:08:32.248 00:08:32.248 11:39:02 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@37 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=Nvme0n1 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --bs=8192 --qd=64 --count=7 --json /dev/fd/62 00:08:32.248 11:39:02 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@37 -- # gen_conf 00:08:32.248 11:39:02 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@31 -- # xtrace_disable 00:08:32.248 11:39:02 spdk_dd.spdk_dd_basic_rw.dd_rw -- common/autotest_common.sh@10 -- # set +x 00:08:32.248 { 00:08:32.248 "subsystems": [ 00:08:32.248 { 00:08:32.248 "subsystem": "bdev", 00:08:32.248 "config": [ 00:08:32.248 { 00:08:32.248 "params": { 00:08:32.248 "trtype": "pcie", 00:08:32.248 "traddr": "0000:00:10.0", 00:08:32.248 "name": "Nvme0" 00:08:32.248 }, 00:08:32.248 "method": "bdev_nvme_attach_controller" 00:08:32.248 }, 00:08:32.248 { 00:08:32.248 "method": "bdev_wait_for_examine" 00:08:32.248 } 00:08:32.248 ] 00:08:32.248 } 00:08:32.248 ] 00:08:32.248 } 00:08:32.248 [2024-11-28 11:39:02.247281] Starting SPDK v25.01-pre git sha1 35cd3e84d / DPDK 24.11.0-rc4 initialization... 00:08:32.248 [2024-11-28 11:39:02.247554] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid73951 ] 00:08:32.508 [2024-11-28 11:39:02.375413] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.11.0-rc4 is used. There is no support for it in SPDK. Enabled only for validation. 
00:08:32.508 [2024-11-28 11:39:02.402537] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:32.508 [2024-11-28 11:39:02.456274] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:08:32.508 [2024-11-28 11:39:02.514750] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:08:32.508  [2024-11-28T11:39:02.894Z] Copying: 56/56 [kB] (average 54 MBps) 00:08:32.768 00:08:32.768 11:39:02 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@44 -- # diff -q /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:08:32.768 11:39:02 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@45 -- # clear_nvme Nvme0n1 '' 57344 00:08:32.768 11:39:02 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@10 -- # local bdev=Nvme0n1 00:08:32.768 11:39:02 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@11 -- # local nvme_ref= 00:08:32.768 11:39:02 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@12 -- # local size=57344 00:08:32.768 11:39:02 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@14 -- # local bs=1048576 00:08:32.768 11:39:02 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@15 -- # local count=1 00:08:32.768 11:39:02 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@18 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/dev/zero --bs=1048576 --ob=Nvme0n1 --count=1 --json /dev/fd/62 00:08:32.768 11:39:02 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@18 -- # gen_conf 00:08:32.768 11:39:02 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@31 -- # xtrace_disable 00:08:32.768 11:39:02 spdk_dd.spdk_dd_basic_rw.dd_rw -- common/autotest_common.sh@10 -- # set +x 00:08:32.768 [2024-11-28 11:39:02.874000] Starting SPDK v25.01-pre git sha1 35cd3e84d / DPDK 24.11.0-rc4 initialization... 00:08:32.768 [2024-11-28 11:39:02.874118] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid73972 ] 00:08:32.768 { 00:08:32.768 "subsystems": [ 00:08:32.768 { 00:08:32.768 "subsystem": "bdev", 00:08:32.768 "config": [ 00:08:32.768 { 00:08:32.768 "params": { 00:08:32.768 "trtype": "pcie", 00:08:32.768 "traddr": "0000:00:10.0", 00:08:32.768 "name": "Nvme0" 00:08:32.768 }, 00:08:32.768 "method": "bdev_nvme_attach_controller" 00:08:32.768 }, 00:08:32.768 { 00:08:32.768 "method": "bdev_wait_for_examine" 00:08:32.768 } 00:08:32.768 ] 00:08:32.768 } 00:08:32.768 ] 00:08:32.768 } 00:08:33.027 [2024-11-28 11:39:03.001344] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.11.0-rc4 is used. There is no support for it in SPDK. Enabled only for validation. 
00:08:33.027 [2024-11-28 11:39:03.029756] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:33.027 [2024-11-28 11:39:03.081433] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:08:33.027 [2024-11-28 11:39:03.139149] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:08:33.286  [2024-11-28T11:39:03.672Z] Copying: 1024/1024 [kB] (average 500 MBps) 00:08:33.546 00:08:33.546 11:39:03 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@21 -- # for bs in "${bss[@]}" 00:08:33.546 11:39:03 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@22 -- # for qd in "${qds[@]}" 00:08:33.546 11:39:03 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@23 -- # count=3 00:08:33.546 11:39:03 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@24 -- # count=3 00:08:33.546 11:39:03 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@25 -- # size=49152 00:08:33.546 11:39:03 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@27 -- # gen_bytes 49152 00:08:33.546 11:39:03 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@98 -- # xtrace_disable 00:08:33.546 11:39:03 spdk_dd.spdk_dd_basic_rw.dd_rw -- common/autotest_common.sh@10 -- # set +x 00:08:33.805 11:39:03 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@30 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --ob=Nvme0n1 --bs=16384 --qd=1 --json /dev/fd/62 00:08:33.805 11:39:03 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@30 -- # gen_conf 00:08:33.805 11:39:03 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@31 -- # xtrace_disable 00:08:33.805 11:39:03 spdk_dd.spdk_dd_basic_rw.dd_rw -- common/autotest_common.sh@10 -- # set +x 00:08:34.065 [2024-11-28 11:39:03.946018] Starting SPDK v25.01-pre git sha1 35cd3e84d / DPDK 24.11.0-rc4 initialization... 00:08:34.065 [2024-11-28 11:39:03.946388] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid73991 ] 00:08:34.065 { 00:08:34.065 "subsystems": [ 00:08:34.065 { 00:08:34.065 "subsystem": "bdev", 00:08:34.065 "config": [ 00:08:34.065 { 00:08:34.065 "params": { 00:08:34.065 "trtype": "pcie", 00:08:34.065 "traddr": "0000:00:10.0", 00:08:34.065 "name": "Nvme0" 00:08:34.065 }, 00:08:34.065 "method": "bdev_nvme_attach_controller" 00:08:34.065 }, 00:08:34.065 { 00:08:34.065 "method": "bdev_wait_for_examine" 00:08:34.065 } 00:08:34.065 ] 00:08:34.065 } 00:08:34.065 ] 00:08:34.065 } 00:08:34.065 [2024-11-28 11:39:04.072516] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.11.0-rc4 is used. There is no support for it in SPDK. Enabled only for validation. 
00:08:34.065 [2024-11-28 11:39:04.100605] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:34.065 [2024-11-28 11:39:04.146640] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:08:34.324 [2024-11-28 11:39:04.204895] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:08:34.324  [2024-11-28T11:39:04.710Z] Copying: 48/48 [kB] (average 46 MBps) 00:08:34.584 00:08:34.584 11:39:04 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@37 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=Nvme0n1 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --bs=16384 --qd=1 --count=3 --json /dev/fd/62 00:08:34.584 11:39:04 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@37 -- # gen_conf 00:08:34.584 11:39:04 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@31 -- # xtrace_disable 00:08:34.584 11:39:04 spdk_dd.spdk_dd_basic_rw.dd_rw -- common/autotest_common.sh@10 -- # set +x 00:08:34.584 { 00:08:34.584 "subsystems": [ 00:08:34.584 { 00:08:34.584 "subsystem": "bdev", 00:08:34.584 "config": [ 00:08:34.584 { 00:08:34.584 "params": { 00:08:34.584 "trtype": "pcie", 00:08:34.584 "traddr": "0000:00:10.0", 00:08:34.584 "name": "Nvme0" 00:08:34.584 }, 00:08:34.584 "method": "bdev_nvme_attach_controller" 00:08:34.584 }, 00:08:34.584 { 00:08:34.584 "method": "bdev_wait_for_examine" 00:08:34.584 } 00:08:34.584 ] 00:08:34.584 } 00:08:34.584 ] 00:08:34.584 } 00:08:34.584 [2024-11-28 11:39:04.564939] Starting SPDK v25.01-pre git sha1 35cd3e84d / DPDK 24.11.0-rc4 initialization... 00:08:34.584 [2024-11-28 11:39:04.565022] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid73999 ] 00:08:34.584 [2024-11-28 11:39:04.691185] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.11.0-rc4 is used. There is no support for it in SPDK. Enabled only for validation. 
00:08:34.843 [2024-11-28 11:39:04.718833] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:34.843 [2024-11-28 11:39:04.771532] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:08:34.843 [2024-11-28 11:39:04.829033] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:08:34.843  [2024-11-28T11:39:05.228Z] Copying: 48/48 [kB] (average 46 MBps) 00:08:35.102 00:08:35.102 11:39:05 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@44 -- # diff -q /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:08:35.102 11:39:05 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@45 -- # clear_nvme Nvme0n1 '' 49152 00:08:35.102 11:39:05 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@10 -- # local bdev=Nvme0n1 00:08:35.102 11:39:05 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@11 -- # local nvme_ref= 00:08:35.102 11:39:05 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@12 -- # local size=49152 00:08:35.102 11:39:05 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@14 -- # local bs=1048576 00:08:35.102 11:39:05 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@15 -- # local count=1 00:08:35.102 11:39:05 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@18 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/dev/zero --bs=1048576 --ob=Nvme0n1 --count=1 --json /dev/fd/62 00:08:35.102 11:39:05 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@18 -- # gen_conf 00:08:35.102 11:39:05 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@31 -- # xtrace_disable 00:08:35.102 11:39:05 spdk_dd.spdk_dd_basic_rw.dd_rw -- common/autotest_common.sh@10 -- # set +x 00:08:35.102 { 00:08:35.102 "subsystems": [ 00:08:35.102 { 00:08:35.102 "subsystem": "bdev", 00:08:35.102 "config": [ 00:08:35.102 { 00:08:35.102 "params": { 00:08:35.102 "trtype": "pcie", 00:08:35.102 "traddr": "0000:00:10.0", 00:08:35.102 "name": "Nvme0" 00:08:35.102 }, 00:08:35.102 "method": "bdev_nvme_attach_controller" 00:08:35.102 }, 00:08:35.102 { 00:08:35.102 "method": "bdev_wait_for_examine" 00:08:35.102 } 00:08:35.102 ] 00:08:35.102 } 00:08:35.102 ] 00:08:35.102 } 00:08:35.102 [2024-11-28 11:39:05.196992] Starting SPDK v25.01-pre git sha1 35cd3e84d / DPDK 24.11.0-rc4 initialization... 00:08:35.102 [2024-11-28 11:39:05.197112] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid74020 ] 00:08:35.361 [2024-11-28 11:39:05.334517] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.11.0-rc4 is used. There is no support for it in SPDK. Enabled only for validation. 
00:08:35.361 [2024-11-28 11:39:05.361358] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:35.362 [2024-11-28 11:39:05.405835] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:08:35.362 [2024-11-28 11:39:05.464451] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:08:35.621  [2024-11-28T11:39:06.007Z] Copying: 1024/1024 [kB] (average 1000 MBps) 00:08:35.881 00:08:35.881 11:39:05 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@22 -- # for qd in "${qds[@]}" 00:08:35.881 11:39:05 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@23 -- # count=3 00:08:35.881 11:39:05 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@24 -- # count=3 00:08:35.881 11:39:05 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@25 -- # size=49152 00:08:35.881 11:39:05 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@27 -- # gen_bytes 49152 00:08:35.881 11:39:05 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@98 -- # xtrace_disable 00:08:35.881 11:39:05 spdk_dd.spdk_dd_basic_rw.dd_rw -- common/autotest_common.sh@10 -- # set +x 00:08:36.141 11:39:06 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@30 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --ob=Nvme0n1 --bs=16384 --qd=64 --json /dev/fd/62 00:08:36.141 11:39:06 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@30 -- # gen_conf 00:08:36.141 11:39:06 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@31 -- # xtrace_disable 00:08:36.141 11:39:06 spdk_dd.spdk_dd_basic_rw.dd_rw -- common/autotest_common.sh@10 -- # set +x 00:08:36.141 [2024-11-28 11:39:06.263508] Starting SPDK v25.01-pre git sha1 35cd3e84d / DPDK 24.11.0-rc4 initialization... 00:08:36.141 [2024-11-28 11:39:06.263627] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid74039 ] 00:08:36.141 { 00:08:36.141 "subsystems": [ 00:08:36.141 { 00:08:36.141 "subsystem": "bdev", 00:08:36.141 "config": [ 00:08:36.141 { 00:08:36.141 "params": { 00:08:36.141 "trtype": "pcie", 00:08:36.141 "traddr": "0000:00:10.0", 00:08:36.141 "name": "Nvme0" 00:08:36.141 }, 00:08:36.141 "method": "bdev_nvme_attach_controller" 00:08:36.141 }, 00:08:36.141 { 00:08:36.141 "method": "bdev_wait_for_examine" 00:08:36.141 } 00:08:36.141 ] 00:08:36.141 } 00:08:36.141 ] 00:08:36.141 } 00:08:36.401 [2024-11-28 11:39:06.389554] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.11.0-rc4 is used. There is no support for it in SPDK. Enabled only for validation. 
00:08:36.401 [2024-11-28 11:39:06.418370] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:36.401 [2024-11-28 11:39:06.462756] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:08:36.401 [2024-11-28 11:39:06.519554] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:08:36.662  [2024-11-28T11:39:07.047Z] Copying: 48/48 [kB] (average 46 MBps) 00:08:36.921 00:08:36.921 11:39:06 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@37 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=Nvme0n1 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --bs=16384 --qd=64 --count=3 --json /dev/fd/62 00:08:36.921 11:39:06 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@37 -- # gen_conf 00:08:36.921 11:39:06 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@31 -- # xtrace_disable 00:08:36.921 11:39:06 spdk_dd.spdk_dd_basic_rw.dd_rw -- common/autotest_common.sh@10 -- # set +x 00:08:36.921 [2024-11-28 11:39:06.878545] Starting SPDK v25.01-pre git sha1 35cd3e84d / DPDK 24.11.0-rc4 initialization... 00:08:36.921 [2024-11-28 11:39:06.878656] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid74058 ] 00:08:36.921 { 00:08:36.921 "subsystems": [ 00:08:36.921 { 00:08:36.921 "subsystem": "bdev", 00:08:36.921 "config": [ 00:08:36.921 { 00:08:36.921 "params": { 00:08:36.921 "trtype": "pcie", 00:08:36.921 "traddr": "0000:00:10.0", 00:08:36.921 "name": "Nvme0" 00:08:36.921 }, 00:08:36.921 "method": "bdev_nvme_attach_controller" 00:08:36.921 }, 00:08:36.921 { 00:08:36.921 "method": "bdev_wait_for_examine" 00:08:36.921 } 00:08:36.921 ] 00:08:36.921 } 00:08:36.921 ] 00:08:36.921 } 00:08:36.921 [2024-11-28 11:39:07.006193] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.11.0-rc4 is used. There is no support for it in SPDK. Enabled only for validation. 
00:08:36.921 [2024-11-28 11:39:07.032901] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:37.180 [2024-11-28 11:39:07.077462] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:08:37.180 [2024-11-28 11:39:07.134483] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:08:37.180  [2024-11-28T11:39:07.568Z] Copying: 48/48 [kB] (average 46 MBps) 00:08:37.442 00:08:37.442 11:39:07 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@44 -- # diff -q /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:08:37.442 11:39:07 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@45 -- # clear_nvme Nvme0n1 '' 49152 00:08:37.442 11:39:07 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@10 -- # local bdev=Nvme0n1 00:08:37.442 11:39:07 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@11 -- # local nvme_ref= 00:08:37.442 11:39:07 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@12 -- # local size=49152 00:08:37.442 11:39:07 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@14 -- # local bs=1048576 00:08:37.442 11:39:07 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@15 -- # local count=1 00:08:37.442 11:39:07 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@18 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/dev/zero --bs=1048576 --ob=Nvme0n1 --count=1 --json /dev/fd/62 00:08:37.442 11:39:07 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@18 -- # gen_conf 00:08:37.442 11:39:07 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@31 -- # xtrace_disable 00:08:37.442 11:39:07 spdk_dd.spdk_dd_basic_rw.dd_rw -- common/autotest_common.sh@10 -- # set +x 00:08:37.442 { 00:08:37.442 "subsystems": [ 00:08:37.442 { 00:08:37.442 "subsystem": "bdev", 00:08:37.442 "config": [ 00:08:37.442 { 00:08:37.442 "params": { 00:08:37.442 "trtype": "pcie", 00:08:37.442 "traddr": "0000:00:10.0", 00:08:37.442 "name": "Nvme0" 00:08:37.442 }, 00:08:37.442 "method": "bdev_nvme_attach_controller" 00:08:37.442 }, 00:08:37.442 { 00:08:37.442 "method": "bdev_wait_for_examine" 00:08:37.442 } 00:08:37.442 ] 00:08:37.442 } 00:08:37.442 ] 00:08:37.442 } 00:08:37.442 [2024-11-28 11:39:07.488622] Starting SPDK v25.01-pre git sha1 35cd3e84d / DPDK 24.11.0-rc4 initialization... 00:08:37.442 [2024-11-28 11:39:07.488722] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid74068 ] 00:08:37.703 [2024-11-28 11:39:07.614681] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.11.0-rc4 is used. There is no support for it in SPDK. Enabled only for validation. 
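Each block-size/queue-depth combination in the dd_rw test above runs the same round trip: write dd.dump0 into the Nvme0n1 bdev, read the same number of blocks back into dd.dump1, diff the two dump files, then zero the first 1 MiB of the bdev (clear_nvme) before the next combination. The reported copy sizes follow directly from count x bs: 7 x 8192 B = 57,344 B for the 56 kB copies and 3 x 16384 B = 49,152 B for the 48 kB copies. A condensed sketch of one iteration follows, with paths and flags lifted from the trace and $CONF assumed to be the config file from the earlier sketch; the real loop lives in test/dd/basic_rw.sh and iterates over arrays of bs and qd values.

# One round trip at bs=8192, qd=64 (flags exactly as traced above); $CONF as in the earlier sketch.
DD=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd
TESTDIR=/home/vagrant/spdk_repo/spdk/test/dd

"$DD" --if="$TESTDIR/dd.dump0" --ob=Nvme0n1 --bs=8192 --qd=64 --json "$CONF"            # write dump0 into the bdev
"$DD" --ib=Nvme0n1 --of="$TESTDIR/dd.dump1" --bs=8192 --qd=64 --count=7 --json "$CONF"  # read 7 blocks back into dump1
diff -q "$TESTDIR/dd.dump0" "$TESTDIR/dd.dump1"                                          # verify the data survived the round trip
"$DD" --if=/dev/zero --ob=Nvme0n1 --bs=1048576 --count=1 --json "$CONF"                  # clear_nvme before the next bs/qd combination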
00:08:37.703 [2024-11-28 11:39:07.646759] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:37.703 [2024-11-28 11:39:07.699704] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:08:37.703 [2024-11-28 11:39:07.761412] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:08:37.963  [2024-11-28T11:39:08.089Z] Copying: 1024/1024 [kB] (average 500 MBps) 00:08:37.963 00:08:37.963 00:08:37.963 real 0m14.421s 00:08:37.963 user 0m10.311s 00:08:37.963 sys 0m5.742s 00:08:37.963 11:39:08 spdk_dd.spdk_dd_basic_rw.dd_rw -- common/autotest_common.sh@1130 -- # xtrace_disable 00:08:37.963 ************************************ 00:08:37.963 END TEST dd_rw 00:08:37.963 ************************************ 00:08:37.963 11:39:08 spdk_dd.spdk_dd_basic_rw.dd_rw -- common/autotest_common.sh@10 -- # set +x 00:08:38.224 11:39:08 spdk_dd.spdk_dd_basic_rw -- dd/basic_rw.sh@104 -- # run_test dd_rw_offset basic_offset 00:08:38.224 11:39:08 spdk_dd.spdk_dd_basic_rw -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:08:38.224 11:39:08 spdk_dd.spdk_dd_basic_rw -- common/autotest_common.sh@1111 -- # xtrace_disable 00:08:38.224 11:39:08 spdk_dd.spdk_dd_basic_rw -- common/autotest_common.sh@10 -- # set +x 00:08:38.224 ************************************ 00:08:38.224 START TEST dd_rw_offset 00:08:38.224 ************************************ 00:08:38.224 11:39:08 spdk_dd.spdk_dd_basic_rw.dd_rw_offset -- common/autotest_common.sh@1129 -- # basic_offset 00:08:38.224 11:39:08 spdk_dd.spdk_dd_basic_rw.dd_rw_offset -- dd/basic_rw.sh@52 -- # local count seek skip data data_check 00:08:38.224 11:39:08 spdk_dd.spdk_dd_basic_rw.dd_rw_offset -- dd/basic_rw.sh@54 -- # gen_bytes 4096 00:08:38.224 11:39:08 spdk_dd.spdk_dd_basic_rw.dd_rw_offset -- dd/common.sh@98 -- # xtrace_disable 00:08:38.224 11:39:08 spdk_dd.spdk_dd_basic_rw.dd_rw_offset -- common/autotest_common.sh@10 -- # set +x 00:08:38.224 11:39:08 spdk_dd.spdk_dd_basic_rw.dd_rw_offset -- dd/basic_rw.sh@55 -- # (( count = seek = skip = 1 )) 00:08:38.224 11:39:08 spdk_dd.spdk_dd_basic_rw.dd_rw_offset -- dd/basic_rw.sh@56 -- # 
data=4oiit6kjkbg9ftennvfih37j5mtpmvegj2vc64ej923g9f875x7369oj17jte9qh4een4srpiqdiandua3cc0dfaz5f614dp93ile6xv606prpgywi4m9y3rkczbpzs4qtgd8txj5jp9c5vg1n8goz6mrx6jdjvwq6pwhy26zet7gwrzxyxh7zcywh28fwnxr0v671kxbsxkgf82siolnp02067qjbxvdtphf44w5a75ppd3kpli7ea08vftvy2hu94msgsli989fokuodhenwdv1qnlb94ft6i922027tiac7uim8hi7oae57loap9zqpvngcsll17etw7v0y3z8lmeki0ta94wmouyn9yhii02ovr2bow26aojypna65ruti6osr6nt808vogzablitqth2c18ttj8vtwtn6371n6utt5sk4gwgog5mwzm4kemc6bp2xlpl5h6si5sd24qpogqv8icpntawcc2y81m7r17t12acwyi0vq7p0xjcyqnco7bqunvgxvhezugif8anliu4md9gt48pe58v5em4eaa82qite8icnxi4qaboyz8sukzi87d9mme3uk6078y8kfsi8ko55bb3usx9wh2ljac5voqnpdk6zb909ywpgu83aopm9ezr1ogmhzomro9qmksa2d9udoomenk3pupf0ic7avsj2xshzlzk8xgk29s2fmi5m6g87b687k5qg856w8ayg5dlgiwzbbkbv0welfhv4876tffemlsqf1y1v1w9dp0jw4qtp0amemgli6i6rarkjpohnxa37gmjehws2ka31kl6r3c37x22t6104p16vecvikvey91h53ijgysbihy1tn30z0plyozby1ep7rp4z7tszgvmmptjt4wydcuskfagy2cuwl8xspr35r27ce4i0enr6p2wg5e2f9gp7qvq7d4dq25of92rpoveqha5np1bik6ntfh9cvjszpbsn8sxt7xdtqzr0kzp3wmo5hseeawhdeyw54912n89jw8bppy465x0pj6xw16agpfw2r0jjmaqi7kp51l6itureopvvnmrw1rpgh3umdilgdvijn0kkbl7z3nyxktv1goy98mc0g827h8yfat2v4aqw4bhjnydb8hph8vj685lnn6o40hzo6icvdohqnhgbe6tsovein4uk1bagswn3hkxlr2l1akrvfjot1bzn6cu8fg81kv688m9awc4uxw1izwcdnus0kuggomx3zx3k0b24x6kbthphlbmd79uboibuxkbtzscra56ixjkx2ncs999hrfsu0krwzu4fwd5kuxt1b4yww9a82v8a2ie5w64qb4obfbv2ijitqxue1k6gcbqrnf4ujiddqv6qw8uh9nvi533bfq9w5qvnj1irabdes185u9vinda53ekk6nfm6ayfc25m3iq8xyw9kk4nzoaonb6tox43earn4tork7195nsjs9q7p6tefrmzgwgp4gfejwwa17nl26854qvey2nsntt0wzrc8ilu937sgugxb7w7vlm5y7j8drxhvya1zhj4wukpa90rnv4mks8ig2bewnpsw7fvzy84t8dmr8hfwzzgp94l6i5phd6fklco07gdp3po869t5yeuiljsvj90hk0urbnqm6nnn0kc9ggc98ndqngst5tkv4yljn5dgklniblwdtdvwu01276b97s0chwlt74117b0t1mgarjc8yj4ojw2bsxecg5e3hb6xfx94t1ji6s92etrj9ow9ja1ch3kn1lkct3yqnxke2l92ulcc3nhucf2mxsrfp5g6lu5x5316e6ohgmgj4hv8icg0bl1p5lpgqztqqso78e568g6jd8x7ez3a2lvlqup9sn2gat30veh68oirzv2cnedhl825imautn6ctht2nm8scjd883vrgn1d16wesh9p7n3m5taf2xhau4i10u7hxct5yaksrlc54ag7lxtwe0ydlyadlmkbgxpaj62vd20byuor8wggy5puz0nevabasy7uo9z9nnlsr96qfkq2r0eu1u6l7c63k31wwm20dmqw0xij9eqmvlv1ud764og4hsjn4rduq2ahzaqt9kj9zqqcyp5o97u99krvct9e8l7vqumcnkbevjfrer1ruilihre1usj0d14qlybio2554piz8oil9dh6ctvhojraq6xotoqcnjsnwhbufjeenbfcl2neubf93cjh5s3xnclj9shsf6iei5yptugls6c34grcs7zi0lz6rz8as6mwueorux6w9hmtr6a0jld3mnc4xey9kf1xqsd7h6mrud2djqp2fru7dazjxl1s5uo76m4jfnkgt8yif25p7k957pnjqyeh9g36yumfqwxw4gspal990wt1f5mce3v57pzo3ceymm763j773k42ncpd26xww76iic2quwr75elufdsh8p3ey5p946bw3293wenomlx8bn7pigo51s8816c24k5uf8ayo4q04m4s2b07gt5wtlul8r83xmvuv58ufemkjhhlpucexn8t5ac7pfq8dz44ntx0f28f56tyrew2yz2rhe84b5jnmjqba1xvq7jfboagepvnu5nxsb1avmextbv7f0km757czdg57mikuuifweo3dv6nsibatsewjbcvsufubq2yhrnkc5gas5296mk8roq8qq2f1zax8xd3r0waqemtxy23aic46u0n2jlhxppx1wpmu8lovzrnf6om35oegu8q4wbxc0w1lh7emttsq3sintrmlpr4cnylwl7ge83hco7t0obns4cbjjrpl9vmrfyxldcgxs020wjnkxcf51u4ckwqtacpvfdcl4mxxsreyf98nmbiaam5szwk27w7c4m1tbn9y4crvh2voif6mocyl5gu77i394gqlc50expdn53tkwu0l5yinyeskp98rt4giejc0kco9r923r2qnp0ou2d6c9d5uo6y3py9to8lts8w3rfve7kzsfnt48qdse6dgw6xyn3ut97pshgetrqh2idtnnvlcdzpwl603zalgavp0m6t626kvb7rfnkqn65gquenl8g2njcquljobtu37tgfyizqb68x7wirun7rjkahua4zqkar9qeooczw2x3i9hrc4n9o2zlk1ensjqi7w2p5oycb7gtdi739n6smsi9q31up289jkmah0qxpt1suxkjhxmmotr12pj9cp0aovjtk5fg9z9i2wm6q2p5bg29nsefdamjnucsiwl3uw148jm8u359jri644rc8xe7l576ixqso2p0e4v7a3kmxsv6vombdg6lmxi0hsvwv5r6we0wpfppgnk8odtk0sq0rtcvm4843ulujzz5k9m0k7q95963bs948qb0et3wenycglyyi656f0bxmydcxe23bttfwqga0id7vwpmsenvvhudvakw67r5qvslvsmdogjw7cwcxcfelfry2z9dyrlilyiffr10zly7urtyu2zxr4353mu0370ifjo5s4rs8eblr7iz7zzdhpctsy2iu8fao2wv0kf22vmf
reub8qu7sba3g0wk1k04067afvpr6x8659pioq9rtg83qv539rcilznnj51uvvhr4r0aiph6tmhi9bqckxci2h4rqmldwzcn0g45xyqz54n20x5sg5ufg7z5htj5ucoyqin5t9dhekodtw5lsi5ryyjf7z00ni5qbnm31z72klhijn9jhxlavf15tn8qhg1xtzexl3iae5ywhfqtky7k5h5vao33ght3yfndewjmok2kcsnfenmxv171pyqckodzbaymcb77sghjknm4bg8p54g6qrrwmm0610jpg40rwcocsytolpjtbqaxp300leqb8dlyyzhjw54ivhbwldasetyh0zii1fzo9ux5vxq0g01fepncacvrv8krcdjtvh9cpmlu0il20gnrx23arceyucbq9dbmf7n71w1538uifv3sj95ul32g3h8ir3cf40fqnban9lxippkfv1g9juvpy9mduprvjvkqxw3sehg2c3fwym7xtx75cw6ls66rkvhmhwke17sfl8osoooc61xh1dljc6pbr0ok10 00:08:38.224 11:39:08 spdk_dd.spdk_dd_basic_rw.dd_rw_offset -- dd/basic_rw.sh@59 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --ob=Nvme0n1 --seek=1 --json /dev/fd/62 00:08:38.224 11:39:08 spdk_dd.spdk_dd_basic_rw.dd_rw_offset -- dd/basic_rw.sh@59 -- # gen_conf 00:08:38.224 11:39:08 spdk_dd.spdk_dd_basic_rw.dd_rw_offset -- dd/common.sh@31 -- # xtrace_disable 00:08:38.224 11:39:08 spdk_dd.spdk_dd_basic_rw.dd_rw_offset -- common/autotest_common.sh@10 -- # set +x 00:08:38.224 { 00:08:38.225 "subsystems": [ 00:08:38.225 { 00:08:38.225 "subsystem": "bdev", 00:08:38.225 "config": [ 00:08:38.225 { 00:08:38.225 "params": { 00:08:38.225 "trtype": "pcie", 00:08:38.225 "traddr": "0000:00:10.0", 00:08:38.225 "name": "Nvme0" 00:08:38.225 }, 00:08:38.225 "method": "bdev_nvme_attach_controller" 00:08:38.225 }, 00:08:38.225 { 00:08:38.225 "method": "bdev_wait_for_examine" 00:08:38.225 } 00:08:38.225 ] 00:08:38.225 } 00:08:38.225 ] 00:08:38.225 } 00:08:38.225 [2024-11-28 11:39:08.234463] Starting SPDK v25.01-pre git sha1 35cd3e84d / DPDK 24.11.0-rc4 initialization... 00:08:38.225 [2024-11-28 11:39:08.234605] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid74104 ] 00:08:38.484 [2024-11-28 11:39:08.361173] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.11.0-rc4 is used. There is no support for it in SPDK. Enabled only for validation. 
00:08:38.484 [2024-11-28 11:39:08.385441] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:38.484 [2024-11-28 11:39:08.437771] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:08:38.484 [2024-11-28 11:39:08.500093] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:08:38.744  [2024-11-28T11:39:08.870Z] Copying: 4096/4096 [B] (average 4000 kBps) 00:08:38.744 00:08:38.744 11:39:08 spdk_dd.spdk_dd_basic_rw.dd_rw_offset -- dd/basic_rw.sh@65 -- # gen_conf 00:08:38.744 11:39:08 spdk_dd.spdk_dd_basic_rw.dd_rw_offset -- dd/basic_rw.sh@65 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=Nvme0n1 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --skip=1 --count=1 --json /dev/fd/62 00:08:38.744 11:39:08 spdk_dd.spdk_dd_basic_rw.dd_rw_offset -- dd/common.sh@31 -- # xtrace_disable 00:08:38.744 11:39:08 spdk_dd.spdk_dd_basic_rw.dd_rw_offset -- common/autotest_common.sh@10 -- # set +x 00:08:38.744 { 00:08:38.744 "subsystems": [ 00:08:38.744 { 00:08:38.744 "subsystem": "bdev", 00:08:38.744 "config": [ 00:08:38.744 { 00:08:38.744 "params": { 00:08:38.744 "trtype": "pcie", 00:08:38.744 "traddr": "0000:00:10.0", 00:08:38.744 "name": "Nvme0" 00:08:38.744 }, 00:08:38.744 "method": "bdev_nvme_attach_controller" 00:08:38.744 }, 00:08:38.744 { 00:08:38.744 "method": "bdev_wait_for_examine" 00:08:38.744 } 00:08:38.744 ] 00:08:38.744 } 00:08:38.744 ] 00:08:38.744 } 00:08:38.744 [2024-11-28 11:39:08.855580] Starting SPDK v25.01-pre git sha1 35cd3e84d / DPDK 24.11.0-rc4 initialization... 00:08:38.744 [2024-11-28 11:39:08.855728] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid74123 ] 00:08:39.004 [2024-11-28 11:39:08.982153] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.11.0-rc4 is used. There is no support for it in SPDK. Enabled only for validation. 
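The dd_rw_offset pass running here exercises seek/skip handling: 4096 bytes of generated data are written one block into the bdev (--seek=1), read back from the same offset (--skip=1 --count=1), and the read-back bytes are compared against the generated string. A sketch of that round trip is below, again assuming $CONF from the earlier sketch; the read redirection is simplified here, and the exact plumbing in test/dd/basic_rw.sh may differ.

# Sketch of the offset round trip; dd.dump0 is assumed to start with the
# 4096-byte generated payload held in $data (as printed in the trace above).
DD=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd
TESTDIR=/home/vagrant/spdk_repo/spdk/test/dd

"$DD" --if="$TESTDIR/dd.dump0" --ob=Nvme0n1 --seek=1 --json "$CONF"            # write at a one-block offset
"$DD" --ib=Nvme0n1 --of="$TESTDIR/dd.dump1" --skip=1 --count=1 --json "$CONF"  # read the same block back
read -rn4096 data_check < "$TESTDIR/dd.dump1"                                  # first 4096 bytes of the read-back
[[ $data_check == "$data" ]]                                                    # must match the generated payload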
00:08:39.004 [2024-11-28 11:39:09.005775] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:39.004 [2024-11-28 11:39:09.042277] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:08:39.004 [2024-11-28 11:39:09.097539] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:08:39.263  [2024-11-28T11:39:09.649Z] Copying: 4096/4096 [B] (average 4000 kBps) 00:08:39.523 00:08:39.523 11:39:09 spdk_dd.spdk_dd_basic_rw.dd_rw_offset -- dd/basic_rw.sh@71 -- # read -rn4096 data_check 00:08:39.523 ************************************ 00:08:39.523 END TEST dd_rw_offset 00:08:39.523 ************************************ 00:08:39.523 11:39:09 spdk_dd.spdk_dd_basic_rw.dd_rw_offset -- dd/basic_rw.sh@72 -- # [[ 4oiit6kjkbg9ftennvfih37j5mtpmvegj2vc64ej923g9f875x7369oj17jte9qh4een4srpiqdiandua3cc0dfaz5f614dp93ile6xv606prpgywi4m9y3rkczbpzs4qtgd8txj5jp9c5vg1n8goz6mrx6jdjvwq6pwhy26zet7gwrzxyxh7zcywh28fwnxr0v671kxbsxkgf82siolnp02067qjbxvdtphf44w5a75ppd3kpli7ea08vftvy2hu94msgsli989fokuodhenwdv1qnlb94ft6i922027tiac7uim8hi7oae57loap9zqpvngcsll17etw7v0y3z8lmeki0ta94wmouyn9yhii02ovr2bow26aojypna65ruti6osr6nt808vogzablitqth2c18ttj8vtwtn6371n6utt5sk4gwgog5mwzm4kemc6bp2xlpl5h6si5sd24qpogqv8icpntawcc2y81m7r17t12acwyi0vq7p0xjcyqnco7bqunvgxvhezugif8anliu4md9gt48pe58v5em4eaa82qite8icnxi4qaboyz8sukzi87d9mme3uk6078y8kfsi8ko55bb3usx9wh2ljac5voqnpdk6zb909ywpgu83aopm9ezr1ogmhzomro9qmksa2d9udoomenk3pupf0ic7avsj2xshzlzk8xgk29s2fmi5m6g87b687k5qg856w8ayg5dlgiwzbbkbv0welfhv4876tffemlsqf1y1v1w9dp0jw4qtp0amemgli6i6rarkjpohnxa37gmjehws2ka31kl6r3c37x22t6104p16vecvikvey91h53ijgysbihy1tn30z0plyozby1ep7rp4z7tszgvmmptjt4wydcuskfagy2cuwl8xspr35r27ce4i0enr6p2wg5e2f9gp7qvq7d4dq25of92rpoveqha5np1bik6ntfh9cvjszpbsn8sxt7xdtqzr0kzp3wmo5hseeawhdeyw54912n89jw8bppy465x0pj6xw16agpfw2r0jjmaqi7kp51l6itureopvvnmrw1rpgh3umdilgdvijn0kkbl7z3nyxktv1goy98mc0g827h8yfat2v4aqw4bhjnydb8hph8vj685lnn6o40hzo6icvdohqnhgbe6tsovein4uk1bagswn3hkxlr2l1akrvfjot1bzn6cu8fg81kv688m9awc4uxw1izwcdnus0kuggomx3zx3k0b24x6kbthphlbmd79uboibuxkbtzscra56ixjkx2ncs999hrfsu0krwzu4fwd5kuxt1b4yww9a82v8a2ie5w64qb4obfbv2ijitqxue1k6gcbqrnf4ujiddqv6qw8uh9nvi533bfq9w5qvnj1irabdes185u9vinda53ekk6nfm6ayfc25m3iq8xyw9kk4nzoaonb6tox43earn4tork7195nsjs9q7p6tefrmzgwgp4gfejwwa17nl26854qvey2nsntt0wzrc8ilu937sgugxb7w7vlm5y7j8drxhvya1zhj4wukpa90rnv4mks8ig2bewnpsw7fvzy84t8dmr8hfwzzgp94l6i5phd6fklco07gdp3po869t5yeuiljsvj90hk0urbnqm6nnn0kc9ggc98ndqngst5tkv4yljn5dgklniblwdtdvwu01276b97s0chwlt74117b0t1mgarjc8yj4ojw2bsxecg5e3hb6xfx94t1ji6s92etrj9ow9ja1ch3kn1lkct3yqnxke2l92ulcc3nhucf2mxsrfp5g6lu5x5316e6ohgmgj4hv8icg0bl1p5lpgqztqqso78e568g6jd8x7ez3a2lvlqup9sn2gat30veh68oirzv2cnedhl825imautn6ctht2nm8scjd883vrgn1d16wesh9p7n3m5taf2xhau4i10u7hxct5yaksrlc54ag7lxtwe0ydlyadlmkbgxpaj62vd20byuor8wggy5puz0nevabasy7uo9z9nnlsr96qfkq2r0eu1u6l7c63k31wwm20dmqw0xij9eqmvlv1ud764og4hsjn4rduq2ahzaqt9kj9zqqcyp5o97u99krvct9e8l7vqumcnkbevjfrer1ruilihre1usj0d14qlybio2554piz8oil9dh6ctvhojraq6xotoqcnjsnwhbufjeenbfcl2neubf93cjh5s3xnclj9shsf6iei5yptugls6c34grcs7zi0lz6rz8as6mwueorux6w9hmtr6a0jld3mnc4xey9kf1xqsd7h6mrud2djqp2fru7dazjxl1s5uo76m4jfnkgt8yif25p7k957pnjqyeh9g36yumfqwxw4gspal990wt1f5mce3v57pzo3ceymm763j773k42ncpd26xww76iic2quwr75elufdsh8p3ey5p946bw3293wenomlx8bn7pigo51s8816c24k5uf8ayo4q04m4s2b07gt5wtlul8r83xmvuv58ufemkjhhlpucexn8t5ac7pfq8dz44ntx0f28f56tyrew2yz2rhe84b5jnmjqba1xvq7jfboagepvnu5nxsb1avmextbv7f0km757czdg57mikuuifweo3dv6nsibatsewjbcvsufubq2yhrnkc5gas5296mk8roq8qq2f1zax8xd3r0waqemtxy23aic46u0n2jlhxppx1wpmu8lovzrnf6om35oegu8q4wbxc0w1lh7emttsq3sintrmlpr4cnylw
l7ge83hco7t0obns4cbjjrpl9vmrfyxldcgxs020wjnkxcf51u4ckwqtacpvfdcl4mxxsreyf98nmbiaam5szwk27w7c4m1tbn9y4crvh2voif6mocyl5gu77i394gqlc50expdn53tkwu0l5yinyeskp98rt4giejc0kco9r923r2qnp0ou2d6c9d5uo6y3py9to8lts8w3rfve7kzsfnt48qdse6dgw6xyn3ut97pshgetrqh2idtnnvlcdzpwl603zalgavp0m6t626kvb7rfnkqn65gquenl8g2njcquljobtu37tgfyizqb68x7wirun7rjkahua4zqkar9qeooczw2x3i9hrc4n9o2zlk1ensjqi7w2p5oycb7gtdi739n6smsi9q31up289jkmah0qxpt1suxkjhxmmotr12pj9cp0aovjtk5fg9z9i2wm6q2p5bg29nsefdamjnucsiwl3uw148jm8u359jri644rc8xe7l576ixqso2p0e4v7a3kmxsv6vombdg6lmxi0hsvwv5r6we0wpfppgnk8odtk0sq0rtcvm4843ulujzz5k9m0k7q95963bs948qb0et3wenycglyyi656f0bxmydcxe23bttfwqga0id7vwpmsenvvhudvakw67r5qvslvsmdogjw7cwcxcfelfry2z9dyrlilyiffr10zly7urtyu2zxr4353mu0370ifjo5s4rs8eblr7iz7zzdhpctsy2iu8fao2wv0kf22vmfreub8qu7sba3g0wk1k04067afvpr6x8659pioq9rtg83qv539rcilznnj51uvvhr4r0aiph6tmhi9bqckxci2h4rqmldwzcn0g45xyqz54n20x5sg5ufg7z5htj5ucoyqin5t9dhekodtw5lsi5ryyjf7z00ni5qbnm31z72klhijn9jhxlavf15tn8qhg1xtzexl3iae5ywhfqtky7k5h5vao33ght3yfndewjmok2kcsnfenmxv171pyqckodzbaymcb77sghjknm4bg8p54g6qrrwmm0610jpg40rwcocsytolpjtbqaxp300leqb8dlyyzhjw54ivhbwldasetyh0zii1fzo9ux5vxq0g01fepncacvrv8krcdjtvh9cpmlu0il20gnrx23arceyucbq9dbmf7n71w1538uifv3sj95ul32g3h8ir3cf40fqnban9lxippkfv1g9juvpy9mduprvjvkqxw3sehg2c3fwym7xtx75cw6ls66rkvhmhwke17sfl8osoooc61xh1dljc6pbr0ok10 == \4\o\i\i\t\6\k\j\k\b\g\9\f\t\e\n\n\v\f\i\h\3\7\j\5\m\t\p\m\v\e\g\j\2\v\c\6\4\e\j\9\2\3\g\9\f\8\7\5\x\7\3\6\9\o\j\1\7\j\t\e\9\q\h\4\e\e\n\4\s\r\p\i\q\d\i\a\n\d\u\a\3\c\c\0\d\f\a\z\5\f\6\1\4\d\p\9\3\i\l\e\6\x\v\6\0\6\p\r\p\g\y\w\i\4\m\9\y\3\r\k\c\z\b\p\z\s\4\q\t\g\d\8\t\x\j\5\j\p\9\c\5\v\g\1\n\8\g\o\z\6\m\r\x\6\j\d\j\v\w\q\6\p\w\h\y\2\6\z\e\t\7\g\w\r\z\x\y\x\h\7\z\c\y\w\h\2\8\f\w\n\x\r\0\v\6\7\1\k\x\b\s\x\k\g\f\8\2\s\i\o\l\n\p\0\2\0\6\7\q\j\b\x\v\d\t\p\h\f\4\4\w\5\a\7\5\p\p\d\3\k\p\l\i\7\e\a\0\8\v\f\t\v\y\2\h\u\9\4\m\s\g\s\l\i\9\8\9\f\o\k\u\o\d\h\e\n\w\d\v\1\q\n\l\b\9\4\f\t\6\i\9\2\2\0\2\7\t\i\a\c\7\u\i\m\8\h\i\7\o\a\e\5\7\l\o\a\p\9\z\q\p\v\n\g\c\s\l\l\1\7\e\t\w\7\v\0\y\3\z\8\l\m\e\k\i\0\t\a\9\4\w\m\o\u\y\n\9\y\h\i\i\0\2\o\v\r\2\b\o\w\2\6\a\o\j\y\p\n\a\6\5\r\u\t\i\6\o\s\r\6\n\t\8\0\8\v\o\g\z\a\b\l\i\t\q\t\h\2\c\1\8\t\t\j\8\v\t\w\t\n\6\3\7\1\n\6\u\t\t\5\s\k\4\g\w\g\o\g\5\m\w\z\m\4\k\e\m\c\6\b\p\2\x\l\p\l\5\h\6\s\i\5\s\d\2\4\q\p\o\g\q\v\8\i\c\p\n\t\a\w\c\c\2\y\8\1\m\7\r\1\7\t\1\2\a\c\w\y\i\0\v\q\7\p\0\x\j\c\y\q\n\c\o\7\b\q\u\n\v\g\x\v\h\e\z\u\g\i\f\8\a\n\l\i\u\4\m\d\9\g\t\4\8\p\e\5\8\v\5\e\m\4\e\a\a\8\2\q\i\t\e\8\i\c\n\x\i\4\q\a\b\o\y\z\8\s\u\k\z\i\8\7\d\9\m\m\e\3\u\k\6\0\7\8\y\8\k\f\s\i\8\k\o\5\5\b\b\3\u\s\x\9\w\h\2\l\j\a\c\5\v\o\q\n\p\d\k\6\z\b\9\0\9\y\w\p\g\u\8\3\a\o\p\m\9\e\z\r\1\o\g\m\h\z\o\m\r\o\9\q\m\k\s\a\2\d\9\u\d\o\o\m\e\n\k\3\p\u\p\f\0\i\c\7\a\v\s\j\2\x\s\h\z\l\z\k\8\x\g\k\2\9\s\2\f\m\i\5\m\6\g\8\7\b\6\8\7\k\5\q\g\8\5\6\w\8\a\y\g\5\d\l\g\i\w\z\b\b\k\b\v\0\w\e\l\f\h\v\4\8\7\6\t\f\f\e\m\l\s\q\f\1\y\1\v\1\w\9\d\p\0\j\w\4\q\t\p\0\a\m\e\m\g\l\i\6\i\6\r\a\r\k\j\p\o\h\n\x\a\3\7\g\m\j\e\h\w\s\2\k\a\3\1\k\l\6\r\3\c\3\7\x\2\2\t\6\1\0\4\p\1\6\v\e\c\v\i\k\v\e\y\9\1\h\5\3\i\j\g\y\s\b\i\h\y\1\t\n\3\0\z\0\p\l\y\o\z\b\y\1\e\p\7\r\p\4\z\7\t\s\z\g\v\m\m\p\t\j\t\4\w\y\d\c\u\s\k\f\a\g\y\2\c\u\w\l\8\x\s\p\r\3\5\r\2\7\c\e\4\i\0\e\n\r\6\p\2\w\g\5\e\2\f\9\g\p\7\q\v\q\7\d\4\d\q\2\5\o\f\9\2\r\p\o\v\e\q\h\a\5\n\p\1\b\i\k\6\n\t\f\h\9\c\v\j\s\z\p\b\s\n\8\s\x\t\7\x\d\t\q\z\r\0\k\z\p\3\w\m\o\5\h\s\e\e\a\w\h\d\e\y\w\5\4\9\1\2\n\8\9\j\w\8\b\p\p\y\4\6\5\x\0\p\j\6\x\w\1\6\a\g\p\f\w\2\r\0\j\j\m\a\q\i\7\k\p\5\1\l\6\i\t\u\r\e\o\p\v\v\n\m\r\w\1\r\p\g\h\3\u\m\d\i\l\g\d\v\i\j\n\0\k\k\b\l\7\z\3\n\y\x\k\t\v\1\g\o\y\9\8\m\c\0\g\8\2\7\h\
8\y\f\a\t\2\v\4\a\q\w\4\b\h\j\n\y\d\b\8\h\p\h\8\v\j\6\8\5\l\n\n\6\o\4\0\h\z\o\6\i\c\v\d\o\h\q\n\h\g\b\e\6\t\s\o\v\e\i\n\4\u\k\1\b\a\g\s\w\n\3\h\k\x\l\r\2\l\1\a\k\r\v\f\j\o\t\1\b\z\n\6\c\u\8\f\g\8\1\k\v\6\8\8\m\9\a\w\c\4\u\x\w\1\i\z\w\c\d\n\u\s\0\k\u\g\g\o\m\x\3\z\x\3\k\0\b\2\4\x\6\k\b\t\h\p\h\l\b\m\d\7\9\u\b\o\i\b\u\x\k\b\t\z\s\c\r\a\5\6\i\x\j\k\x\2\n\c\s\9\9\9\h\r\f\s\u\0\k\r\w\z\u\4\f\w\d\5\k\u\x\t\1\b\4\y\w\w\9\a\8\2\v\8\a\2\i\e\5\w\6\4\q\b\4\o\b\f\b\v\2\i\j\i\t\q\x\u\e\1\k\6\g\c\b\q\r\n\f\4\u\j\i\d\d\q\v\6\q\w\8\u\h\9\n\v\i\5\3\3\b\f\q\9\w\5\q\v\n\j\1\i\r\a\b\d\e\s\1\8\5\u\9\v\i\n\d\a\5\3\e\k\k\6\n\f\m\6\a\y\f\c\2\5\m\3\i\q\8\x\y\w\9\k\k\4\n\z\o\a\o\n\b\6\t\o\x\4\3\e\a\r\n\4\t\o\r\k\7\1\9\5\n\s\j\s\9\q\7\p\6\t\e\f\r\m\z\g\w\g\p\4\g\f\e\j\w\w\a\1\7\n\l\2\6\8\5\4\q\v\e\y\2\n\s\n\t\t\0\w\z\r\c\8\i\l\u\9\3\7\s\g\u\g\x\b\7\w\7\v\l\m\5\y\7\j\8\d\r\x\h\v\y\a\1\z\h\j\4\w\u\k\p\a\9\0\r\n\v\4\m\k\s\8\i\g\2\b\e\w\n\p\s\w\7\f\v\z\y\8\4\t\8\d\m\r\8\h\f\w\z\z\g\p\9\4\l\6\i\5\p\h\d\6\f\k\l\c\o\0\7\g\d\p\3\p\o\8\6\9\t\5\y\e\u\i\l\j\s\v\j\9\0\h\k\0\u\r\b\n\q\m\6\n\n\n\0\k\c\9\g\g\c\9\8\n\d\q\n\g\s\t\5\t\k\v\4\y\l\j\n\5\d\g\k\l\n\i\b\l\w\d\t\d\v\w\u\0\1\2\7\6\b\9\7\s\0\c\h\w\l\t\7\4\1\1\7\b\0\t\1\m\g\a\r\j\c\8\y\j\4\o\j\w\2\b\s\x\e\c\g\5\e\3\h\b\6\x\f\x\9\4\t\1\j\i\6\s\9\2\e\t\r\j\9\o\w\9\j\a\1\c\h\3\k\n\1\l\k\c\t\3\y\q\n\x\k\e\2\l\9\2\u\l\c\c\3\n\h\u\c\f\2\m\x\s\r\f\p\5\g\6\l\u\5\x\5\3\1\6\e\6\o\h\g\m\g\j\4\h\v\8\i\c\g\0\b\l\1\p\5\l\p\g\q\z\t\q\q\s\o\7\8\e\5\6\8\g\6\j\d\8\x\7\e\z\3\a\2\l\v\l\q\u\p\9\s\n\2\g\a\t\3\0\v\e\h\6\8\o\i\r\z\v\2\c\n\e\d\h\l\8\2\5\i\m\a\u\t\n\6\c\t\h\t\2\n\m\8\s\c\j\d\8\8\3\v\r\g\n\1\d\1\6\w\e\s\h\9\p\7\n\3\m\5\t\a\f\2\x\h\a\u\4\i\1\0\u\7\h\x\c\t\5\y\a\k\s\r\l\c\5\4\a\g\7\l\x\t\w\e\0\y\d\l\y\a\d\l\m\k\b\g\x\p\a\j\6\2\v\d\2\0\b\y\u\o\r\8\w\g\g\y\5\p\u\z\0\n\e\v\a\b\a\s\y\7\u\o\9\z\9\n\n\l\s\r\9\6\q\f\k\q\2\r\0\e\u\1\u\6\l\7\c\6\3\k\3\1\w\w\m\2\0\d\m\q\w\0\x\i\j\9\e\q\m\v\l\v\1\u\d\7\6\4\o\g\4\h\s\j\n\4\r\d\u\q\2\a\h\z\a\q\t\9\k\j\9\z\q\q\c\y\p\5\o\9\7\u\9\9\k\r\v\c\t\9\e\8\l\7\v\q\u\m\c\n\k\b\e\v\j\f\r\e\r\1\r\u\i\l\i\h\r\e\1\u\s\j\0\d\1\4\q\l\y\b\i\o\2\5\5\4\p\i\z\8\o\i\l\9\d\h\6\c\t\v\h\o\j\r\a\q\6\x\o\t\o\q\c\n\j\s\n\w\h\b\u\f\j\e\e\n\b\f\c\l\2\n\e\u\b\f\9\3\c\j\h\5\s\3\x\n\c\l\j\9\s\h\s\f\6\i\e\i\5\y\p\t\u\g\l\s\6\c\3\4\g\r\c\s\7\z\i\0\l\z\6\r\z\8\a\s\6\m\w\u\e\o\r\u\x\6\w\9\h\m\t\r\6\a\0\j\l\d\3\m\n\c\4\x\e\y\9\k\f\1\x\q\s\d\7\h\6\m\r\u\d\2\d\j\q\p\2\f\r\u\7\d\a\z\j\x\l\1\s\5\u\o\7\6\m\4\j\f\n\k\g\t\8\y\i\f\2\5\p\7\k\9\5\7\p\n\j\q\y\e\h\9\g\3\6\y\u\m\f\q\w\x\w\4\g\s\p\a\l\9\9\0\w\t\1\f\5\m\c\e\3\v\5\7\p\z\o\3\c\e\y\m\m\7\6\3\j\7\7\3\k\4\2\n\c\p\d\2\6\x\w\w\7\6\i\i\c\2\q\u\w\r\7\5\e\l\u\f\d\s\h\8\p\3\e\y\5\p\9\4\6\b\w\3\2\9\3\w\e\n\o\m\l\x\8\b\n\7\p\i\g\o\5\1\s\8\8\1\6\c\2\4\k\5\u\f\8\a\y\o\4\q\0\4\m\4\s\2\b\0\7\g\t\5\w\t\l\u\l\8\r\8\3\x\m\v\u\v\5\8\u\f\e\m\k\j\h\h\l\p\u\c\e\x\n\8\t\5\a\c\7\p\f\q\8\d\z\4\4\n\t\x\0\f\2\8\f\5\6\t\y\r\e\w\2\y\z\2\r\h\e\8\4\b\5\j\n\m\j\q\b\a\1\x\v\q\7\j\f\b\o\a\g\e\p\v\n\u\5\n\x\s\b\1\a\v\m\e\x\t\b\v\7\f\0\k\m\7\5\7\c\z\d\g\5\7\m\i\k\u\u\i\f\w\e\o\3\d\v\6\n\s\i\b\a\t\s\e\w\j\b\c\v\s\u\f\u\b\q\2\y\h\r\n\k\c\5\g\a\s\5\2\9\6\m\k\8\r\o\q\8\q\q\2\f\1\z\a\x\8\x\d\3\r\0\w\a\q\e\m\t\x\y\2\3\a\i\c\4\6\u\0\n\2\j\l\h\x\p\p\x\1\w\p\m\u\8\l\o\v\z\r\n\f\6\o\m\3\5\o\e\g\u\8\q\4\w\b\x\c\0\w\1\l\h\7\e\m\t\t\s\q\3\s\i\n\t\r\m\l\p\r\4\c\n\y\l\w\l\7\g\e\8\3\h\c\o\7\t\0\o\b\n\s\4\c\b\j\j\r\p\l\9\v\m\r\f\y\x\l\d\c\g\x\s\0\2\0\w\j\n\k\x\c\f\5\1\u\4\c\k\w\q\t\a\c\p\v\f\d\c\l\4\m\x\x\s\r\e\y\f\9\8\n\m\b\i\a\a\m\5\s\z\w\k\2\7\w\7\c\4\m\1\t\b\n\9\y\4\c\r\v\h\2\v\o\i\f\6\m\o
\c\y\l\5\g\u\7\7\i\3\9\4\g\q\l\c\5\0\e\x\p\d\n\5\3\t\k\w\u\0\l\5\y\i\n\y\e\s\k\p\9\8\r\t\4\g\i\e\j\c\0\k\c\o\9\r\9\2\3\r\2\q\n\p\0\o\u\2\d\6\c\9\d\5\u\o\6\y\3\p\y\9\t\o\8\l\t\s\8\w\3\r\f\v\e\7\k\z\s\f\n\t\4\8\q\d\s\e\6\d\g\w\6\x\y\n\3\u\t\9\7\p\s\h\g\e\t\r\q\h\2\i\d\t\n\n\v\l\c\d\z\p\w\l\6\0\3\z\a\l\g\a\v\p\0\m\6\t\6\2\6\k\v\b\7\r\f\n\k\q\n\6\5\g\q\u\e\n\l\8\g\2\n\j\c\q\u\l\j\o\b\t\u\3\7\t\g\f\y\i\z\q\b\6\8\x\7\w\i\r\u\n\7\r\j\k\a\h\u\a\4\z\q\k\a\r\9\q\e\o\o\c\z\w\2\x\3\i\9\h\r\c\4\n\9\o\2\z\l\k\1\e\n\s\j\q\i\7\w\2\p\5\o\y\c\b\7\g\t\d\i\7\3\9\n\6\s\m\s\i\9\q\3\1\u\p\2\8\9\j\k\m\a\h\0\q\x\p\t\1\s\u\x\k\j\h\x\m\m\o\t\r\1\2\p\j\9\c\p\0\a\o\v\j\t\k\5\f\g\9\z\9\i\2\w\m\6\q\2\p\5\b\g\2\9\n\s\e\f\d\a\m\j\n\u\c\s\i\w\l\3\u\w\1\4\8\j\m\8\u\3\5\9\j\r\i\6\4\4\r\c\8\x\e\7\l\5\7\6\i\x\q\s\o\2\p\0\e\4\v\7\a\3\k\m\x\s\v\6\v\o\m\b\d\g\6\l\m\x\i\0\h\s\v\w\v\5\r\6\w\e\0\w\p\f\p\p\g\n\k\8\o\d\t\k\0\s\q\0\r\t\c\v\m\4\8\4\3\u\l\u\j\z\z\5\k\9\m\0\k\7\q\9\5\9\6\3\b\s\9\4\8\q\b\0\e\t\3\w\e\n\y\c\g\l\y\y\i\6\5\6\f\0\b\x\m\y\d\c\x\e\2\3\b\t\t\f\w\q\g\a\0\i\d\7\v\w\p\m\s\e\n\v\v\h\u\d\v\a\k\w\6\7\r\5\q\v\s\l\v\s\m\d\o\g\j\w\7\c\w\c\x\c\f\e\l\f\r\y\2\z\9\d\y\r\l\i\l\y\i\f\f\r\1\0\z\l\y\7\u\r\t\y\u\2\z\x\r\4\3\5\3\m\u\0\3\7\0\i\f\j\o\5\s\4\r\s\8\e\b\l\r\7\i\z\7\z\z\d\h\p\c\t\s\y\2\i\u\8\f\a\o\2\w\v\0\k\f\2\2\v\m\f\r\e\u\b\8\q\u\7\s\b\a\3\g\0\w\k\1\k\0\4\0\6\7\a\f\v\p\r\6\x\8\6\5\9\p\i\o\q\9\r\t\g\8\3\q\v\5\3\9\r\c\i\l\z\n\n\j\5\1\u\v\v\h\r\4\r\0\a\i\p\h\6\t\m\h\i\9\b\q\c\k\x\c\i\2\h\4\r\q\m\l\d\w\z\c\n\0\g\4\5\x\y\q\z\5\4\n\2\0\x\5\s\g\5\u\f\g\7\z\5\h\t\j\5\u\c\o\y\q\i\n\5\t\9\d\h\e\k\o\d\t\w\5\l\s\i\5\r\y\y\j\f\7\z\0\0\n\i\5\q\b\n\m\3\1\z\7\2\k\l\h\i\j\n\9\j\h\x\l\a\v\f\1\5\t\n\8\q\h\g\1\x\t\z\e\x\l\3\i\a\e\5\y\w\h\f\q\t\k\y\7\k\5\h\5\v\a\o\3\3\g\h\t\3\y\f\n\d\e\w\j\m\o\k\2\k\c\s\n\f\e\n\m\x\v\1\7\1\p\y\q\c\k\o\d\z\b\a\y\m\c\b\7\7\s\g\h\j\k\n\m\4\b\g\8\p\5\4\g\6\q\r\r\w\m\m\0\6\1\0\j\p\g\4\0\r\w\c\o\c\s\y\t\o\l\p\j\t\b\q\a\x\p\3\0\0\l\e\q\b\8\d\l\y\y\z\h\j\w\5\4\i\v\h\b\w\l\d\a\s\e\t\y\h\0\z\i\i\1\f\z\o\9\u\x\5\v\x\q\0\g\0\1\f\e\p\n\c\a\c\v\r\v\8\k\r\c\d\j\t\v\h\9\c\p\m\l\u\0\i\l\2\0\g\n\r\x\2\3\a\r\c\e\y\u\c\b\q\9\d\b\m\f\7\n\7\1\w\1\5\3\8\u\i\f\v\3\s\j\9\5\u\l\3\2\g\3\h\8\i\r\3\c\f\4\0\f\q\n\b\a\n\9\l\x\i\p\p\k\f\v\1\g\9\j\u\v\p\y\9\m\d\u\p\r\v\j\v\k\q\x\w\3\s\e\h\g\2\c\3\f\w\y\m\7\x\t\x\7\5\c\w\6\l\s\6\6\r\k\v\h\m\h\w\k\e\1\7\s\f\l\8\o\s\o\o\o\c\6\1\x\h\1\d\l\j\c\6\p\b\r\0\o\k\1\0 ]] 00:08:39.523 00:08:39.523 real 0m1.281s 00:08:39.523 user 0m0.852s 00:08:39.523 sys 0m0.625s 00:08:39.523 11:39:09 spdk_dd.spdk_dd_basic_rw.dd_rw_offset -- common/autotest_common.sh@1130 -- # xtrace_disable 00:08:39.523 11:39:09 spdk_dd.spdk_dd_basic_rw.dd_rw_offset -- common/autotest_common.sh@10 -- # set +x 00:08:39.523 11:39:09 spdk_dd.spdk_dd_basic_rw -- dd/basic_rw.sh@1 -- # cleanup 00:08:39.523 11:39:09 spdk_dd.spdk_dd_basic_rw -- dd/basic_rw.sh@76 -- # clear_nvme Nvme0n1 00:08:39.523 11:39:09 spdk_dd.spdk_dd_basic_rw -- dd/common.sh@10 -- # local bdev=Nvme0n1 00:08:39.523 11:39:09 spdk_dd.spdk_dd_basic_rw -- dd/common.sh@11 -- # local nvme_ref= 00:08:39.523 11:39:09 spdk_dd.spdk_dd_basic_rw -- dd/common.sh@12 -- # local size=0xffff 00:08:39.524 11:39:09 spdk_dd.spdk_dd_basic_rw -- dd/common.sh@14 -- # local bs=1048576 00:08:39.524 11:39:09 spdk_dd.spdk_dd_basic_rw -- dd/common.sh@15 -- # local count=1 00:08:39.524 11:39:09 spdk_dd.spdk_dd_basic_rw -- dd/common.sh@18 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/dev/zero --bs=1048576 --ob=Nvme0n1 --count=1 --json /dev/fd/62 00:08:39.524 11:39:09 
spdk_dd.spdk_dd_basic_rw -- dd/common.sh@18 -- # gen_conf 00:08:39.524 11:39:09 spdk_dd.spdk_dd_basic_rw -- dd/common.sh@31 -- # xtrace_disable 00:08:39.524 11:39:09 spdk_dd.spdk_dd_basic_rw -- common/autotest_common.sh@10 -- # set +x 00:08:39.524 { 00:08:39.524 "subsystems": [ 00:08:39.524 { 00:08:39.524 "subsystem": "bdev", 00:08:39.524 "config": [ 00:08:39.524 { 00:08:39.524 "params": { 00:08:39.524 "trtype": "pcie", 00:08:39.524 "traddr": "0000:00:10.0", 00:08:39.524 "name": "Nvme0" 00:08:39.524 }, 00:08:39.524 "method": "bdev_nvme_attach_controller" 00:08:39.524 }, 00:08:39.524 { 00:08:39.524 "method": "bdev_wait_for_examine" 00:08:39.524 } 00:08:39.524 ] 00:08:39.524 } 00:08:39.524 ] 00:08:39.524 } 00:08:39.524 [2024-11-28 11:39:09.508644] Starting SPDK v25.01-pre git sha1 35cd3e84d / DPDK 24.11.0-rc4 initialization... 00:08:39.524 [2024-11-28 11:39:09.508753] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid74147 ] 00:08:39.524 [2024-11-28 11:39:09.634555] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.11.0-rc4 is used. There is no support for it in SPDK. Enabled only for validation. 00:08:39.783 [2024-11-28 11:39:09.662110] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:39.783 [2024-11-28 11:39:09.703812] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:08:39.783 [2024-11-28 11:39:09.762019] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:08:39.783  [2024-11-28T11:39:10.168Z] Copying: 1024/1024 [kB] (average 1000 MBps) 00:08:40.042 00:08:40.042 11:39:10 spdk_dd.spdk_dd_basic_rw -- dd/basic_rw.sh@77 -- # rm -f /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:08:40.042 00:08:40.042 real 0m17.526s 00:08:40.042 user 0m12.244s 00:08:40.042 sys 0m7.054s 00:08:40.042 11:39:10 spdk_dd.spdk_dd_basic_rw -- common/autotest_common.sh@1130 -- # xtrace_disable 00:08:40.042 11:39:10 spdk_dd.spdk_dd_basic_rw -- common/autotest_common.sh@10 -- # set +x 00:08:40.042 ************************************ 00:08:40.042 END TEST spdk_dd_basic_rw 00:08:40.042 ************************************ 00:08:40.042 11:39:10 spdk_dd -- dd/dd.sh@21 -- # run_test spdk_dd_posix /home/vagrant/spdk_repo/spdk/test/dd/posix.sh 00:08:40.042 11:39:10 spdk_dd -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:08:40.042 11:39:10 spdk_dd -- common/autotest_common.sh@1111 -- # xtrace_disable 00:08:40.042 11:39:10 spdk_dd -- common/autotest_common.sh@10 -- # set +x 00:08:40.042 ************************************ 00:08:40.042 START TEST spdk_dd_posix 00:08:40.042 ************************************ 00:08:40.042 11:39:10 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/dd/posix.sh 00:08:40.303 * Looking for test storage... 
00:08:40.303 * Found test storage at /home/vagrant/spdk_repo/spdk/test/dd 00:08:40.303 11:39:10 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:08:40.303 11:39:10 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1693 -- # lcov --version 00:08:40.303 11:39:10 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:08:40.303 11:39:10 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:08:40.303 11:39:10 spdk_dd.spdk_dd_posix -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:08:40.303 11:39:10 spdk_dd.spdk_dd_posix -- scripts/common.sh@333 -- # local ver1 ver1_l 00:08:40.303 11:39:10 spdk_dd.spdk_dd_posix -- scripts/common.sh@334 -- # local ver2 ver2_l 00:08:40.303 11:39:10 spdk_dd.spdk_dd_posix -- scripts/common.sh@336 -- # IFS=.-: 00:08:40.303 11:39:10 spdk_dd.spdk_dd_posix -- scripts/common.sh@336 -- # read -ra ver1 00:08:40.303 11:39:10 spdk_dd.spdk_dd_posix -- scripts/common.sh@337 -- # IFS=.-: 00:08:40.303 11:39:10 spdk_dd.spdk_dd_posix -- scripts/common.sh@337 -- # read -ra ver2 00:08:40.303 11:39:10 spdk_dd.spdk_dd_posix -- scripts/common.sh@338 -- # local 'op=<' 00:08:40.303 11:39:10 spdk_dd.spdk_dd_posix -- scripts/common.sh@340 -- # ver1_l=2 00:08:40.303 11:39:10 spdk_dd.spdk_dd_posix -- scripts/common.sh@341 -- # ver2_l=1 00:08:40.303 11:39:10 spdk_dd.spdk_dd_posix -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:08:40.303 11:39:10 spdk_dd.spdk_dd_posix -- scripts/common.sh@344 -- # case "$op" in 00:08:40.303 11:39:10 spdk_dd.spdk_dd_posix -- scripts/common.sh@345 -- # : 1 00:08:40.303 11:39:10 spdk_dd.spdk_dd_posix -- scripts/common.sh@364 -- # (( v = 0 )) 00:08:40.303 11:39:10 spdk_dd.spdk_dd_posix -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:08:40.303 11:39:10 spdk_dd.spdk_dd_posix -- scripts/common.sh@365 -- # decimal 1 00:08:40.303 11:39:10 spdk_dd.spdk_dd_posix -- scripts/common.sh@353 -- # local d=1 00:08:40.303 11:39:10 spdk_dd.spdk_dd_posix -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:08:40.303 11:39:10 spdk_dd.spdk_dd_posix -- scripts/common.sh@355 -- # echo 1 00:08:40.303 11:39:10 spdk_dd.spdk_dd_posix -- scripts/common.sh@365 -- # ver1[v]=1 00:08:40.303 11:39:10 spdk_dd.spdk_dd_posix -- scripts/common.sh@366 -- # decimal 2 00:08:40.303 11:39:10 spdk_dd.spdk_dd_posix -- scripts/common.sh@353 -- # local d=2 00:08:40.303 11:39:10 spdk_dd.spdk_dd_posix -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:08:40.303 11:39:10 spdk_dd.spdk_dd_posix -- scripts/common.sh@355 -- # echo 2 00:08:40.303 11:39:10 spdk_dd.spdk_dd_posix -- scripts/common.sh@366 -- # ver2[v]=2 00:08:40.303 11:39:10 spdk_dd.spdk_dd_posix -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:08:40.304 11:39:10 spdk_dd.spdk_dd_posix -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:08:40.304 11:39:10 spdk_dd.spdk_dd_posix -- scripts/common.sh@368 -- # return 0 00:08:40.304 11:39:10 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:08:40.304 11:39:10 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:08:40.304 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:40.304 --rc genhtml_branch_coverage=1 00:08:40.304 --rc genhtml_function_coverage=1 00:08:40.304 --rc genhtml_legend=1 00:08:40.304 --rc geninfo_all_blocks=1 00:08:40.304 --rc geninfo_unexecuted_blocks=1 00:08:40.304 00:08:40.304 ' 00:08:40.304 11:39:10 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:08:40.304 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:40.304 --rc genhtml_branch_coverage=1 00:08:40.304 --rc genhtml_function_coverage=1 00:08:40.304 --rc genhtml_legend=1 00:08:40.304 --rc geninfo_all_blocks=1 00:08:40.304 --rc geninfo_unexecuted_blocks=1 00:08:40.304 00:08:40.304 ' 00:08:40.304 11:39:10 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:08:40.304 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:40.304 --rc genhtml_branch_coverage=1 00:08:40.304 --rc genhtml_function_coverage=1 00:08:40.304 --rc genhtml_legend=1 00:08:40.304 --rc geninfo_all_blocks=1 00:08:40.304 --rc geninfo_unexecuted_blocks=1 00:08:40.304 00:08:40.304 ' 00:08:40.304 11:39:10 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:08:40.304 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:40.304 --rc genhtml_branch_coverage=1 00:08:40.304 --rc genhtml_function_coverage=1 00:08:40.304 --rc genhtml_legend=1 00:08:40.304 --rc geninfo_all_blocks=1 00:08:40.304 --rc geninfo_unexecuted_blocks=1 00:08:40.304 00:08:40.304 ' 00:08:40.304 11:39:10 spdk_dd.spdk_dd_posix -- dd/common.sh@7 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:08:40.304 11:39:10 spdk_dd.spdk_dd_posix -- scripts/common.sh@15 -- # shopt -s extglob 00:08:40.304 11:39:10 spdk_dd.spdk_dd_posix -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:08:40.304 11:39:10 spdk_dd.spdk_dd_posix -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:08:40.304 11:39:10 spdk_dd.spdk_dd_posix -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:08:40.304 11:39:10 spdk_dd.spdk_dd_posix -- paths/export.sh@2 
-- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:40.304 11:39:10 spdk_dd.spdk_dd_posix -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:40.304 11:39:10 spdk_dd.spdk_dd_posix -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:40.304 11:39:10 spdk_dd.spdk_dd_posix -- paths/export.sh@5 -- # export PATH 00:08:40.304 11:39:10 spdk_dd.spdk_dd_posix -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:40.304 11:39:10 spdk_dd.spdk_dd_posix -- dd/posix.sh@121 -- # msg[0]=', using AIO' 00:08:40.304 11:39:10 spdk_dd.spdk_dd_posix -- dd/posix.sh@122 -- # msg[1]=', liburing in use' 00:08:40.304 11:39:10 spdk_dd.spdk_dd_posix -- dd/posix.sh@123 -- # msg[2]=', disabling liburing, forcing AIO' 00:08:40.304 11:39:10 spdk_dd.spdk_dd_posix -- dd/posix.sh@125 -- # trap cleanup EXIT 00:08:40.304 11:39:10 spdk_dd.spdk_dd_posix -- dd/posix.sh@127 -- # test_file0=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 00:08:40.304 11:39:10 spdk_dd.spdk_dd_posix -- dd/posix.sh@128 -- # test_file1=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:08:40.304 11:39:10 spdk_dd.spdk_dd_posix -- dd/posix.sh@130 -- # tests 00:08:40.304 11:39:10 spdk_dd.spdk_dd_posix -- dd/posix.sh@99 -- # printf '* First test run%s\n' ', liburing in use' 00:08:40.304 * First test run, liburing in use 00:08:40.304 11:39:10 spdk_dd.spdk_dd_posix -- dd/posix.sh@102 -- # run_test dd_flag_append append 00:08:40.304 11:39:10 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:08:40.304 11:39:10 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1111 -- # 
xtrace_disable 00:08:40.304 11:39:10 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@10 -- # set +x 00:08:40.304 ************************************ 00:08:40.304 START TEST dd_flag_append 00:08:40.304 ************************************ 00:08:40.304 11:39:10 spdk_dd.spdk_dd_posix.dd_flag_append -- common/autotest_common.sh@1129 -- # append 00:08:40.304 11:39:10 spdk_dd.spdk_dd_posix.dd_flag_append -- dd/posix.sh@16 -- # local dump0 00:08:40.304 11:39:10 spdk_dd.spdk_dd_posix.dd_flag_append -- dd/posix.sh@17 -- # local dump1 00:08:40.304 11:39:10 spdk_dd.spdk_dd_posix.dd_flag_append -- dd/posix.sh@19 -- # gen_bytes 32 00:08:40.304 11:39:10 spdk_dd.spdk_dd_posix.dd_flag_append -- dd/common.sh@98 -- # xtrace_disable 00:08:40.304 11:39:10 spdk_dd.spdk_dd_posix.dd_flag_append -- common/autotest_common.sh@10 -- # set +x 00:08:40.304 11:39:10 spdk_dd.spdk_dd_posix.dd_flag_append -- dd/posix.sh@19 -- # dump0=c0p1omf8f3sg0v94kc26i10s4tx5x5r7 00:08:40.304 11:39:10 spdk_dd.spdk_dd_posix.dd_flag_append -- dd/posix.sh@20 -- # gen_bytes 32 00:08:40.304 11:39:10 spdk_dd.spdk_dd_posix.dd_flag_append -- dd/common.sh@98 -- # xtrace_disable 00:08:40.304 11:39:10 spdk_dd.spdk_dd_posix.dd_flag_append -- common/autotest_common.sh@10 -- # set +x 00:08:40.304 11:39:10 spdk_dd.spdk_dd_posix.dd_flag_append -- dd/posix.sh@20 -- # dump1=bv3aqs46iv0yaq0vxlrj7inzrrql9z16 00:08:40.304 11:39:10 spdk_dd.spdk_dd_posix.dd_flag_append -- dd/posix.sh@22 -- # printf %s c0p1omf8f3sg0v94kc26i10s4tx5x5r7 00:08:40.304 11:39:10 spdk_dd.spdk_dd_posix.dd_flag_append -- dd/posix.sh@23 -- # printf %s bv3aqs46iv0yaq0vxlrj7inzrrql9z16 00:08:40.304 11:39:10 spdk_dd.spdk_dd_posix.dd_flag_append -- dd/posix.sh@25 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=append 00:08:40.304 [2024-11-28 11:39:10.399790] Starting SPDK v25.01-pre git sha1 35cd3e84d / DPDK 24.11.0-rc4 initialization... 00:08:40.304 [2024-11-28 11:39:10.399909] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid74219 ] 00:08:40.564 [2024-11-28 11:39:10.526241] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.11.0-rc4 is used. There is no support for it in SPDK. Enabled only for validation. 
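Illustrative sketch (not part of the captured run): the spdk_dd invocation above opens dd.dump1 for append, so the 32 random bytes from dd.dump0 land after dd.dump1's existing 32 bytes, and the concatenation check further down relies on exactly that. A rough GNU coreutils dd analogue of the same operation, reusing the two byte strings from this run (file names here are local stand-ins, not the test's paths):

  printf %s 'c0p1omf8f3sg0v94kc26i10s4tx5x5r7' > dump0   # contents written to dd.dump0 in this run
  printf %s 'bv3aqs46iv0yaq0vxlrj7inzrrql9z16' > dump1   # contents written to dd.dump1 in this run
  # oflag=append opens the output with O_APPEND; conv=notrunc keeps dd from truncating it first
  dd if=dump0 of=dump1 oflag=append conv=notrunc status=none
  # dump1 should now be its original bytes followed by dump0's bytes
  [[ "$(cat dump1)" == 'bv3aqs46iv0yaq0vxlrj7inzrrql9z16c0p1omf8f3sg0v94kc26i10s4tx5x5r7' ]] && echo append-ok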
00:08:40.565 [2024-11-28 11:39:10.551421] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:40.565 [2024-11-28 11:39:10.604135] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:08:40.565 [2024-11-28 11:39:10.662734] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:08:40.824  [2024-11-28T11:39:10.950Z] Copying: 32/32 [B] (average 31 kBps) 00:08:40.824 00:08:40.824 11:39:10 spdk_dd.spdk_dd_posix.dd_flag_append -- dd/posix.sh@27 -- # [[ bv3aqs46iv0yaq0vxlrj7inzrrql9z16c0p1omf8f3sg0v94kc26i10s4tx5x5r7 == \b\v\3\a\q\s\4\6\i\v\0\y\a\q\0\v\x\l\r\j\7\i\n\z\r\r\q\l\9\z\1\6\c\0\p\1\o\m\f\8\f\3\s\g\0\v\9\4\k\c\2\6\i\1\0\s\4\t\x\5\x\5\r\7 ]] 00:08:40.824 00:08:40.824 real 0m0.550s 00:08:40.824 user 0m0.284s 00:08:40.824 sys 0m0.292s 00:08:40.824 11:39:10 spdk_dd.spdk_dd_posix.dd_flag_append -- common/autotest_common.sh@1130 -- # xtrace_disable 00:08:40.824 ************************************ 00:08:40.824 END TEST dd_flag_append 00:08:40.824 ************************************ 00:08:40.824 11:39:10 spdk_dd.spdk_dd_posix.dd_flag_append -- common/autotest_common.sh@10 -- # set +x 00:08:40.824 11:39:10 spdk_dd.spdk_dd_posix -- dd/posix.sh@103 -- # run_test dd_flag_directory directory 00:08:40.824 11:39:10 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:08:40.824 11:39:10 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1111 -- # xtrace_disable 00:08:40.824 11:39:10 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@10 -- # set +x 00:08:40.824 ************************************ 00:08:40.824 START TEST dd_flag_directory 00:08:40.824 ************************************ 00:08:40.824 11:39:10 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@1129 -- # directory 00:08:40.825 11:39:10 spdk_dd.spdk_dd_posix.dd_flag_directory -- dd/posix.sh@31 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=directory --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 00:08:40.825 11:39:10 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@652 -- # local es=0 00:08:40.825 11:39:10 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@654 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=directory --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 00:08:40.825 11:39:10 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@640 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:08:40.825 11:39:10 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:08:40.825 11:39:10 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@644 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:08:40.825 11:39:10 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:08:40.825 11:39:10 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@646 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:08:40.825 11:39:10 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:08:40.825 11:39:10 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@646 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:08:40.825 11:39:10 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@646 -- 
# [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:08:40.825 11:39:10 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@655 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=directory --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 00:08:41.084 [2024-11-28 11:39:11.003412] Starting SPDK v25.01-pre git sha1 35cd3e84d / DPDK 24.11.0-rc4 initialization... 00:08:41.084 [2024-11-28 11:39:11.003539] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid74253 ] 00:08:41.084 [2024-11-28 11:39:11.128671] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.11.0-rc4 is used. There is no support for it in SPDK. Enabled only for validation. 00:08:41.084 [2024-11-28 11:39:11.157236] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:41.343 [2024-11-28 11:39:11.210177] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:08:41.343 [2024-11-28 11:39:11.266651] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:08:41.343 [2024-11-28 11:39:11.304269] spdk_dd.c: 894:dd_open_file: *ERROR*: Could not open file /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0: Not a directory 00:08:41.343 [2024-11-28 11:39:11.304351] spdk_dd.c:1083:dd_run: *ERROR*: /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0: Not a directory 00:08:41.343 [2024-11-28 11:39:11.304388] app.c:1064:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:08:41.343 [2024-11-28 11:39:11.427626] spdk_dd.c:1536:main: *ERROR*: Error occurred while performing copy 00:08:41.602 11:39:11 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@655 -- # es=236 00:08:41.602 11:39:11 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:08:41.602 11:39:11 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@664 -- # es=108 00:08:41.602 11:39:11 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@665 -- # case "$es" in 00:08:41.602 11:39:11 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@672 -- # es=1 00:08:41.602 11:39:11 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:08:41.602 11:39:11 spdk_dd.spdk_dd_posix.dd_flag_directory -- dd/posix.sh@32 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --oflag=directory 00:08:41.602 11:39:11 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@652 -- # local es=0 00:08:41.602 11:39:11 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@654 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --oflag=directory 00:08:41.602 11:39:11 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@640 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:08:41.602 11:39:11 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:08:41.602 11:39:11 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@644 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 
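Illustrative sketch (not from the log): the directory flag corresponds to O_DIRECTORY, so opening a regular file with it fails with ENOTDIR, which is the "Not a directory" error reported by the expected-failure run above and by the --oflag=directory run being set up below. A GNU coreutils dd analogue, assuming its iflag=directory behaves the same way:

  printf 'payload' > regular_file
  # O_DIRECTORY on a regular file should make open(2) fail with ENOTDIR
  if ! dd if=regular_file iflag=directory of=/dev/null status=none 2> err.log; then
    grep -q 'Not a directory' err.log && echo failed-as-expected
  fi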
00:08:41.602 11:39:11 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:08:41.602 11:39:11 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@646 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:08:41.602 11:39:11 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:08:41.602 11:39:11 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@646 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:08:41.602 11:39:11 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@646 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:08:41.602 11:39:11 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@655 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --oflag=directory 00:08:41.602 [2024-11-28 11:39:11.544803] Starting SPDK v25.01-pre git sha1 35cd3e84d / DPDK 24.11.0-rc4 initialization... 00:08:41.602 [2024-11-28 11:39:11.544914] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid74257 ] 00:08:41.602 [2024-11-28 11:39:11.669225] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.11.0-rc4 is used. There is no support for it in SPDK. Enabled only for validation. 00:08:41.602 [2024-11-28 11:39:11.696101] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:41.862 [2024-11-28 11:39:11.736672] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:08:41.862 [2024-11-28 11:39:11.794040] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:08:41.862 [2024-11-28 11:39:11.831681] spdk_dd.c: 894:dd_open_file: *ERROR*: Could not open file /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0: Not a directory 00:08:41.862 [2024-11-28 11:39:11.831744] spdk_dd.c:1132:dd_run: *ERROR*: /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0: Not a directory 00:08:41.862 [2024-11-28 11:39:11.831779] app.c:1064:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:08:41.862 [2024-11-28 11:39:11.957271] spdk_dd.c:1536:main: *ERROR*: Error occurred while performing copy 00:08:42.120 11:39:12 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@655 -- # es=236 00:08:42.120 11:39:12 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:08:42.120 11:39:12 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@664 -- # es=108 00:08:42.120 11:39:12 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@665 -- # case "$es" in 00:08:42.120 11:39:12 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@672 -- # es=1 00:08:42.120 11:39:12 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:08:42.120 00:08:42.120 real 0m1.077s 00:08:42.120 user 0m0.560s 00:08:42.120 sys 0m0.307s 00:08:42.120 11:39:12 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@1130 -- # xtrace_disable 00:08:42.120 ************************************ 00:08:42.120 END TEST dd_flag_directory 00:08:42.120 ************************************ 00:08:42.120 11:39:12 spdk_dd.spdk_dd_posix.dd_flag_directory -- 
common/autotest_common.sh@10 -- # set +x 00:08:42.120 11:39:12 spdk_dd.spdk_dd_posix -- dd/posix.sh@104 -- # run_test dd_flag_nofollow nofollow 00:08:42.120 11:39:12 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:08:42.120 11:39:12 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1111 -- # xtrace_disable 00:08:42.120 11:39:12 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@10 -- # set +x 00:08:42.120 ************************************ 00:08:42.120 START TEST dd_flag_nofollow 00:08:42.120 ************************************ 00:08:42.120 11:39:12 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@1129 -- # nofollow 00:08:42.120 11:39:12 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- dd/posix.sh@36 -- # local test_file0_link=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0.link 00:08:42.120 11:39:12 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- dd/posix.sh@37 -- # local test_file1_link=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1.link 00:08:42.120 11:39:12 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- dd/posix.sh@39 -- # ln -fs /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0.link 00:08:42.120 11:39:12 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- dd/posix.sh@40 -- # ln -fs /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1.link 00:08:42.120 11:39:12 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- dd/posix.sh@42 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0.link --iflag=nofollow --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:08:42.120 11:39:12 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@652 -- # local es=0 00:08:42.120 11:39:12 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@654 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0.link --iflag=nofollow --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:08:42.120 11:39:12 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@640 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:08:42.120 11:39:12 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:08:42.120 11:39:12 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@644 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:08:42.120 11:39:12 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:08:42.120 11:39:12 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@646 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:08:42.120 11:39:12 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:08:42.120 11:39:12 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@646 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:08:42.120 11:39:12 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@646 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:08:42.120 11:39:12 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@655 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0.link --iflag=nofollow --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:08:42.120 [2024-11-28 11:39:12.143536] Starting SPDK v25.01-pre git sha1 35cd3e84d / DPDK 24.11.0-rc4 
initialization... 00:08:42.120 [2024-11-28 11:39:12.143654] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid74291 ] 00:08:42.378 [2024-11-28 11:39:12.269875] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.11.0-rc4 is used. There is no support for it in SPDK. Enabled only for validation. 00:08:42.378 [2024-11-28 11:39:12.297662] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:42.378 [2024-11-28 11:39:12.346511] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:08:42.378 [2024-11-28 11:39:12.405629] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:08:42.378 [2024-11-28 11:39:12.442071] spdk_dd.c: 894:dd_open_file: *ERROR*: Could not open file /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0.link: Too many levels of symbolic links 00:08:42.378 [2024-11-28 11:39:12.442155] spdk_dd.c:1083:dd_run: *ERROR*: /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0.link: Too many levels of symbolic links 00:08:42.378 [2024-11-28 11:39:12.442191] app.c:1064:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:08:42.637 [2024-11-28 11:39:12.564549] spdk_dd.c:1536:main: *ERROR*: Error occurred while performing copy 00:08:42.637 11:39:12 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@655 -- # es=216 00:08:42.637 11:39:12 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:08:42.637 11:39:12 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@664 -- # es=88 00:08:42.637 11:39:12 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@665 -- # case "$es" in 00:08:42.637 11:39:12 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@672 -- # es=1 00:08:42.637 11:39:12 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:08:42.637 11:39:12 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- dd/posix.sh@43 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1.link --oflag=nofollow 00:08:42.637 11:39:12 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@652 -- # local es=0 00:08:42.637 11:39:12 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@654 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1.link --oflag=nofollow 00:08:42.637 11:39:12 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@640 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:08:42.637 11:39:12 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:08:42.637 11:39:12 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@644 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:08:42.637 11:39:12 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:08:42.637 11:39:12 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@646 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:08:42.637 11:39:12 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@644 -- # case 
"$(type -t "$arg")" in 00:08:42.637 11:39:12 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@646 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:08:42.637 11:39:12 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@646 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:08:42.637 11:39:12 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@655 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1.link --oflag=nofollow 00:08:42.637 [2024-11-28 11:39:12.704884] Starting SPDK v25.01-pre git sha1 35cd3e84d / DPDK 24.11.0-rc4 initialization... 00:08:42.637 [2024-11-28 11:39:12.704989] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid74295 ] 00:08:42.896 [2024-11-28 11:39:12.831460] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.11.0-rc4 is used. There is no support for it in SPDK. Enabled only for validation. 00:08:42.896 [2024-11-28 11:39:12.855894] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:42.896 [2024-11-28 11:39:12.912148] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:08:42.896 [2024-11-28 11:39:12.972078] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:08:42.896 [2024-11-28 11:39:13.012400] spdk_dd.c: 894:dd_open_file: *ERROR*: Could not open file /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1.link: Too many levels of symbolic links 00:08:42.896 [2024-11-28 11:39:13.012495] spdk_dd.c:1132:dd_run: *ERROR*: /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1.link: Too many levels of symbolic links 00:08:42.896 [2024-11-28 11:39:13.012533] app.c:1064:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:08:43.155 [2024-11-28 11:39:13.131326] spdk_dd.c:1536:main: *ERROR*: Error occurred while performing copy 00:08:43.155 11:39:13 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@655 -- # es=216 00:08:43.155 11:39:13 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:08:43.155 11:39:13 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@664 -- # es=88 00:08:43.155 11:39:13 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@665 -- # case "$es" in 00:08:43.155 11:39:13 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@672 -- # es=1 00:08:43.155 11:39:13 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:08:43.155 11:39:13 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- dd/posix.sh@46 -- # gen_bytes 512 00:08:43.156 11:39:13 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- dd/common.sh@98 -- # xtrace_disable 00:08:43.156 11:39:13 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@10 -- # set +x 00:08:43.156 11:39:13 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- dd/posix.sh@48 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0.link --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:08:43.156 [2024-11-28 11:39:13.267166] Starting SPDK v25.01-pre git sha1 35cd3e84d / DPDK 24.11.0-rc4 initialization... 
00:08:43.156 [2024-11-28 11:39:13.267274] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid74308 ] 00:08:43.415 [2024-11-28 11:39:13.393552] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.11.0-rc4 is used. There is no support for it in SPDK. Enabled only for validation. 00:08:43.415 [2024-11-28 11:39:13.417880] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:43.415 [2024-11-28 11:39:13.469541] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:08:43.415 [2024-11-28 11:39:13.525329] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:08:43.674  [2024-11-28T11:39:13.800Z] Copying: 512/512 [B] (average 500 kBps) 00:08:43.674 00:08:43.674 11:39:13 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- dd/posix.sh@49 -- # [[ xeundrcwui1pgqxb8et6rw7tei97hysqehslodzfj0nx66bw0knuu3tn1dx6og7l8f7sbdxx83cfxkt444aqy6w8dgmbms6pj1793lyktfou0tmu9r89rm20opt0uqiimn0g3t7xsnxtcssyd25e1lf7kiyalntuume8j47azoy5d8oviph1ua767clk9qzn479b3vs5ng32p9po8fgbloospe8lydnazsn9fl7fuvep6aaexsupwpjvjf5ela901r0sib1pt2kpkk1hyyew420iz0s2dq10okpbh4wxl220v70ov7ecx783pl52x6792jkz4e3hdhwaf5rfd5a4sjouu85qq8rktn0pvj28kojgtns1sfz0sd15g6sw36dxx19ccaezz47qjioiibds4ermdoebsl9djfrdl5m916u73bii932c566uxtxey4n2dd6rakkt1x0v51hp2dc37roi8k49scevuyx6xkegf308uosflcqvukhgri8y9pjd == \x\e\u\n\d\r\c\w\u\i\1\p\g\q\x\b\8\e\t\6\r\w\7\t\e\i\9\7\h\y\s\q\e\h\s\l\o\d\z\f\j\0\n\x\6\6\b\w\0\k\n\u\u\3\t\n\1\d\x\6\o\g\7\l\8\f\7\s\b\d\x\x\8\3\c\f\x\k\t\4\4\4\a\q\y\6\w\8\d\g\m\b\m\s\6\p\j\1\7\9\3\l\y\k\t\f\o\u\0\t\m\u\9\r\8\9\r\m\2\0\o\p\t\0\u\q\i\i\m\n\0\g\3\t\7\x\s\n\x\t\c\s\s\y\d\2\5\e\1\l\f\7\k\i\y\a\l\n\t\u\u\m\e\8\j\4\7\a\z\o\y\5\d\8\o\v\i\p\h\1\u\a\7\6\7\c\l\k\9\q\z\n\4\7\9\b\3\v\s\5\n\g\3\2\p\9\p\o\8\f\g\b\l\o\o\s\p\e\8\l\y\d\n\a\z\s\n\9\f\l\7\f\u\v\e\p\6\a\a\e\x\s\u\p\w\p\j\v\j\f\5\e\l\a\9\0\1\r\0\s\i\b\1\p\t\2\k\p\k\k\1\h\y\y\e\w\4\2\0\i\z\0\s\2\d\q\1\0\o\k\p\b\h\4\w\x\l\2\2\0\v\7\0\o\v\7\e\c\x\7\8\3\p\l\5\2\x\6\7\9\2\j\k\z\4\e\3\h\d\h\w\a\f\5\r\f\d\5\a\4\s\j\o\u\u\8\5\q\q\8\r\k\t\n\0\p\v\j\2\8\k\o\j\g\t\n\s\1\s\f\z\0\s\d\1\5\g\6\s\w\3\6\d\x\x\1\9\c\c\a\e\z\z\4\7\q\j\i\o\i\i\b\d\s\4\e\r\m\d\o\e\b\s\l\9\d\j\f\r\d\l\5\m\9\1\6\u\7\3\b\i\i\9\3\2\c\5\6\6\u\x\t\x\e\y\4\n\2\d\d\6\r\a\k\k\t\1\x\0\v\5\1\h\p\2\d\c\3\7\r\o\i\8\k\4\9\s\c\e\v\u\y\x\6\x\k\e\g\f\3\0\8\u\o\s\f\l\c\q\v\u\k\h\g\r\i\8\y\9\p\j\d ]] 00:08:43.674 00:08:43.674 real 0m1.663s 00:08:43.674 user 0m0.887s 00:08:43.674 sys 0m0.584s 00:08:43.674 11:39:13 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@1130 -- # xtrace_disable 00:08:43.674 ************************************ 00:08:43.674 END TEST dd_flag_nofollow 00:08:43.674 ************************************ 00:08:43.674 11:39:13 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@10 -- # set +x 00:08:43.674 11:39:13 spdk_dd.spdk_dd_posix -- dd/posix.sh@105 -- # run_test dd_flag_noatime noatime 00:08:43.674 11:39:13 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:08:43.674 11:39:13 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1111 -- # xtrace_disable 00:08:43.674 11:39:13 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@10 -- # set +x 00:08:43.674 ************************************ 00:08:43.674 START TEST dd_flag_noatime 00:08:43.674 ************************************ 00:08:43.674 
11:39:13 spdk_dd.spdk_dd_posix.dd_flag_noatime -- common/autotest_common.sh@1129 -- # noatime 00:08:43.674 11:39:13 spdk_dd.spdk_dd_posix.dd_flag_noatime -- dd/posix.sh@53 -- # local atime_if 00:08:43.674 11:39:13 spdk_dd.spdk_dd_posix.dd_flag_noatime -- dd/posix.sh@54 -- # local atime_of 00:08:43.674 11:39:13 spdk_dd.spdk_dd_posix.dd_flag_noatime -- dd/posix.sh@58 -- # gen_bytes 512 00:08:43.674 11:39:13 spdk_dd.spdk_dd_posix.dd_flag_noatime -- dd/common.sh@98 -- # xtrace_disable 00:08:43.674 11:39:13 spdk_dd.spdk_dd_posix.dd_flag_noatime -- common/autotest_common.sh@10 -- # set +x 00:08:43.933 11:39:13 spdk_dd.spdk_dd_posix.dd_flag_noatime -- dd/posix.sh@60 -- # stat --printf=%X /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 00:08:43.933 11:39:13 spdk_dd.spdk_dd_posix.dd_flag_noatime -- dd/posix.sh@60 -- # atime_if=1732793953 00:08:43.933 11:39:13 spdk_dd.spdk_dd_posix.dd_flag_noatime -- dd/posix.sh@61 -- # stat --printf=%X /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:08:43.933 11:39:13 spdk_dd.spdk_dd_posix.dd_flag_noatime -- dd/posix.sh@61 -- # atime_of=1732793953 00:08:43.933 11:39:13 spdk_dd.spdk_dd_posix.dd_flag_noatime -- dd/posix.sh@66 -- # sleep 1 00:08:44.869 11:39:14 spdk_dd.spdk_dd_posix.dd_flag_noatime -- dd/posix.sh@68 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=noatime --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:08:44.869 [2024-11-28 11:39:14.877345] Starting SPDK v25.01-pre git sha1 35cd3e84d / DPDK 24.11.0-rc4 initialization... 00:08:44.869 [2024-11-28 11:39:14.877702] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid74345 ] 00:08:45.127 [2024-11-28 11:39:15.004355] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.11.0-rc4 is used. There is no support for it in SPDK. Enabled only for validation. 00:08:45.127 [2024-11-28 11:39:15.036332] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:45.127 [2024-11-28 11:39:15.088728] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:08:45.127 [2024-11-28 11:39:15.147518] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:08:45.127  [2024-11-28T11:39:15.510Z] Copying: 512/512 [B] (average 500 kBps) 00:08:45.384 00:08:45.384 11:39:15 spdk_dd.spdk_dd_posix.dd_flag_noatime -- dd/posix.sh@69 -- # stat --printf=%X /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 00:08:45.384 11:39:15 spdk_dd.spdk_dd_posix.dd_flag_noatime -- dd/posix.sh@69 -- # (( atime_if == 1732793953 )) 00:08:45.384 11:39:15 spdk_dd.spdk_dd_posix.dd_flag_noatime -- dd/posix.sh@70 -- # stat --printf=%X /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:08:45.384 11:39:15 spdk_dd.spdk_dd_posix.dd_flag_noatime -- dd/posix.sh@70 -- # (( atime_of == 1732793953 )) 00:08:45.384 11:39:15 spdk_dd.spdk_dd_posix.dd_flag_noatime -- dd/posix.sh@72 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:08:45.384 [2024-11-28 11:39:15.442808] Starting SPDK v25.01-pre git sha1 35cd3e84d / DPDK 24.11.0-rc4 initialization... 
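Illustrative sketch (not from the log): --iflag=noatime corresponds to O_NOATIME, so reading the input must not advance its access time, which is what the (( atime_if == 1732793953 )) check above verifies with stat --printf=%X; the plain copy now starting is expected to move the atime forward instead. A GNU coreutils dd analogue (note that O_NOATIME is only honoured for files you own, and a relatime mount can mask the difference in the non-noatime case):

  printf 'data' > src_file
  before=$(stat --printf=%X src_file)    # access time before the read
  sleep 1                                # make any atime update observable
  dd if=src_file iflag=noatime of=/dev/null status=none
  after=$(stat --printf=%X src_file)
  (( before == after )) && echo atime-unchanged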
00:08:45.384 [2024-11-28 11:39:15.442909] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid74364 ] 00:08:45.643 [2024-11-28 11:39:15.568707] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.11.0-rc4 is used. There is no support for it in SPDK. Enabled only for validation. 00:08:45.643 [2024-11-28 11:39:15.598337] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:45.643 [2024-11-28 11:39:15.644917] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:08:45.643 [2024-11-28 11:39:15.701475] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:08:45.643  [2024-11-28T11:39:16.029Z] Copying: 512/512 [B] (average 500 kBps) 00:08:45.903 00:08:45.903 11:39:15 spdk_dd.spdk_dd_posix.dd_flag_noatime -- dd/posix.sh@73 -- # stat --printf=%X /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 00:08:45.903 11:39:15 spdk_dd.spdk_dd_posix.dd_flag_noatime -- dd/posix.sh@73 -- # (( atime_if < 1732793955 )) 00:08:45.903 00:08:45.903 real 0m2.133s 00:08:45.903 user 0m0.582s 00:08:45.903 sys 0m0.605s 00:08:45.903 11:39:15 spdk_dd.spdk_dd_posix.dd_flag_noatime -- common/autotest_common.sh@1130 -- # xtrace_disable 00:08:45.903 ************************************ 00:08:45.903 END TEST dd_flag_noatime 00:08:45.903 ************************************ 00:08:45.903 11:39:15 spdk_dd.spdk_dd_posix.dd_flag_noatime -- common/autotest_common.sh@10 -- # set +x 00:08:45.903 11:39:15 spdk_dd.spdk_dd_posix -- dd/posix.sh@106 -- # run_test dd_flags_misc io 00:08:45.903 11:39:15 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:08:45.903 11:39:15 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1111 -- # xtrace_disable 00:08:45.903 11:39:15 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@10 -- # set +x 00:08:45.903 ************************************ 00:08:45.903 START TEST dd_flags_misc 00:08:45.903 ************************************ 00:08:45.903 11:39:15 spdk_dd.spdk_dd_posix.dd_flags_misc -- common/autotest_common.sh@1129 -- # io 00:08:45.903 11:39:15 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@77 -- # local flags_ro flags_rw flag_ro flag_rw 00:08:45.903 11:39:15 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@81 -- # flags_ro=(direct nonblock) 00:08:45.903 11:39:15 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@82 -- # flags_rw=("${flags_ro[@]}" sync dsync) 00:08:45.903 11:39:15 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@85 -- # for flag_ro in "${flags_ro[@]}" 00:08:45.903 11:39:15 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@86 -- # gen_bytes 512 00:08:45.903 11:39:15 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/common.sh@98 -- # xtrace_disable 00:08:45.903 11:39:15 spdk_dd.spdk_dd_posix.dd_flags_misc -- common/autotest_common.sh@10 -- # set +x 00:08:45.903 11:39:15 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@87 -- # for flag_rw in "${flags_rw[@]}" 00:08:45.903 11:39:15 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@89 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=direct --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=direct 00:08:46.162 [2024-11-28 11:39:16.047864] Starting SPDK v25.01-pre git sha1 35cd3e84d / DPDK 24.11.0-rc4 initialization... 
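Illustrative sketch (not from the log): the misc test above defines flags_ro=(direct nonblock) and flags_rw=(direct nonblock sync dsync), and the runs that follow walk that matrix, re-checking the same 512-byte payload after every copy. A GNU coreutils dd analogue of the loop (direct needs 512-byte-aligned I/O on a filesystem with O_DIRECT support; sync and dsync map to O_SYNC and O_DSYNC on the output):

  flags_ro=(direct nonblock)
  flags_rw=(direct nonblock sync dsync)
  dd if=/dev/urandom of=payload bs=512 count=1 status=none   # stand-in for gen_bytes 512
  for rflag in "${flags_ro[@]}"; do
    for wflag in "${flags_rw[@]}"; do
      dd if=payload iflag="$rflag" of=copy oflag="$wflag" bs=512 status=none
      cmp -s payload copy && echo "ok: iflag=$rflag oflag=$wflag"
    done
  done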
00:08:46.162 [2024-11-28 11:39:16.048177] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid74393 ] 00:08:46.162 [2024-11-28 11:39:16.173655] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.11.0-rc4 is used. There is no support for it in SPDK. Enabled only for validation. 00:08:46.162 [2024-11-28 11:39:16.201441] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:46.162 [2024-11-28 11:39:16.243194] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:08:46.421 [2024-11-28 11:39:16.300839] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:08:46.422  [2024-11-28T11:39:16.548Z] Copying: 512/512 [B] (average 500 kBps) 00:08:46.422 00:08:46.422 11:39:16 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@93 -- # [[ abjtv4h9b55aicp4tu6i5f8jify7618sapkgmm5aikfxysmp2no0uu0k5ew52v6m2lcxcs5rwbev1f560bopwb4tlwpnyxs5vfjr8wudnj5u1g3p34px8xji3xd55148pc7gdqu51tbrz0q0np02xmx4mn1lzazkw9b60npswaw208vloabgetava9ujql0swdou61rbusi580u1u6tljyte25453ukwcxuv6gncv8d4hx6d83w574df30xjj5rgbuvsge49fqq6vi231ekkkca9hb4opq3npdef230fkzm617w8uq5m2xcbdv4bqmp7f700c7wjc6pk05k8gpxxdtbhbmzxpc2pz9uk2oi3fvrwb3o31ajzjnvi8t7bb7b0ybp6oat3voyxnqook72azdpd2v3lyel1io0e75q1iyvg6f6cqj77afqack7nss5gyzkgh8k43yp4qevou0iyige5ix9gdgmjnrc5bq4timv944pl0pxev20681ss880z == \a\b\j\t\v\4\h\9\b\5\5\a\i\c\p\4\t\u\6\i\5\f\8\j\i\f\y\7\6\1\8\s\a\p\k\g\m\m\5\a\i\k\f\x\y\s\m\p\2\n\o\0\u\u\0\k\5\e\w\5\2\v\6\m\2\l\c\x\c\s\5\r\w\b\e\v\1\f\5\6\0\b\o\p\w\b\4\t\l\w\p\n\y\x\s\5\v\f\j\r\8\w\u\d\n\j\5\u\1\g\3\p\3\4\p\x\8\x\j\i\3\x\d\5\5\1\4\8\p\c\7\g\d\q\u\5\1\t\b\r\z\0\q\0\n\p\0\2\x\m\x\4\m\n\1\l\z\a\z\k\w\9\b\6\0\n\p\s\w\a\w\2\0\8\v\l\o\a\b\g\e\t\a\v\a\9\u\j\q\l\0\s\w\d\o\u\6\1\r\b\u\s\i\5\8\0\u\1\u\6\t\l\j\y\t\e\2\5\4\5\3\u\k\w\c\x\u\v\6\g\n\c\v\8\d\4\h\x\6\d\8\3\w\5\7\4\d\f\3\0\x\j\j\5\r\g\b\u\v\s\g\e\4\9\f\q\q\6\v\i\2\3\1\e\k\k\k\c\a\9\h\b\4\o\p\q\3\n\p\d\e\f\2\3\0\f\k\z\m\6\1\7\w\8\u\q\5\m\2\x\c\b\d\v\4\b\q\m\p\7\f\7\0\0\c\7\w\j\c\6\p\k\0\5\k\8\g\p\x\x\d\t\b\h\b\m\z\x\p\c\2\p\z\9\u\k\2\o\i\3\f\v\r\w\b\3\o\3\1\a\j\z\j\n\v\i\8\t\7\b\b\7\b\0\y\b\p\6\o\a\t\3\v\o\y\x\n\q\o\o\k\7\2\a\z\d\p\d\2\v\3\l\y\e\l\1\i\o\0\e\7\5\q\1\i\y\v\g\6\f\6\c\q\j\7\7\a\f\q\a\c\k\7\n\s\s\5\g\y\z\k\g\h\8\k\4\3\y\p\4\q\e\v\o\u\0\i\y\i\g\e\5\i\x\9\g\d\g\m\j\n\r\c\5\b\q\4\t\i\m\v\9\4\4\p\l\0\p\x\e\v\2\0\6\8\1\s\s\8\8\0\z ]] 00:08:46.422 11:39:16 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@87 -- # for flag_rw in "${flags_rw[@]}" 00:08:46.422 11:39:16 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@89 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=direct --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=nonblock 00:08:46.681 [2024-11-28 11:39:16.556176] Starting SPDK v25.01-pre git sha1 35cd3e84d / DPDK 24.11.0-rc4 initialization... 00:08:46.681 [2024-11-28 11:39:16.556267] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid74402 ] 00:08:46.681 [2024-11-28 11:39:16.675454] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.11.0-rc4 is used. There is no support for it in SPDK. 
Enabled only for validation. 00:08:46.681 [2024-11-28 11:39:16.700615] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:46.681 [2024-11-28 11:39:16.742236] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:08:46.681 [2024-11-28 11:39:16.796876] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:08:46.940  [2024-11-28T11:39:17.066Z] Copying: 512/512 [B] (average 500 kBps) 00:08:46.941 00:08:46.941 11:39:17 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@93 -- # [[ abjtv4h9b55aicp4tu6i5f8jify7618sapkgmm5aikfxysmp2no0uu0k5ew52v6m2lcxcs5rwbev1f560bopwb4tlwpnyxs5vfjr8wudnj5u1g3p34px8xji3xd55148pc7gdqu51tbrz0q0np02xmx4mn1lzazkw9b60npswaw208vloabgetava9ujql0swdou61rbusi580u1u6tljyte25453ukwcxuv6gncv8d4hx6d83w574df30xjj5rgbuvsge49fqq6vi231ekkkca9hb4opq3npdef230fkzm617w8uq5m2xcbdv4bqmp7f700c7wjc6pk05k8gpxxdtbhbmzxpc2pz9uk2oi3fvrwb3o31ajzjnvi8t7bb7b0ybp6oat3voyxnqook72azdpd2v3lyel1io0e75q1iyvg6f6cqj77afqack7nss5gyzkgh8k43yp4qevou0iyige5ix9gdgmjnrc5bq4timv944pl0pxev20681ss880z == \a\b\j\t\v\4\h\9\b\5\5\a\i\c\p\4\t\u\6\i\5\f\8\j\i\f\y\7\6\1\8\s\a\p\k\g\m\m\5\a\i\k\f\x\y\s\m\p\2\n\o\0\u\u\0\k\5\e\w\5\2\v\6\m\2\l\c\x\c\s\5\r\w\b\e\v\1\f\5\6\0\b\o\p\w\b\4\t\l\w\p\n\y\x\s\5\v\f\j\r\8\w\u\d\n\j\5\u\1\g\3\p\3\4\p\x\8\x\j\i\3\x\d\5\5\1\4\8\p\c\7\g\d\q\u\5\1\t\b\r\z\0\q\0\n\p\0\2\x\m\x\4\m\n\1\l\z\a\z\k\w\9\b\6\0\n\p\s\w\a\w\2\0\8\v\l\o\a\b\g\e\t\a\v\a\9\u\j\q\l\0\s\w\d\o\u\6\1\r\b\u\s\i\5\8\0\u\1\u\6\t\l\j\y\t\e\2\5\4\5\3\u\k\w\c\x\u\v\6\g\n\c\v\8\d\4\h\x\6\d\8\3\w\5\7\4\d\f\3\0\x\j\j\5\r\g\b\u\v\s\g\e\4\9\f\q\q\6\v\i\2\3\1\e\k\k\k\c\a\9\h\b\4\o\p\q\3\n\p\d\e\f\2\3\0\f\k\z\m\6\1\7\w\8\u\q\5\m\2\x\c\b\d\v\4\b\q\m\p\7\f\7\0\0\c\7\w\j\c\6\p\k\0\5\k\8\g\p\x\x\d\t\b\h\b\m\z\x\p\c\2\p\z\9\u\k\2\o\i\3\f\v\r\w\b\3\o\3\1\a\j\z\j\n\v\i\8\t\7\b\b\7\b\0\y\b\p\6\o\a\t\3\v\o\y\x\n\q\o\o\k\7\2\a\z\d\p\d\2\v\3\l\y\e\l\1\i\o\0\e\7\5\q\1\i\y\v\g\6\f\6\c\q\j\7\7\a\f\q\a\c\k\7\n\s\s\5\g\y\z\k\g\h\8\k\4\3\y\p\4\q\e\v\o\u\0\i\y\i\g\e\5\i\x\9\g\d\g\m\j\n\r\c\5\b\q\4\t\i\m\v\9\4\4\p\l\0\p\x\e\v\2\0\6\8\1\s\s\8\8\0\z ]] 00:08:46.941 11:39:17 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@87 -- # for flag_rw in "${flags_rw[@]}" 00:08:46.941 11:39:17 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@89 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=direct --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=sync 00:08:47.201 [2024-11-28 11:39:17.089097] Starting SPDK v25.01-pre git sha1 35cd3e84d / DPDK 24.11.0-rc4 initialization... 00:08:47.201 [2024-11-28 11:39:17.089431] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid74412 ] 00:08:47.201 [2024-11-28 11:39:17.215745] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.11.0-rc4 is used. There is no support for it in SPDK. Enabled only for validation. 
00:08:47.201 [2024-11-28 11:39:17.242678] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:47.201 [2024-11-28 11:39:17.282680] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:08:47.460 [2024-11-28 11:39:17.340044] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:08:47.460  [2024-11-28T11:39:17.586Z] Copying: 512/512 [B] (average 166 kBps) 00:08:47.460 00:08:47.460 11:39:17 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@93 -- # [[ abjtv4h9b55aicp4tu6i5f8jify7618sapkgmm5aikfxysmp2no0uu0k5ew52v6m2lcxcs5rwbev1f560bopwb4tlwpnyxs5vfjr8wudnj5u1g3p34px8xji3xd55148pc7gdqu51tbrz0q0np02xmx4mn1lzazkw9b60npswaw208vloabgetava9ujql0swdou61rbusi580u1u6tljyte25453ukwcxuv6gncv8d4hx6d83w574df30xjj5rgbuvsge49fqq6vi231ekkkca9hb4opq3npdef230fkzm617w8uq5m2xcbdv4bqmp7f700c7wjc6pk05k8gpxxdtbhbmzxpc2pz9uk2oi3fvrwb3o31ajzjnvi8t7bb7b0ybp6oat3voyxnqook72azdpd2v3lyel1io0e75q1iyvg6f6cqj77afqack7nss5gyzkgh8k43yp4qevou0iyige5ix9gdgmjnrc5bq4timv944pl0pxev20681ss880z == \a\b\j\t\v\4\h\9\b\5\5\a\i\c\p\4\t\u\6\i\5\f\8\j\i\f\y\7\6\1\8\s\a\p\k\g\m\m\5\a\i\k\f\x\y\s\m\p\2\n\o\0\u\u\0\k\5\e\w\5\2\v\6\m\2\l\c\x\c\s\5\r\w\b\e\v\1\f\5\6\0\b\o\p\w\b\4\t\l\w\p\n\y\x\s\5\v\f\j\r\8\w\u\d\n\j\5\u\1\g\3\p\3\4\p\x\8\x\j\i\3\x\d\5\5\1\4\8\p\c\7\g\d\q\u\5\1\t\b\r\z\0\q\0\n\p\0\2\x\m\x\4\m\n\1\l\z\a\z\k\w\9\b\6\0\n\p\s\w\a\w\2\0\8\v\l\o\a\b\g\e\t\a\v\a\9\u\j\q\l\0\s\w\d\o\u\6\1\r\b\u\s\i\5\8\0\u\1\u\6\t\l\j\y\t\e\2\5\4\5\3\u\k\w\c\x\u\v\6\g\n\c\v\8\d\4\h\x\6\d\8\3\w\5\7\4\d\f\3\0\x\j\j\5\r\g\b\u\v\s\g\e\4\9\f\q\q\6\v\i\2\3\1\e\k\k\k\c\a\9\h\b\4\o\p\q\3\n\p\d\e\f\2\3\0\f\k\z\m\6\1\7\w\8\u\q\5\m\2\x\c\b\d\v\4\b\q\m\p\7\f\7\0\0\c\7\w\j\c\6\p\k\0\5\k\8\g\p\x\x\d\t\b\h\b\m\z\x\p\c\2\p\z\9\u\k\2\o\i\3\f\v\r\w\b\3\o\3\1\a\j\z\j\n\v\i\8\t\7\b\b\7\b\0\y\b\p\6\o\a\t\3\v\o\y\x\n\q\o\o\k\7\2\a\z\d\p\d\2\v\3\l\y\e\l\1\i\o\0\e\7\5\q\1\i\y\v\g\6\f\6\c\q\j\7\7\a\f\q\a\c\k\7\n\s\s\5\g\y\z\k\g\h\8\k\4\3\y\p\4\q\e\v\o\u\0\i\y\i\g\e\5\i\x\9\g\d\g\m\j\n\r\c\5\b\q\4\t\i\m\v\9\4\4\p\l\0\p\x\e\v\2\0\6\8\1\s\s\8\8\0\z ]] 00:08:47.460 11:39:17 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@87 -- # for flag_rw in "${flags_rw[@]}" 00:08:47.460 11:39:17 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@89 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=direct --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=dsync 00:08:47.720 [2024-11-28 11:39:17.597703] Starting SPDK v25.01-pre git sha1 35cd3e84d / DPDK 24.11.0-rc4 initialization... 00:08:47.720 [2024-11-28 11:39:17.597947] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid74421 ] 00:08:47.720 [2024-11-28 11:39:17.717233] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.11.0-rc4 is used. There is no support for it in SPDK. Enabled only for validation. 
00:08:47.720 [2024-11-28 11:39:17.741982] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:47.720 [2024-11-28 11:39:17.782560] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:08:47.720 [2024-11-28 11:39:17.837019] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:08:47.980  [2024-11-28T11:39:18.106Z] Copying: 512/512 [B] (average 250 kBps) 00:08:47.980 00:08:47.980 11:39:18 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@93 -- # [[ abjtv4h9b55aicp4tu6i5f8jify7618sapkgmm5aikfxysmp2no0uu0k5ew52v6m2lcxcs5rwbev1f560bopwb4tlwpnyxs5vfjr8wudnj5u1g3p34px8xji3xd55148pc7gdqu51tbrz0q0np02xmx4mn1lzazkw9b60npswaw208vloabgetava9ujql0swdou61rbusi580u1u6tljyte25453ukwcxuv6gncv8d4hx6d83w574df30xjj5rgbuvsge49fqq6vi231ekkkca9hb4opq3npdef230fkzm617w8uq5m2xcbdv4bqmp7f700c7wjc6pk05k8gpxxdtbhbmzxpc2pz9uk2oi3fvrwb3o31ajzjnvi8t7bb7b0ybp6oat3voyxnqook72azdpd2v3lyel1io0e75q1iyvg6f6cqj77afqack7nss5gyzkgh8k43yp4qevou0iyige5ix9gdgmjnrc5bq4timv944pl0pxev20681ss880z == \a\b\j\t\v\4\h\9\b\5\5\a\i\c\p\4\t\u\6\i\5\f\8\j\i\f\y\7\6\1\8\s\a\p\k\g\m\m\5\a\i\k\f\x\y\s\m\p\2\n\o\0\u\u\0\k\5\e\w\5\2\v\6\m\2\l\c\x\c\s\5\r\w\b\e\v\1\f\5\6\0\b\o\p\w\b\4\t\l\w\p\n\y\x\s\5\v\f\j\r\8\w\u\d\n\j\5\u\1\g\3\p\3\4\p\x\8\x\j\i\3\x\d\5\5\1\4\8\p\c\7\g\d\q\u\5\1\t\b\r\z\0\q\0\n\p\0\2\x\m\x\4\m\n\1\l\z\a\z\k\w\9\b\6\0\n\p\s\w\a\w\2\0\8\v\l\o\a\b\g\e\t\a\v\a\9\u\j\q\l\0\s\w\d\o\u\6\1\r\b\u\s\i\5\8\0\u\1\u\6\t\l\j\y\t\e\2\5\4\5\3\u\k\w\c\x\u\v\6\g\n\c\v\8\d\4\h\x\6\d\8\3\w\5\7\4\d\f\3\0\x\j\j\5\r\g\b\u\v\s\g\e\4\9\f\q\q\6\v\i\2\3\1\e\k\k\k\c\a\9\h\b\4\o\p\q\3\n\p\d\e\f\2\3\0\f\k\z\m\6\1\7\w\8\u\q\5\m\2\x\c\b\d\v\4\b\q\m\p\7\f\7\0\0\c\7\w\j\c\6\p\k\0\5\k\8\g\p\x\x\d\t\b\h\b\m\z\x\p\c\2\p\z\9\u\k\2\o\i\3\f\v\r\w\b\3\o\3\1\a\j\z\j\n\v\i\8\t\7\b\b\7\b\0\y\b\p\6\o\a\t\3\v\o\y\x\n\q\o\o\k\7\2\a\z\d\p\d\2\v\3\l\y\e\l\1\i\o\0\e\7\5\q\1\i\y\v\g\6\f\6\c\q\j\7\7\a\f\q\a\c\k\7\n\s\s\5\g\y\z\k\g\h\8\k\4\3\y\p\4\q\e\v\o\u\0\i\y\i\g\e\5\i\x\9\g\d\g\m\j\n\r\c\5\b\q\4\t\i\m\v\9\4\4\p\l\0\p\x\e\v\2\0\6\8\1\s\s\8\8\0\z ]] 00:08:47.980 11:39:18 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@85 -- # for flag_ro in "${flags_ro[@]}" 00:08:47.980 11:39:18 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@86 -- # gen_bytes 512 00:08:47.980 11:39:18 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/common.sh@98 -- # xtrace_disable 00:08:47.980 11:39:18 spdk_dd.spdk_dd_posix.dd_flags_misc -- common/autotest_common.sh@10 -- # set +x 00:08:47.980 11:39:18 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@87 -- # for flag_rw in "${flags_rw[@]}" 00:08:47.980 11:39:18 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@89 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=nonblock --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=direct 00:08:48.240 [2024-11-28 11:39:18.123250] Starting SPDK v25.01-pre git sha1 35cd3e84d / DPDK 24.11.0-rc4 initialization... 00:08:48.240 [2024-11-28 11:39:18.123549] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid74431 ] 00:08:48.240 [2024-11-28 11:39:18.251376] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.11.0-rc4 is used. There is no support for it in SPDK. Enabled only for validation. 
00:08:48.240 [2024-11-28 11:39:18.277781] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:48.240 [2024-11-28 11:39:18.316278] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:08:48.499 [2024-11-28 11:39:18.376849] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:08:48.499  [2024-11-28T11:39:18.625Z] Copying: 512/512 [B] (average 500 kBps) 00:08:48.499 00:08:48.499 11:39:18 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@93 -- # [[ ha2wba6bnss31sp1luuvighszhq2qm6cnye72pkchm2zmvbrp38q4mt4zs4xsnojbcvnvt4tcxsd60jdlvaejhrosq30nyn1i4lt0gzklviz0vx3ilz6rr7ril376xz4rb4q7cwhf7dnbjfadqd19eri3r1xi4rd9zy8l9v6o2be1813e44d71nwf4263phjngg1azpwzzhjazyr2w54py56u5j00ywc7s4swc5lqxrg30xteoxcf90wq7xsydic55p2v6obaxwao346vuivh7bs0f0ykycy1rcndhj4s4j91jbmjsqfdg2ihi3i442bd4o8p7dyddbg9146h53kb0z36k6n7jieynlxqzkjbraguqx4kdcnwlbphrzca7tt7k6ica9gfny09c8fhs49dl1whu11agmflap03ya2q253tvcizh0uebeh0mnzfqqhl8di4chaeuhuof6v8benb88aoofazmw3n5yk9c8vdubqdevemh0q9xsw8gmrv8rh == \h\a\2\w\b\a\6\b\n\s\s\3\1\s\p\1\l\u\u\v\i\g\h\s\z\h\q\2\q\m\6\c\n\y\e\7\2\p\k\c\h\m\2\z\m\v\b\r\p\3\8\q\4\m\t\4\z\s\4\x\s\n\o\j\b\c\v\n\v\t\4\t\c\x\s\d\6\0\j\d\l\v\a\e\j\h\r\o\s\q\3\0\n\y\n\1\i\4\l\t\0\g\z\k\l\v\i\z\0\v\x\3\i\l\z\6\r\r\7\r\i\l\3\7\6\x\z\4\r\b\4\q\7\c\w\h\f\7\d\n\b\j\f\a\d\q\d\1\9\e\r\i\3\r\1\x\i\4\r\d\9\z\y\8\l\9\v\6\o\2\b\e\1\8\1\3\e\4\4\d\7\1\n\w\f\4\2\6\3\p\h\j\n\g\g\1\a\z\p\w\z\z\h\j\a\z\y\r\2\w\5\4\p\y\5\6\u\5\j\0\0\y\w\c\7\s\4\s\w\c\5\l\q\x\r\g\3\0\x\t\e\o\x\c\f\9\0\w\q\7\x\s\y\d\i\c\5\5\p\2\v\6\o\b\a\x\w\a\o\3\4\6\v\u\i\v\h\7\b\s\0\f\0\y\k\y\c\y\1\r\c\n\d\h\j\4\s\4\j\9\1\j\b\m\j\s\q\f\d\g\2\i\h\i\3\i\4\4\2\b\d\4\o\8\p\7\d\y\d\d\b\g\9\1\4\6\h\5\3\k\b\0\z\3\6\k\6\n\7\j\i\e\y\n\l\x\q\z\k\j\b\r\a\g\u\q\x\4\k\d\c\n\w\l\b\p\h\r\z\c\a\7\t\t\7\k\6\i\c\a\9\g\f\n\y\0\9\c\8\f\h\s\4\9\d\l\1\w\h\u\1\1\a\g\m\f\l\a\p\0\3\y\a\2\q\2\5\3\t\v\c\i\z\h\0\u\e\b\e\h\0\m\n\z\f\q\q\h\l\8\d\i\4\c\h\a\e\u\h\u\o\f\6\v\8\b\e\n\b\8\8\a\o\o\f\a\z\m\w\3\n\5\y\k\9\c\8\v\d\u\b\q\d\e\v\e\m\h\0\q\9\x\s\w\8\g\m\r\v\8\r\h ]] 00:08:48.499 11:39:18 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@87 -- # for flag_rw in "${flags_rw[@]}" 00:08:48.499 11:39:18 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@89 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=nonblock --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=nonblock 00:08:48.759 [2024-11-28 11:39:18.657032] Starting SPDK v25.01-pre git sha1 35cd3e84d / DPDK 24.11.0-rc4 initialization... 00:08:48.759 [2024-11-28 11:39:18.657151] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid74440 ] 00:08:48.759 [2024-11-28 11:39:18.783798] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.11.0-rc4 is used. There is no support for it in SPDK. Enabled only for validation. 
00:08:48.759 [2024-11-28 11:39:18.809703] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:48.759 [2024-11-28 11:39:18.855074] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:08:49.018 [2024-11-28 11:39:18.910030] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:08:49.018  [2024-11-28T11:39:19.144Z] Copying: 512/512 [B] (average 500 kBps) 00:08:49.018 00:08:49.019 11:39:19 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@93 -- # [[ ha2wba6bnss31sp1luuvighszhq2qm6cnye72pkchm2zmvbrp38q4mt4zs4xsnojbcvnvt4tcxsd60jdlvaejhrosq30nyn1i4lt0gzklviz0vx3ilz6rr7ril376xz4rb4q7cwhf7dnbjfadqd19eri3r1xi4rd9zy8l9v6o2be1813e44d71nwf4263phjngg1azpwzzhjazyr2w54py56u5j00ywc7s4swc5lqxrg30xteoxcf90wq7xsydic55p2v6obaxwao346vuivh7bs0f0ykycy1rcndhj4s4j91jbmjsqfdg2ihi3i442bd4o8p7dyddbg9146h53kb0z36k6n7jieynlxqzkjbraguqx4kdcnwlbphrzca7tt7k6ica9gfny09c8fhs49dl1whu11agmflap03ya2q253tvcizh0uebeh0mnzfqqhl8di4chaeuhuof6v8benb88aoofazmw3n5yk9c8vdubqdevemh0q9xsw8gmrv8rh == \h\a\2\w\b\a\6\b\n\s\s\3\1\s\p\1\l\u\u\v\i\g\h\s\z\h\q\2\q\m\6\c\n\y\e\7\2\p\k\c\h\m\2\z\m\v\b\r\p\3\8\q\4\m\t\4\z\s\4\x\s\n\o\j\b\c\v\n\v\t\4\t\c\x\s\d\6\0\j\d\l\v\a\e\j\h\r\o\s\q\3\0\n\y\n\1\i\4\l\t\0\g\z\k\l\v\i\z\0\v\x\3\i\l\z\6\r\r\7\r\i\l\3\7\6\x\z\4\r\b\4\q\7\c\w\h\f\7\d\n\b\j\f\a\d\q\d\1\9\e\r\i\3\r\1\x\i\4\r\d\9\z\y\8\l\9\v\6\o\2\b\e\1\8\1\3\e\4\4\d\7\1\n\w\f\4\2\6\3\p\h\j\n\g\g\1\a\z\p\w\z\z\h\j\a\z\y\r\2\w\5\4\p\y\5\6\u\5\j\0\0\y\w\c\7\s\4\s\w\c\5\l\q\x\r\g\3\0\x\t\e\o\x\c\f\9\0\w\q\7\x\s\y\d\i\c\5\5\p\2\v\6\o\b\a\x\w\a\o\3\4\6\v\u\i\v\h\7\b\s\0\f\0\y\k\y\c\y\1\r\c\n\d\h\j\4\s\4\j\9\1\j\b\m\j\s\q\f\d\g\2\i\h\i\3\i\4\4\2\b\d\4\o\8\p\7\d\y\d\d\b\g\9\1\4\6\h\5\3\k\b\0\z\3\6\k\6\n\7\j\i\e\y\n\l\x\q\z\k\j\b\r\a\g\u\q\x\4\k\d\c\n\w\l\b\p\h\r\z\c\a\7\t\t\7\k\6\i\c\a\9\g\f\n\y\0\9\c\8\f\h\s\4\9\d\l\1\w\h\u\1\1\a\g\m\f\l\a\p\0\3\y\a\2\q\2\5\3\t\v\c\i\z\h\0\u\e\b\e\h\0\m\n\z\f\q\q\h\l\8\d\i\4\c\h\a\e\u\h\u\o\f\6\v\8\b\e\n\b\8\8\a\o\o\f\a\z\m\w\3\n\5\y\k\9\c\8\v\d\u\b\q\d\e\v\e\m\h\0\q\9\x\s\w\8\g\m\r\v\8\r\h ]] 00:08:49.019 11:39:19 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@87 -- # for flag_rw in "${flags_rw[@]}" 00:08:49.019 11:39:19 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@89 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=nonblock --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=sync 00:08:49.278 [2024-11-28 11:39:19.183351] Starting SPDK v25.01-pre git sha1 35cd3e84d / DPDK 24.11.0-rc4 initialization... 00:08:49.278 [2024-11-28 11:39:19.183498] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid74450 ] 00:08:49.278 [2024-11-28 11:39:19.305563] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.11.0-rc4 is used. There is no support for it in SPDK. Enabled only for validation. 
00:08:49.278 [2024-11-28 11:39:19.329975] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:49.278 [2024-11-28 11:39:19.374660] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:08:49.538 [2024-11-28 11:39:19.433248] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:08:49.538  [2024-11-28T11:39:19.664Z] Copying: 512/512 [B] (average 500 kBps) 00:08:49.538 00:08:49.538 11:39:19 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@93 -- # [[ ha2wba6bnss31sp1luuvighszhq2qm6cnye72pkchm2zmvbrp38q4mt4zs4xsnojbcvnvt4tcxsd60jdlvaejhrosq30nyn1i4lt0gzklviz0vx3ilz6rr7ril376xz4rb4q7cwhf7dnbjfadqd19eri3r1xi4rd9zy8l9v6o2be1813e44d71nwf4263phjngg1azpwzzhjazyr2w54py56u5j00ywc7s4swc5lqxrg30xteoxcf90wq7xsydic55p2v6obaxwao346vuivh7bs0f0ykycy1rcndhj4s4j91jbmjsqfdg2ihi3i442bd4o8p7dyddbg9146h53kb0z36k6n7jieynlxqzkjbraguqx4kdcnwlbphrzca7tt7k6ica9gfny09c8fhs49dl1whu11agmflap03ya2q253tvcizh0uebeh0mnzfqqhl8di4chaeuhuof6v8benb88aoofazmw3n5yk9c8vdubqdevemh0q9xsw8gmrv8rh == \h\a\2\w\b\a\6\b\n\s\s\3\1\s\p\1\l\u\u\v\i\g\h\s\z\h\q\2\q\m\6\c\n\y\e\7\2\p\k\c\h\m\2\z\m\v\b\r\p\3\8\q\4\m\t\4\z\s\4\x\s\n\o\j\b\c\v\n\v\t\4\t\c\x\s\d\6\0\j\d\l\v\a\e\j\h\r\o\s\q\3\0\n\y\n\1\i\4\l\t\0\g\z\k\l\v\i\z\0\v\x\3\i\l\z\6\r\r\7\r\i\l\3\7\6\x\z\4\r\b\4\q\7\c\w\h\f\7\d\n\b\j\f\a\d\q\d\1\9\e\r\i\3\r\1\x\i\4\r\d\9\z\y\8\l\9\v\6\o\2\b\e\1\8\1\3\e\4\4\d\7\1\n\w\f\4\2\6\3\p\h\j\n\g\g\1\a\z\p\w\z\z\h\j\a\z\y\r\2\w\5\4\p\y\5\6\u\5\j\0\0\y\w\c\7\s\4\s\w\c\5\l\q\x\r\g\3\0\x\t\e\o\x\c\f\9\0\w\q\7\x\s\y\d\i\c\5\5\p\2\v\6\o\b\a\x\w\a\o\3\4\6\v\u\i\v\h\7\b\s\0\f\0\y\k\y\c\y\1\r\c\n\d\h\j\4\s\4\j\9\1\j\b\m\j\s\q\f\d\g\2\i\h\i\3\i\4\4\2\b\d\4\o\8\p\7\d\y\d\d\b\g\9\1\4\6\h\5\3\k\b\0\z\3\6\k\6\n\7\j\i\e\y\n\l\x\q\z\k\j\b\r\a\g\u\q\x\4\k\d\c\n\w\l\b\p\h\r\z\c\a\7\t\t\7\k\6\i\c\a\9\g\f\n\y\0\9\c\8\f\h\s\4\9\d\l\1\w\h\u\1\1\a\g\m\f\l\a\p\0\3\y\a\2\q\2\5\3\t\v\c\i\z\h\0\u\e\b\e\h\0\m\n\z\f\q\q\h\l\8\d\i\4\c\h\a\e\u\h\u\o\f\6\v\8\b\e\n\b\8\8\a\o\o\f\a\z\m\w\3\n\5\y\k\9\c\8\v\d\u\b\q\d\e\v\e\m\h\0\q\9\x\s\w\8\g\m\r\v\8\r\h ]] 00:08:49.538 11:39:19 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@87 -- # for flag_rw in "${flags_rw[@]}" 00:08:49.538 11:39:19 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@89 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=nonblock --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=dsync 00:08:49.798 [2024-11-28 11:39:19.707345] Starting SPDK v25.01-pre git sha1 35cd3e84d / DPDK 24.11.0-rc4 initialization... 00:08:49.798 [2024-11-28 11:39:19.707655] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid74459 ] 00:08:49.798 [2024-11-28 11:39:19.834020] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.11.0-rc4 is used. There is no support for it in SPDK. Enabled only for validation. 
00:08:49.798 [2024-11-28 11:39:19.861061] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:49.798 [2024-11-28 11:39:19.911416] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:08:50.056 [2024-11-28 11:39:19.967221] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:08:50.056  [2024-11-28T11:39:20.182Z] Copying: 512/512 [B] (average 250 kBps) 00:08:50.056 00:08:50.316 11:39:20 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@93 -- # [[ ha2wba6bnss31sp1luuvighszhq2qm6cnye72pkchm2zmvbrp38q4mt4zs4xsnojbcvnvt4tcxsd60jdlvaejhrosq30nyn1i4lt0gzklviz0vx3ilz6rr7ril376xz4rb4q7cwhf7dnbjfadqd19eri3r1xi4rd9zy8l9v6o2be1813e44d71nwf4263phjngg1azpwzzhjazyr2w54py56u5j00ywc7s4swc5lqxrg30xteoxcf90wq7xsydic55p2v6obaxwao346vuivh7bs0f0ykycy1rcndhj4s4j91jbmjsqfdg2ihi3i442bd4o8p7dyddbg9146h53kb0z36k6n7jieynlxqzkjbraguqx4kdcnwlbphrzca7tt7k6ica9gfny09c8fhs49dl1whu11agmflap03ya2q253tvcizh0uebeh0mnzfqqhl8di4chaeuhuof6v8benb88aoofazmw3n5yk9c8vdubqdevemh0q9xsw8gmrv8rh == \h\a\2\w\b\a\6\b\n\s\s\3\1\s\p\1\l\u\u\v\i\g\h\s\z\h\q\2\q\m\6\c\n\y\e\7\2\p\k\c\h\m\2\z\m\v\b\r\p\3\8\q\4\m\t\4\z\s\4\x\s\n\o\j\b\c\v\n\v\t\4\t\c\x\s\d\6\0\j\d\l\v\a\e\j\h\r\o\s\q\3\0\n\y\n\1\i\4\l\t\0\g\z\k\l\v\i\z\0\v\x\3\i\l\z\6\r\r\7\r\i\l\3\7\6\x\z\4\r\b\4\q\7\c\w\h\f\7\d\n\b\j\f\a\d\q\d\1\9\e\r\i\3\r\1\x\i\4\r\d\9\z\y\8\l\9\v\6\o\2\b\e\1\8\1\3\e\4\4\d\7\1\n\w\f\4\2\6\3\p\h\j\n\g\g\1\a\z\p\w\z\z\h\j\a\z\y\r\2\w\5\4\p\y\5\6\u\5\j\0\0\y\w\c\7\s\4\s\w\c\5\l\q\x\r\g\3\0\x\t\e\o\x\c\f\9\0\w\q\7\x\s\y\d\i\c\5\5\p\2\v\6\o\b\a\x\w\a\o\3\4\6\v\u\i\v\h\7\b\s\0\f\0\y\k\y\c\y\1\r\c\n\d\h\j\4\s\4\j\9\1\j\b\m\j\s\q\f\d\g\2\i\h\i\3\i\4\4\2\b\d\4\o\8\p\7\d\y\d\d\b\g\9\1\4\6\h\5\3\k\b\0\z\3\6\k\6\n\7\j\i\e\y\n\l\x\q\z\k\j\b\r\a\g\u\q\x\4\k\d\c\n\w\l\b\p\h\r\z\c\a\7\t\t\7\k\6\i\c\a\9\g\f\n\y\0\9\c\8\f\h\s\4\9\d\l\1\w\h\u\1\1\a\g\m\f\l\a\p\0\3\y\a\2\q\2\5\3\t\v\c\i\z\h\0\u\e\b\e\h\0\m\n\z\f\q\q\h\l\8\d\i\4\c\h\a\e\u\h\u\o\f\6\v\8\b\e\n\b\8\8\a\o\o\f\a\z\m\w\3\n\5\y\k\9\c\8\v\d\u\b\q\d\e\v\e\m\h\0\q\9\x\s\w\8\g\m\r\v\8\r\h ]] 00:08:50.316 00:08:50.316 real 0m4.209s 00:08:50.316 user 0m2.165s 00:08:50.316 sys 0m2.279s 00:08:50.316 11:39:20 spdk_dd.spdk_dd_posix.dd_flags_misc -- common/autotest_common.sh@1130 -- # xtrace_disable 00:08:50.316 ************************************ 00:08:50.316 END TEST dd_flags_misc 00:08:50.316 ************************************ 00:08:50.316 11:39:20 spdk_dd.spdk_dd_posix.dd_flags_misc -- common/autotest_common.sh@10 -- # set +x 00:08:50.316 11:39:20 spdk_dd.spdk_dd_posix -- dd/posix.sh@131 -- # tests_forced_aio 00:08:50.316 11:39:20 spdk_dd.spdk_dd_posix -- dd/posix.sh@110 -- # printf '* Second test run%s\n' ', disabling liburing, forcing AIO' 00:08:50.316 * Second test run, disabling liburing, forcing AIO 00:08:50.316 11:39:20 spdk_dd.spdk_dd_posix -- dd/posix.sh@113 -- # DD_APP+=("--aio") 00:08:50.316 11:39:20 spdk_dd.spdk_dd_posix -- dd/posix.sh@114 -- # run_test dd_flag_append_forced_aio append 00:08:50.316 11:39:20 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:08:50.316 11:39:20 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1111 -- # xtrace_disable 00:08:50.316 11:39:20 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@10 -- # set +x 00:08:50.316 ************************************ 00:08:50.316 START TEST dd_flag_append_forced_aio 00:08:50.316 ************************************ 00:08:50.316 11:39:20 spdk_dd.spdk_dd_posix.dd_flag_append_forced_aio -- common/autotest_common.sh@1129 -- # append 00:08:50.316 
11:39:20 spdk_dd.spdk_dd_posix.dd_flag_append_forced_aio -- dd/posix.sh@16 -- # local dump0 00:08:50.316 11:39:20 spdk_dd.spdk_dd_posix.dd_flag_append_forced_aio -- dd/posix.sh@17 -- # local dump1 00:08:50.316 11:39:20 spdk_dd.spdk_dd_posix.dd_flag_append_forced_aio -- dd/posix.sh@19 -- # gen_bytes 32 00:08:50.316 11:39:20 spdk_dd.spdk_dd_posix.dd_flag_append_forced_aio -- dd/common.sh@98 -- # xtrace_disable 00:08:50.316 11:39:20 spdk_dd.spdk_dd_posix.dd_flag_append_forced_aio -- common/autotest_common.sh@10 -- # set +x 00:08:50.316 11:39:20 spdk_dd.spdk_dd_posix.dd_flag_append_forced_aio -- dd/posix.sh@19 -- # dump0=unb9wolym3rk6w376dplbjh1134xxr07 00:08:50.316 11:39:20 spdk_dd.spdk_dd_posix.dd_flag_append_forced_aio -- dd/posix.sh@20 -- # gen_bytes 32 00:08:50.316 11:39:20 spdk_dd.spdk_dd_posix.dd_flag_append_forced_aio -- dd/common.sh@98 -- # xtrace_disable 00:08:50.316 11:39:20 spdk_dd.spdk_dd_posix.dd_flag_append_forced_aio -- common/autotest_common.sh@10 -- # set +x 00:08:50.316 11:39:20 spdk_dd.spdk_dd_posix.dd_flag_append_forced_aio -- dd/posix.sh@20 -- # dump1=pgfjoj9zbrslsqnvkal74l36888xg7l6 00:08:50.316 11:39:20 spdk_dd.spdk_dd_posix.dd_flag_append_forced_aio -- dd/posix.sh@22 -- # printf %s unb9wolym3rk6w376dplbjh1134xxr07 00:08:50.316 11:39:20 spdk_dd.spdk_dd_posix.dd_flag_append_forced_aio -- dd/posix.sh@23 -- # printf %s pgfjoj9zbrslsqnvkal74l36888xg7l6 00:08:50.316 11:39:20 spdk_dd.spdk_dd_posix.dd_flag_append_forced_aio -- dd/posix.sh@25 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=append 00:08:50.316 [2024-11-28 11:39:20.307036] Starting SPDK v25.01-pre git sha1 35cd3e84d / DPDK 24.11.0-rc4 initialization... 00:08:50.316 [2024-11-28 11:39:20.307136] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid74492 ] 00:08:50.316 [2024-11-28 11:39:20.434500] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.11.0-rc4 is used. There is no support for it in SPDK. Enabled only for validation. 
00:08:50.577 [2024-11-28 11:39:20.463070] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:50.577 [2024-11-28 11:39:20.512852] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:08:50.577 [2024-11-28 11:39:20.567077] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:08:50.577  [2024-11-28T11:39:20.962Z] Copying: 32/32 [B] (average 31 kBps) 00:08:50.836 00:08:50.836 11:39:20 spdk_dd.spdk_dd_posix.dd_flag_append_forced_aio -- dd/posix.sh@27 -- # [[ pgfjoj9zbrslsqnvkal74l36888xg7l6unb9wolym3rk6w376dplbjh1134xxr07 == \p\g\f\j\o\j\9\z\b\r\s\l\s\q\n\v\k\a\l\7\4\l\3\6\8\8\8\x\g\7\l\6\u\n\b\9\w\o\l\y\m\3\r\k\6\w\3\7\6\d\p\l\b\j\h\1\1\3\4\x\x\r\0\7 ]] 00:08:50.836 00:08:50.836 real 0m0.557s 00:08:50.836 user 0m0.290s 00:08:50.836 sys 0m0.146s 00:08:50.836 11:39:20 spdk_dd.spdk_dd_posix.dd_flag_append_forced_aio -- common/autotest_common.sh@1130 -- # xtrace_disable 00:08:50.836 ************************************ 00:08:50.836 END TEST dd_flag_append_forced_aio 00:08:50.836 ************************************ 00:08:50.836 11:39:20 spdk_dd.spdk_dd_posix.dd_flag_append_forced_aio -- common/autotest_common.sh@10 -- # set +x 00:08:50.836 11:39:20 spdk_dd.spdk_dd_posix -- dd/posix.sh@115 -- # run_test dd_flag_directory_forced_aio directory 00:08:50.836 11:39:20 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:08:50.836 11:39:20 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1111 -- # xtrace_disable 00:08:50.836 11:39:20 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@10 -- # set +x 00:08:50.836 ************************************ 00:08:50.836 START TEST dd_flag_directory_forced_aio 00:08:50.836 ************************************ 00:08:50.836 11:39:20 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@1129 -- # directory 00:08:50.836 11:39:20 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- dd/posix.sh@31 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=directory --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 00:08:50.836 11:39:20 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@652 -- # local es=0 00:08:50.836 11:39:20 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@654 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=directory --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 00:08:50.836 11:39:20 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@640 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:08:50.836 11:39:20 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:08:50.836 11:39:20 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@644 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:08:50.836 11:39:20 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:08:50.836 11:39:20 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@646 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:08:50.836 11:39:20 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:08:50.836 11:39:20 
spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@646 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:08:50.836 11:39:20 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@646 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:08:50.836 11:39:20 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@655 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=directory --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 00:08:50.836 [2024-11-28 11:39:20.916819] Starting SPDK v25.01-pre git sha1 35cd3e84d / DPDK 24.11.0-rc4 initialization... 00:08:50.836 [2024-11-28 11:39:20.916928] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid74514 ] 00:08:51.095 [2024-11-28 11:39:21.043001] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.11.0-rc4 is used. There is no support for it in SPDK. Enabled only for validation. 00:08:51.095 [2024-11-28 11:39:21.066372] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:51.095 [2024-11-28 11:39:21.116743] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:08:51.095 [2024-11-28 11:39:21.172925] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:08:51.095 [2024-11-28 11:39:21.209406] spdk_dd.c: 894:dd_open_file: *ERROR*: Could not open file /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0: Not a directory 00:08:51.095 [2024-11-28 11:39:21.209458] spdk_dd.c:1083:dd_run: *ERROR*: /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0: Not a directory 00:08:51.095 [2024-11-28 11:39:21.209492] app.c:1064:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:08:51.354 [2024-11-28 11:39:21.333276] spdk_dd.c:1536:main: *ERROR*: Error occurred while performing copy 00:08:51.354 11:39:21 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@655 -- # es=236 00:08:51.354 11:39:21 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:08:51.354 11:39:21 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@664 -- # es=108 00:08:51.354 11:39:21 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@665 -- # case "$es" in 00:08:51.354 11:39:21 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@672 -- # es=1 00:08:51.354 11:39:21 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:08:51.354 11:39:21 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- dd/posix.sh@32 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --oflag=directory 00:08:51.354 11:39:21 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@652 -- # local es=0 00:08:51.354 11:39:21 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@654 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --oflag=directory 00:08:51.354 11:39:21 
spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@640 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:08:51.354 11:39:21 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:08:51.354 11:39:21 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@644 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:08:51.354 11:39:21 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:08:51.354 11:39:21 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@646 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:08:51.354 11:39:21 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:08:51.354 11:39:21 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@646 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:08:51.354 11:39:21 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@646 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:08:51.354 11:39:21 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@655 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --oflag=directory 00:08:51.354 [2024-11-28 11:39:21.468670] Starting SPDK v25.01-pre git sha1 35cd3e84d / DPDK 24.11.0-rc4 initialization... 00:08:51.354 [2024-11-28 11:39:21.469051] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid74529 ] 00:08:51.613 [2024-11-28 11:39:21.598350] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.11.0-rc4 is used. There is no support for it in SPDK. Enabled only for validation. 
00:08:51.613 [2024-11-28 11:39:21.631331] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:51.613 [2024-11-28 11:39:21.687923] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:08:51.873 [2024-11-28 11:39:21.746916] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:08:51.873 [2024-11-28 11:39:21.784625] spdk_dd.c: 894:dd_open_file: *ERROR*: Could not open file /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0: Not a directory 00:08:51.873 [2024-11-28 11:39:21.784687] spdk_dd.c:1132:dd_run: *ERROR*: /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0: Not a directory 00:08:51.873 [2024-11-28 11:39:21.784725] app.c:1064:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:08:51.873 [2024-11-28 11:39:21.903960] spdk_dd.c:1536:main: *ERROR*: Error occurred while performing copy 00:08:51.873 11:39:21 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@655 -- # es=236 00:08:51.873 11:39:21 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:08:51.873 11:39:21 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@664 -- # es=108 00:08:51.873 11:39:21 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@665 -- # case "$es" in 00:08:51.873 11:39:21 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@672 -- # es=1 00:08:51.873 ************************************ 00:08:51.873 END TEST dd_flag_directory_forced_aio 00:08:51.873 ************************************ 00:08:51.873 11:39:21 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:08:51.873 00:08:51.873 real 0m1.107s 00:08:51.873 user 0m0.592s 00:08:51.873 sys 0m0.302s 00:08:51.873 11:39:21 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@1130 -- # xtrace_disable 00:08:51.873 11:39:21 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@10 -- # set +x 00:08:52.133 11:39:22 spdk_dd.spdk_dd_posix -- dd/posix.sh@116 -- # run_test dd_flag_nofollow_forced_aio nofollow 00:08:52.133 11:39:22 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:08:52.133 11:39:22 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1111 -- # xtrace_disable 00:08:52.133 11:39:22 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@10 -- # set +x 00:08:52.133 ************************************ 00:08:52.133 START TEST dd_flag_nofollow_forced_aio 00:08:52.133 ************************************ 00:08:52.133 11:39:22 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@1129 -- # nofollow 00:08:52.133 11:39:22 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- dd/posix.sh@36 -- # local test_file0_link=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0.link 00:08:52.133 11:39:22 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- dd/posix.sh@37 -- # local test_file1_link=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1.link 00:08:52.133 11:39:22 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- dd/posix.sh@39 -- # ln -fs /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0.link 00:08:52.133 11:39:22 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- dd/posix.sh@40 -- # ln -fs /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1.link 00:08:52.133 11:39:22 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- 
dd/posix.sh@42 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0.link --iflag=nofollow --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:08:52.133 11:39:22 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@652 -- # local es=0 00:08:52.133 11:39:22 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@654 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0.link --iflag=nofollow --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:08:52.133 11:39:22 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@640 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:08:52.133 11:39:22 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:08:52.133 11:39:22 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@644 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:08:52.133 11:39:22 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:08:52.133 11:39:22 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@646 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:08:52.133 11:39:22 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:08:52.133 11:39:22 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@646 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:08:52.133 11:39:22 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@646 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:08:52.133 11:39:22 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@655 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0.link --iflag=nofollow --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:08:52.133 [2024-11-28 11:39:22.086057] Starting SPDK v25.01-pre git sha1 35cd3e84d / DPDK 24.11.0-rc4 initialization... 00:08:52.133 [2024-11-28 11:39:22.086524] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid74552 ] 00:08:52.133 [2024-11-28 11:39:22.213369] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.11.0-rc4 is used. There is no support for it in SPDK. Enabled only for validation. 
00:08:52.133 [2024-11-28 11:39:22.241948] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:52.392 [2024-11-28 11:39:22.298423] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:08:52.392 [2024-11-28 11:39:22.356066] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:08:52.392 [2024-11-28 11:39:22.392715] spdk_dd.c: 894:dd_open_file: *ERROR*: Could not open file /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0.link: Too many levels of symbolic links 00:08:52.392 [2024-11-28 11:39:22.392765] spdk_dd.c:1083:dd_run: *ERROR*: /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0.link: Too many levels of symbolic links 00:08:52.392 [2024-11-28 11:39:22.392801] app.c:1064:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:08:52.392 [2024-11-28 11:39:22.511194] spdk_dd.c:1536:main: *ERROR*: Error occurred while performing copy 00:08:52.652 11:39:22 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@655 -- # es=216 00:08:52.652 11:39:22 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:08:52.652 11:39:22 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@664 -- # es=88 00:08:52.652 11:39:22 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@665 -- # case "$es" in 00:08:52.652 11:39:22 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@672 -- # es=1 00:08:52.652 11:39:22 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:08:52.652 11:39:22 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- dd/posix.sh@43 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1.link --oflag=nofollow 00:08:52.652 11:39:22 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@652 -- # local es=0 00:08:52.652 11:39:22 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@654 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1.link --oflag=nofollow 00:08:52.652 11:39:22 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@640 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:08:52.652 11:39:22 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:08:52.652 11:39:22 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@644 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:08:52.652 11:39:22 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:08:52.652 11:39:22 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@646 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:08:52.652 11:39:22 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:08:52.652 11:39:22 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@646 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:08:52.652 11:39:22 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@646 -- # [[ -x 
/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:08:52.652 11:39:22 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@655 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1.link --oflag=nofollow 00:08:52.652 [2024-11-28 11:39:22.632036] Starting SPDK v25.01-pre git sha1 35cd3e84d / DPDK 24.11.0-rc4 initialization... 00:08:52.652 [2024-11-28 11:39:22.632146] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid74567 ] 00:08:52.652 [2024-11-28 11:39:22.757425] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.11.0-rc4 is used. There is no support for it in SPDK. Enabled only for validation. 00:08:52.911 [2024-11-28 11:39:22.785075] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:52.911 [2024-11-28 11:39:22.832782] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:08:52.911 [2024-11-28 11:39:22.886996] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:08:52.911 [2024-11-28 11:39:22.923559] spdk_dd.c: 894:dd_open_file: *ERROR*: Could not open file /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1.link: Too many levels of symbolic links 00:08:52.911 [2024-11-28 11:39:22.923609] spdk_dd.c:1132:dd_run: *ERROR*: /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1.link: Too many levels of symbolic links 00:08:52.911 [2024-11-28 11:39:22.923644] app.c:1064:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:08:53.170 [2024-11-28 11:39:23.040542] spdk_dd.c:1536:main: *ERROR*: Error occurred while performing copy 00:08:53.170 11:39:23 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@655 -- # es=216 00:08:53.170 11:39:23 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:08:53.170 11:39:23 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@664 -- # es=88 00:08:53.170 11:39:23 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@665 -- # case "$es" in 00:08:53.170 11:39:23 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@672 -- # es=1 00:08:53.170 11:39:23 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:08:53.170 11:39:23 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- dd/posix.sh@46 -- # gen_bytes 512 00:08:53.170 11:39:23 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- dd/common.sh@98 -- # xtrace_disable 00:08:53.170 11:39:23 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@10 -- # set +x 00:08:53.170 11:39:23 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- dd/posix.sh@48 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0.link --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:08:53.170 [2024-11-28 11:39:23.163509] Starting SPDK v25.01-pre git sha1 35cd3e84d / DPDK 24.11.0-rc4 initialization... 
00:08:53.170 [2024-11-28 11:39:23.163631] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid74575 ] 00:08:53.170 [2024-11-28 11:39:23.290619] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.11.0-rc4 is used. There is no support for it in SPDK. Enabled only for validation. 00:08:53.430 [2024-11-28 11:39:23.317751] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:53.430 [2024-11-28 11:39:23.364397] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:08:53.430 [2024-11-28 11:39:23.419083] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:08:53.430  [2024-11-28T11:39:23.828Z] Copying: 512/512 [B] (average 500 kBps) 00:08:53.702 00:08:53.702 11:39:23 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- dd/posix.sh@49 -- # [[ 2n3wlqe12a4wb1o9q9ptmqzrxvgifyppkeonpwe7cozrp7n48dnm0abztlxtz0u5dns01jiov83a3vt30lto5ptac3cjo3f6i500l9s3lfv3juqmujrcxrq2ehnow3w6nb2rwz98ur6nujv8mahc1h2zbf2x1ir7esczvij0zzwj0onkrs9d7i0s8jxmofg4e4i3o2hk71wzsi1y6sh8wwcdpdt1xvc65ucio75547yoopq9lav1mpqjwpqlwyddbigkmpvl1z833kbrxquol1xsuja7va78eao811t85q0twu4ztufcfhfr7npfbkq8uwmqlf18lill13m031oxgewe5d7zd4bf1y42d7q47uviphgeqpjgxawliuphs3ccel1nrb5gwjc7yavy1uaorqecuuj0y83st75rh5jzyq3ijftxju00b3jwnvn1tgtwqxpf04r0yplbkyu0zj8m9az35hyik6l940u37nzr2rvf27dna8v1ed4aex6dykt9 == \2\n\3\w\l\q\e\1\2\a\4\w\b\1\o\9\q\9\p\t\m\q\z\r\x\v\g\i\f\y\p\p\k\e\o\n\p\w\e\7\c\o\z\r\p\7\n\4\8\d\n\m\0\a\b\z\t\l\x\t\z\0\u\5\d\n\s\0\1\j\i\o\v\8\3\a\3\v\t\3\0\l\t\o\5\p\t\a\c\3\c\j\o\3\f\6\i\5\0\0\l\9\s\3\l\f\v\3\j\u\q\m\u\j\r\c\x\r\q\2\e\h\n\o\w\3\w\6\n\b\2\r\w\z\9\8\u\r\6\n\u\j\v\8\m\a\h\c\1\h\2\z\b\f\2\x\1\i\r\7\e\s\c\z\v\i\j\0\z\z\w\j\0\o\n\k\r\s\9\d\7\i\0\s\8\j\x\m\o\f\g\4\e\4\i\3\o\2\h\k\7\1\w\z\s\i\1\y\6\s\h\8\w\w\c\d\p\d\t\1\x\v\c\6\5\u\c\i\o\7\5\5\4\7\y\o\o\p\q\9\l\a\v\1\m\p\q\j\w\p\q\l\w\y\d\d\b\i\g\k\m\p\v\l\1\z\8\3\3\k\b\r\x\q\u\o\l\1\x\s\u\j\a\7\v\a\7\8\e\a\o\8\1\1\t\8\5\q\0\t\w\u\4\z\t\u\f\c\f\h\f\r\7\n\p\f\b\k\q\8\u\w\m\q\l\f\1\8\l\i\l\l\1\3\m\0\3\1\o\x\g\e\w\e\5\d\7\z\d\4\b\f\1\y\4\2\d\7\q\4\7\u\v\i\p\h\g\e\q\p\j\g\x\a\w\l\i\u\p\h\s\3\c\c\e\l\1\n\r\b\5\g\w\j\c\7\y\a\v\y\1\u\a\o\r\q\e\c\u\u\j\0\y\8\3\s\t\7\5\r\h\5\j\z\y\q\3\i\j\f\t\x\j\u\0\0\b\3\j\w\n\v\n\1\t\g\t\w\q\x\p\f\0\4\r\0\y\p\l\b\k\y\u\0\z\j\8\m\9\a\z\3\5\h\y\i\k\6\l\9\4\0\u\3\7\n\z\r\2\r\v\f\2\7\d\n\a\8\v\1\e\d\4\a\e\x\6\d\y\k\t\9 ]] 00:08:53.702 00:08:53.702 real 0m1.642s 00:08:53.702 user 0m0.861s 00:08:53.702 sys 0m0.446s 00:08:53.702 11:39:23 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@1130 -- # xtrace_disable 00:08:53.702 11:39:23 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@10 -- # set +x 00:08:53.702 ************************************ 00:08:53.702 END TEST dd_flag_nofollow_forced_aio 00:08:53.702 ************************************ 00:08:53.702 11:39:23 spdk_dd.spdk_dd_posix -- dd/posix.sh@117 -- # run_test dd_flag_noatime_forced_aio noatime 00:08:53.702 11:39:23 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:08:53.702 11:39:23 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1111 -- # xtrace_disable 00:08:53.702 11:39:23 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@10 -- # set +x 00:08:53.702 ************************************ 00:08:53.702 START TEST 
dd_flag_noatime_forced_aio 00:08:53.702 ************************************ 00:08:53.703 11:39:23 spdk_dd.spdk_dd_posix.dd_flag_noatime_forced_aio -- common/autotest_common.sh@1129 -- # noatime 00:08:53.703 11:39:23 spdk_dd.spdk_dd_posix.dd_flag_noatime_forced_aio -- dd/posix.sh@53 -- # local atime_if 00:08:53.703 11:39:23 spdk_dd.spdk_dd_posix.dd_flag_noatime_forced_aio -- dd/posix.sh@54 -- # local atime_of 00:08:53.703 11:39:23 spdk_dd.spdk_dd_posix.dd_flag_noatime_forced_aio -- dd/posix.sh@58 -- # gen_bytes 512 00:08:53.703 11:39:23 spdk_dd.spdk_dd_posix.dd_flag_noatime_forced_aio -- dd/common.sh@98 -- # xtrace_disable 00:08:53.703 11:39:23 spdk_dd.spdk_dd_posix.dd_flag_noatime_forced_aio -- common/autotest_common.sh@10 -- # set +x 00:08:53.703 11:39:23 spdk_dd.spdk_dd_posix.dd_flag_noatime_forced_aio -- dd/posix.sh@60 -- # stat --printf=%X /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 00:08:53.703 11:39:23 spdk_dd.spdk_dd_posix.dd_flag_noatime_forced_aio -- dd/posix.sh@60 -- # atime_if=1732793963 00:08:53.703 11:39:23 spdk_dd.spdk_dd_posix.dd_flag_noatime_forced_aio -- dd/posix.sh@61 -- # stat --printf=%X /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:08:53.703 11:39:23 spdk_dd.spdk_dd_posix.dd_flag_noatime_forced_aio -- dd/posix.sh@61 -- # atime_of=1732793963 00:08:53.703 11:39:23 spdk_dd.spdk_dd_posix.dd_flag_noatime_forced_aio -- dd/posix.sh@66 -- # sleep 1 00:08:54.672 11:39:24 spdk_dd.spdk_dd_posix.dd_flag_noatime_forced_aio -- dd/posix.sh@68 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=noatime --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:08:54.672 [2024-11-28 11:39:24.793066] Starting SPDK v25.01-pre git sha1 35cd3e84d / DPDK 24.11.0-rc4 initialization... 00:08:54.672 [2024-11-28 11:39:24.793191] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid74615 ] 00:08:54.932 [2024-11-28 11:39:24.920095] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.11.0-rc4 is used. There is no support for it in SPDK. Enabled only for validation. 
00:08:54.932 [2024-11-28 11:39:24.950522] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:54.932 [2024-11-28 11:39:24.990580] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:08:54.932 [2024-11-28 11:39:25.051179] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:08:55.189  [2024-11-28T11:39:25.315Z] Copying: 512/512 [B] (average 500 kBps) 00:08:55.189 00:08:55.189 11:39:25 spdk_dd.spdk_dd_posix.dd_flag_noatime_forced_aio -- dd/posix.sh@69 -- # stat --printf=%X /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 00:08:55.189 11:39:25 spdk_dd.spdk_dd_posix.dd_flag_noatime_forced_aio -- dd/posix.sh@69 -- # (( atime_if == 1732793963 )) 00:08:55.189 11:39:25 spdk_dd.spdk_dd_posix.dd_flag_noatime_forced_aio -- dd/posix.sh@70 -- # stat --printf=%X /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:08:55.189 11:39:25 spdk_dd.spdk_dd_posix.dd_flag_noatime_forced_aio -- dd/posix.sh@70 -- # (( atime_of == 1732793963 )) 00:08:55.189 11:39:25 spdk_dd.spdk_dd_posix.dd_flag_noatime_forced_aio -- dd/posix.sh@72 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:08:55.447 [2024-11-28 11:39:25.366998] Starting SPDK v25.01-pre git sha1 35cd3e84d / DPDK 24.11.0-rc4 initialization... 00:08:55.447 [2024-11-28 11:39:25.367128] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid74627 ] 00:08:55.447 [2024-11-28 11:39:25.492780] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.11.0-rc4 is used. There is no support for it in SPDK. Enabled only for validation. 
00:08:55.447 [2024-11-28 11:39:25.520212] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:55.447 [2024-11-28 11:39:25.565317] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:08:55.706 [2024-11-28 11:39:25.622472] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:08:55.706  [2024-11-28T11:39:26.091Z] Copying: 512/512 [B] (average 500 kBps) 00:08:55.965 00:08:55.965 11:39:25 spdk_dd.spdk_dd_posix.dd_flag_noatime_forced_aio -- dd/posix.sh@73 -- # stat --printf=%X /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 00:08:55.965 11:39:25 spdk_dd.spdk_dd_posix.dd_flag_noatime_forced_aio -- dd/posix.sh@73 -- # (( atime_if < 1732793965 )) 00:08:55.965 00:08:55.965 real 0m2.152s 00:08:55.965 user 0m0.588s 00:08:55.965 sys 0m0.324s 00:08:55.965 11:39:25 spdk_dd.spdk_dd_posix.dd_flag_noatime_forced_aio -- common/autotest_common.sh@1130 -- # xtrace_disable 00:08:55.965 11:39:25 spdk_dd.spdk_dd_posix.dd_flag_noatime_forced_aio -- common/autotest_common.sh@10 -- # set +x 00:08:55.965 ************************************ 00:08:55.965 END TEST dd_flag_noatime_forced_aio 00:08:55.965 ************************************ 00:08:55.965 11:39:25 spdk_dd.spdk_dd_posix -- dd/posix.sh@118 -- # run_test dd_flags_misc_forced_aio io 00:08:55.965 11:39:25 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:08:55.965 11:39:25 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1111 -- # xtrace_disable 00:08:55.965 11:39:25 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@10 -- # set +x 00:08:55.965 ************************************ 00:08:55.965 START TEST dd_flags_misc_forced_aio 00:08:55.965 ************************************ 00:08:55.965 11:39:25 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- common/autotest_common.sh@1129 -- # io 00:08:55.965 11:39:25 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@77 -- # local flags_ro flags_rw flag_ro flag_rw 00:08:55.965 11:39:25 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@81 -- # flags_ro=(direct nonblock) 00:08:55.965 11:39:25 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@82 -- # flags_rw=("${flags_ro[@]}" sync dsync) 00:08:55.965 11:39:25 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@85 -- # for flag_ro in "${flags_ro[@]}" 00:08:55.965 11:39:25 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@86 -- # gen_bytes 512 00:08:55.965 11:39:25 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/common.sh@98 -- # xtrace_disable 00:08:55.965 11:39:25 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- common/autotest_common.sh@10 -- # set +x 00:08:55.965 11:39:25 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@87 -- # for flag_rw in "${flags_rw[@]}" 00:08:55.965 11:39:25 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@89 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=direct --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=direct 00:08:55.965 [2024-11-28 11:39:25.987697] Starting SPDK v25.01-pre git sha1 35cd3e84d / DPDK 24.11.0-rc4 initialization... 
00:08:55.965 [2024-11-28 11:39:25.988095] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid74653 ] 00:08:56.225 [2024-11-28 11:39:26.115164] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.11.0-rc4 is used. There is no support for it in SPDK. Enabled only for validation. 00:08:56.226 [2024-11-28 11:39:26.142106] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:56.226 [2024-11-28 11:39:26.196333] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:08:56.226 [2024-11-28 11:39:26.254995] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:08:56.226  [2024-11-28T11:39:26.611Z] Copying: 512/512 [B] (average 500 kBps) 00:08:56.485 00:08:56.485 11:39:26 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@93 -- # [[ gyz92eqmhy084n9s7wcwvwcxmvqibftis8k2mmuhidfsnialnt7og7pknka8ys94oij2y1614t602z5jv3ntz64lg67cmibl2q3pkfijincedpmj7f3zidj50tl0y1sx1xwyswiln0p2ogtvr2tn2k2aqoavtfni9al3sdvc62tgtewmai7z6ajgmsk1pxm415n8n107h31yxdcvbumppmh1mjaz75r6sy3xx83xkei9zz6gi0h3aeupf2yypzoowrltes7ngkqdfrcmgsd96xica97n2s5guoyybvumlqdqo6xjfa1ir08nrswd6dd1fx6p3piepattupvqtuihqiv666rpajenzrled72gk0pdpgpe4dki6ot3v00zq0l44ovy7g5ct1j51ul0wz2qu0wlbmssnzfjhdn0y4hk0nv5gwm3bq5hkjxcvy8e2zow328xvgyowdu58egnhealwdmcii57yv1mrpfri3vc3o8f3kwenjywt584m9aefliy == \g\y\z\9\2\e\q\m\h\y\0\8\4\n\9\s\7\w\c\w\v\w\c\x\m\v\q\i\b\f\t\i\s\8\k\2\m\m\u\h\i\d\f\s\n\i\a\l\n\t\7\o\g\7\p\k\n\k\a\8\y\s\9\4\o\i\j\2\y\1\6\1\4\t\6\0\2\z\5\j\v\3\n\t\z\6\4\l\g\6\7\c\m\i\b\l\2\q\3\p\k\f\i\j\i\n\c\e\d\p\m\j\7\f\3\z\i\d\j\5\0\t\l\0\y\1\s\x\1\x\w\y\s\w\i\l\n\0\p\2\o\g\t\v\r\2\t\n\2\k\2\a\q\o\a\v\t\f\n\i\9\a\l\3\s\d\v\c\6\2\t\g\t\e\w\m\a\i\7\z\6\a\j\g\m\s\k\1\p\x\m\4\1\5\n\8\n\1\0\7\h\3\1\y\x\d\c\v\b\u\m\p\p\m\h\1\m\j\a\z\7\5\r\6\s\y\3\x\x\8\3\x\k\e\i\9\z\z\6\g\i\0\h\3\a\e\u\p\f\2\y\y\p\z\o\o\w\r\l\t\e\s\7\n\g\k\q\d\f\r\c\m\g\s\d\9\6\x\i\c\a\9\7\n\2\s\5\g\u\o\y\y\b\v\u\m\l\q\d\q\o\6\x\j\f\a\1\i\r\0\8\n\r\s\w\d\6\d\d\1\f\x\6\p\3\p\i\e\p\a\t\t\u\p\v\q\t\u\i\h\q\i\v\6\6\6\r\p\a\j\e\n\z\r\l\e\d\7\2\g\k\0\p\d\p\g\p\e\4\d\k\i\6\o\t\3\v\0\0\z\q\0\l\4\4\o\v\y\7\g\5\c\t\1\j\5\1\u\l\0\w\z\2\q\u\0\w\l\b\m\s\s\n\z\f\j\h\d\n\0\y\4\h\k\0\n\v\5\g\w\m\3\b\q\5\h\k\j\x\c\v\y\8\e\2\z\o\w\3\2\8\x\v\g\y\o\w\d\u\5\8\e\g\n\h\e\a\l\w\d\m\c\i\i\5\7\y\v\1\m\r\p\f\r\i\3\v\c\3\o\8\f\3\k\w\e\n\j\y\w\t\5\8\4\m\9\a\e\f\l\i\y ]] 00:08:56.485 11:39:26 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@87 -- # for flag_rw in "${flags_rw[@]}" 00:08:56.485 11:39:26 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@89 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=direct --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=nonblock 00:08:56.485 [2024-11-28 11:39:26.552558] Starting SPDK v25.01-pre git sha1 35cd3e84d / DPDK 24.11.0-rc4 initialization... 00:08:56.486 [2024-11-28 11:39:26.552658] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid74661 ] 00:08:56.745 [2024-11-28 11:39:26.678672] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.11.0-rc4 is used. 
There is no support for it in SPDK. Enabled only for validation. 00:08:56.745 [2024-11-28 11:39:26.707141] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:56.745 [2024-11-28 11:39:26.748207] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:08:56.745 [2024-11-28 11:39:26.800917] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:08:56.745  [2024-11-28T11:39:27.131Z] Copying: 512/512 [B] (average 500 kBps) 00:08:57.005 00:08:57.005 11:39:27 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@93 -- # [[ gyz92eqmhy084n9s7wcwvwcxmvqibftis8k2mmuhidfsnialnt7og7pknka8ys94oij2y1614t602z5jv3ntz64lg67cmibl2q3pkfijincedpmj7f3zidj50tl0y1sx1xwyswiln0p2ogtvr2tn2k2aqoavtfni9al3sdvc62tgtewmai7z6ajgmsk1pxm415n8n107h31yxdcvbumppmh1mjaz75r6sy3xx83xkei9zz6gi0h3aeupf2yypzoowrltes7ngkqdfrcmgsd96xica97n2s5guoyybvumlqdqo6xjfa1ir08nrswd6dd1fx6p3piepattupvqtuihqiv666rpajenzrled72gk0pdpgpe4dki6ot3v00zq0l44ovy7g5ct1j51ul0wz2qu0wlbmssnzfjhdn0y4hk0nv5gwm3bq5hkjxcvy8e2zow328xvgyowdu58egnhealwdmcii57yv1mrpfri3vc3o8f3kwenjywt584m9aefliy == \g\y\z\9\2\e\q\m\h\y\0\8\4\n\9\s\7\w\c\w\v\w\c\x\m\v\q\i\b\f\t\i\s\8\k\2\m\m\u\h\i\d\f\s\n\i\a\l\n\t\7\o\g\7\p\k\n\k\a\8\y\s\9\4\o\i\j\2\y\1\6\1\4\t\6\0\2\z\5\j\v\3\n\t\z\6\4\l\g\6\7\c\m\i\b\l\2\q\3\p\k\f\i\j\i\n\c\e\d\p\m\j\7\f\3\z\i\d\j\5\0\t\l\0\y\1\s\x\1\x\w\y\s\w\i\l\n\0\p\2\o\g\t\v\r\2\t\n\2\k\2\a\q\o\a\v\t\f\n\i\9\a\l\3\s\d\v\c\6\2\t\g\t\e\w\m\a\i\7\z\6\a\j\g\m\s\k\1\p\x\m\4\1\5\n\8\n\1\0\7\h\3\1\y\x\d\c\v\b\u\m\p\p\m\h\1\m\j\a\z\7\5\r\6\s\y\3\x\x\8\3\x\k\e\i\9\z\z\6\g\i\0\h\3\a\e\u\p\f\2\y\y\p\z\o\o\w\r\l\t\e\s\7\n\g\k\q\d\f\r\c\m\g\s\d\9\6\x\i\c\a\9\7\n\2\s\5\g\u\o\y\y\b\v\u\m\l\q\d\q\o\6\x\j\f\a\1\i\r\0\8\n\r\s\w\d\6\d\d\1\f\x\6\p\3\p\i\e\p\a\t\t\u\p\v\q\t\u\i\h\q\i\v\6\6\6\r\p\a\j\e\n\z\r\l\e\d\7\2\g\k\0\p\d\p\g\p\e\4\d\k\i\6\o\t\3\v\0\0\z\q\0\l\4\4\o\v\y\7\g\5\c\t\1\j\5\1\u\l\0\w\z\2\q\u\0\w\l\b\m\s\s\n\z\f\j\h\d\n\0\y\4\h\k\0\n\v\5\g\w\m\3\b\q\5\h\k\j\x\c\v\y\8\e\2\z\o\w\3\2\8\x\v\g\y\o\w\d\u\5\8\e\g\n\h\e\a\l\w\d\m\c\i\i\5\7\y\v\1\m\r\p\f\r\i\3\v\c\3\o\8\f\3\k\w\e\n\j\y\w\t\5\8\4\m\9\a\e\f\l\i\y ]] 00:08:57.005 11:39:27 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@87 -- # for flag_rw in "${flags_rw[@]}" 00:08:57.005 11:39:27 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@89 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=direct --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=sync 00:08:57.005 [2024-11-28 11:39:27.083335] Starting SPDK v25.01-pre git sha1 35cd3e84d / DPDK 24.11.0-rc4 initialization... 00:08:57.005 [2024-11-28 11:39:27.083433] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid74668 ] 00:08:57.264 [2024-11-28 11:39:27.208410] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.11.0-rc4 is used. There is no support for it in SPDK. Enabled only for validation. 
00:08:57.264 [2024-11-28 11:39:27.235840] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:57.264 [2024-11-28 11:39:27.282228] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:08:57.264 [2024-11-28 11:39:27.340480] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:08:57.264  [2024-11-28T11:39:27.649Z] Copying: 512/512 [B] (average 166 kBps) 00:08:57.523 00:08:57.524 11:39:27 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@93 -- # [[ gyz92eqmhy084n9s7wcwvwcxmvqibftis8k2mmuhidfsnialnt7og7pknka8ys94oij2y1614t602z5jv3ntz64lg67cmibl2q3pkfijincedpmj7f3zidj50tl0y1sx1xwyswiln0p2ogtvr2tn2k2aqoavtfni9al3sdvc62tgtewmai7z6ajgmsk1pxm415n8n107h31yxdcvbumppmh1mjaz75r6sy3xx83xkei9zz6gi0h3aeupf2yypzoowrltes7ngkqdfrcmgsd96xica97n2s5guoyybvumlqdqo6xjfa1ir08nrswd6dd1fx6p3piepattupvqtuihqiv666rpajenzrled72gk0pdpgpe4dki6ot3v00zq0l44ovy7g5ct1j51ul0wz2qu0wlbmssnzfjhdn0y4hk0nv5gwm3bq5hkjxcvy8e2zow328xvgyowdu58egnhealwdmcii57yv1mrpfri3vc3o8f3kwenjywt584m9aefliy == \g\y\z\9\2\e\q\m\h\y\0\8\4\n\9\s\7\w\c\w\v\w\c\x\m\v\q\i\b\f\t\i\s\8\k\2\m\m\u\h\i\d\f\s\n\i\a\l\n\t\7\o\g\7\p\k\n\k\a\8\y\s\9\4\o\i\j\2\y\1\6\1\4\t\6\0\2\z\5\j\v\3\n\t\z\6\4\l\g\6\7\c\m\i\b\l\2\q\3\p\k\f\i\j\i\n\c\e\d\p\m\j\7\f\3\z\i\d\j\5\0\t\l\0\y\1\s\x\1\x\w\y\s\w\i\l\n\0\p\2\o\g\t\v\r\2\t\n\2\k\2\a\q\o\a\v\t\f\n\i\9\a\l\3\s\d\v\c\6\2\t\g\t\e\w\m\a\i\7\z\6\a\j\g\m\s\k\1\p\x\m\4\1\5\n\8\n\1\0\7\h\3\1\y\x\d\c\v\b\u\m\p\p\m\h\1\m\j\a\z\7\5\r\6\s\y\3\x\x\8\3\x\k\e\i\9\z\z\6\g\i\0\h\3\a\e\u\p\f\2\y\y\p\z\o\o\w\r\l\t\e\s\7\n\g\k\q\d\f\r\c\m\g\s\d\9\6\x\i\c\a\9\7\n\2\s\5\g\u\o\y\y\b\v\u\m\l\q\d\q\o\6\x\j\f\a\1\i\r\0\8\n\r\s\w\d\6\d\d\1\f\x\6\p\3\p\i\e\p\a\t\t\u\p\v\q\t\u\i\h\q\i\v\6\6\6\r\p\a\j\e\n\z\r\l\e\d\7\2\g\k\0\p\d\p\g\p\e\4\d\k\i\6\o\t\3\v\0\0\z\q\0\l\4\4\o\v\y\7\g\5\c\t\1\j\5\1\u\l\0\w\z\2\q\u\0\w\l\b\m\s\s\n\z\f\j\h\d\n\0\y\4\h\k\0\n\v\5\g\w\m\3\b\q\5\h\k\j\x\c\v\y\8\e\2\z\o\w\3\2\8\x\v\g\y\o\w\d\u\5\8\e\g\n\h\e\a\l\w\d\m\c\i\i\5\7\y\v\1\m\r\p\f\r\i\3\v\c\3\o\8\f\3\k\w\e\n\j\y\w\t\5\8\4\m\9\a\e\f\l\i\y ]] 00:08:57.524 11:39:27 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@87 -- # for flag_rw in "${flags_rw[@]}" 00:08:57.524 11:39:27 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@89 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=direct --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=dsync 00:08:57.524 [2024-11-28 11:39:27.642778] Starting SPDK v25.01-pre git sha1 35cd3e84d / DPDK 24.11.0-rc4 initialization... 00:08:57.524 [2024-11-28 11:39:27.642877] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid74676 ] 00:08:57.783 [2024-11-28 11:39:27.768951] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.11.0-rc4 is used. There is no support for it in SPDK. Enabled only for validation. 
00:08:57.783 [2024-11-28 11:39:27.794612] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:57.783 [2024-11-28 11:39:27.837205] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:08:57.783 [2024-11-28 11:39:27.896192] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:08:58.042  [2024-11-28T11:39:28.168Z] Copying: 512/512 [B] (average 250 kBps) 00:08:58.042 00:08:58.042 11:39:28 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@93 -- # [[ gyz92eqmhy084n9s7wcwvwcxmvqibftis8k2mmuhidfsnialnt7og7pknka8ys94oij2y1614t602z5jv3ntz64lg67cmibl2q3pkfijincedpmj7f3zidj50tl0y1sx1xwyswiln0p2ogtvr2tn2k2aqoavtfni9al3sdvc62tgtewmai7z6ajgmsk1pxm415n8n107h31yxdcvbumppmh1mjaz75r6sy3xx83xkei9zz6gi0h3aeupf2yypzoowrltes7ngkqdfrcmgsd96xica97n2s5guoyybvumlqdqo6xjfa1ir08nrswd6dd1fx6p3piepattupvqtuihqiv666rpajenzrled72gk0pdpgpe4dki6ot3v00zq0l44ovy7g5ct1j51ul0wz2qu0wlbmssnzfjhdn0y4hk0nv5gwm3bq5hkjxcvy8e2zow328xvgyowdu58egnhealwdmcii57yv1mrpfri3vc3o8f3kwenjywt584m9aefliy == \g\y\z\9\2\e\q\m\h\y\0\8\4\n\9\s\7\w\c\w\v\w\c\x\m\v\q\i\b\f\t\i\s\8\k\2\m\m\u\h\i\d\f\s\n\i\a\l\n\t\7\o\g\7\p\k\n\k\a\8\y\s\9\4\o\i\j\2\y\1\6\1\4\t\6\0\2\z\5\j\v\3\n\t\z\6\4\l\g\6\7\c\m\i\b\l\2\q\3\p\k\f\i\j\i\n\c\e\d\p\m\j\7\f\3\z\i\d\j\5\0\t\l\0\y\1\s\x\1\x\w\y\s\w\i\l\n\0\p\2\o\g\t\v\r\2\t\n\2\k\2\a\q\o\a\v\t\f\n\i\9\a\l\3\s\d\v\c\6\2\t\g\t\e\w\m\a\i\7\z\6\a\j\g\m\s\k\1\p\x\m\4\1\5\n\8\n\1\0\7\h\3\1\y\x\d\c\v\b\u\m\p\p\m\h\1\m\j\a\z\7\5\r\6\s\y\3\x\x\8\3\x\k\e\i\9\z\z\6\g\i\0\h\3\a\e\u\p\f\2\y\y\p\z\o\o\w\r\l\t\e\s\7\n\g\k\q\d\f\r\c\m\g\s\d\9\6\x\i\c\a\9\7\n\2\s\5\g\u\o\y\y\b\v\u\m\l\q\d\q\o\6\x\j\f\a\1\i\r\0\8\n\r\s\w\d\6\d\d\1\f\x\6\p\3\p\i\e\p\a\t\t\u\p\v\q\t\u\i\h\q\i\v\6\6\6\r\p\a\j\e\n\z\r\l\e\d\7\2\g\k\0\p\d\p\g\p\e\4\d\k\i\6\o\t\3\v\0\0\z\q\0\l\4\4\o\v\y\7\g\5\c\t\1\j\5\1\u\l\0\w\z\2\q\u\0\w\l\b\m\s\s\n\z\f\j\h\d\n\0\y\4\h\k\0\n\v\5\g\w\m\3\b\q\5\h\k\j\x\c\v\y\8\e\2\z\o\w\3\2\8\x\v\g\y\o\w\d\u\5\8\e\g\n\h\e\a\l\w\d\m\c\i\i\5\7\y\v\1\m\r\p\f\r\i\3\v\c\3\o\8\f\3\k\w\e\n\j\y\w\t\5\8\4\m\9\a\e\f\l\i\y ]] 00:08:58.042 11:39:28 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@85 -- # for flag_ro in "${flags_ro[@]}" 00:08:58.042 11:39:28 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@86 -- # gen_bytes 512 00:08:58.043 11:39:28 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/common.sh@98 -- # xtrace_disable 00:08:58.043 11:39:28 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- common/autotest_common.sh@10 -- # set +x 00:08:58.302 11:39:28 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@87 -- # for flag_rw in "${flags_rw[@]}" 00:08:58.302 11:39:28 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@89 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=nonblock --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=direct 00:08:58.302 [2024-11-28 11:39:28.232183] Starting SPDK v25.01-pre git sha1 35cd3e84d / DPDK 24.11.0-rc4 initialization... 00:08:58.302 [2024-11-28 11:39:28.232492] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid74683 ] 00:08:58.302 [2024-11-28 11:39:28.358860] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.11.0-rc4 is used. There is no support for it in SPDK. Enabled only for validation. 
00:08:58.302 [2024-11-28 11:39:28.387835] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:58.561 [2024-11-28 11:39:28.435598] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:08:58.561 [2024-11-28 11:39:28.491305] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:08:58.561  [2024-11-28T11:39:28.946Z] Copying: 512/512 [B] (average 500 kBps) 00:08:58.820 00:08:58.820 11:39:28 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@93 -- # [[ ajj767oaddazzpqa128zctiqhfb06qyo68h4pq248z4rf24ebaivcupy15kooaah3mkxinqima6o0e2tubuihd9jng75fvohtp4yomeo02cdhpw2qy4xrbpos69t78bp3oi80idtlg8xnx4dur2g59ee9avc6bd00jpsl7kiqn8bfmio36hzl1ya9205chwld34vxnqrd4vl5tp6yahwzy4ol6s8m28kz410t842m7sbxr4u6p5m3quah70w3rrrmu563mwcv8g0s07zla6lhqawv5v3go82olwu687ru7dplsb7aazz4m7g8va71angz4sxhtsukq60o5kk3jx1ttl6vsibcx9gadt8fdc96zj88phhomb77gt5vcg9dbl9ft4wf8dis26k0ckn19kl9a0tqkm7ht9d9om0f53ft3mzz7qpnffto5plnklpbrq9hw4c3oosqwg8axrcbk4zd3zk0ib4dwl2dx6lc4avcwkl5ugn4jp2fvh0wh4uznv1 == \a\j\j\7\6\7\o\a\d\d\a\z\z\p\q\a\1\2\8\z\c\t\i\q\h\f\b\0\6\q\y\o\6\8\h\4\p\q\2\4\8\z\4\r\f\2\4\e\b\a\i\v\c\u\p\y\1\5\k\o\o\a\a\h\3\m\k\x\i\n\q\i\m\a\6\o\0\e\2\t\u\b\u\i\h\d\9\j\n\g\7\5\f\v\o\h\t\p\4\y\o\m\e\o\0\2\c\d\h\p\w\2\q\y\4\x\r\b\p\o\s\6\9\t\7\8\b\p\3\o\i\8\0\i\d\t\l\g\8\x\n\x\4\d\u\r\2\g\5\9\e\e\9\a\v\c\6\b\d\0\0\j\p\s\l\7\k\i\q\n\8\b\f\m\i\o\3\6\h\z\l\1\y\a\9\2\0\5\c\h\w\l\d\3\4\v\x\n\q\r\d\4\v\l\5\t\p\6\y\a\h\w\z\y\4\o\l\6\s\8\m\2\8\k\z\4\1\0\t\8\4\2\m\7\s\b\x\r\4\u\6\p\5\m\3\q\u\a\h\7\0\w\3\r\r\r\m\u\5\6\3\m\w\c\v\8\g\0\s\0\7\z\l\a\6\l\h\q\a\w\v\5\v\3\g\o\8\2\o\l\w\u\6\8\7\r\u\7\d\p\l\s\b\7\a\a\z\z\4\m\7\g\8\v\a\7\1\a\n\g\z\4\s\x\h\t\s\u\k\q\6\0\o\5\k\k\3\j\x\1\t\t\l\6\v\s\i\b\c\x\9\g\a\d\t\8\f\d\c\9\6\z\j\8\8\p\h\h\o\m\b\7\7\g\t\5\v\c\g\9\d\b\l\9\f\t\4\w\f\8\d\i\s\2\6\k\0\c\k\n\1\9\k\l\9\a\0\t\q\k\m\7\h\t\9\d\9\o\m\0\f\5\3\f\t\3\m\z\z\7\q\p\n\f\f\t\o\5\p\l\n\k\l\p\b\r\q\9\h\w\4\c\3\o\o\s\q\w\g\8\a\x\r\c\b\k\4\z\d\3\z\k\0\i\b\4\d\w\l\2\d\x\6\l\c\4\a\v\c\w\k\l\5\u\g\n\4\j\p\2\f\v\h\0\w\h\4\u\z\n\v\1 ]] 00:08:58.820 11:39:28 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@87 -- # for flag_rw in "${flags_rw[@]}" 00:08:58.820 11:39:28 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@89 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=nonblock --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=nonblock 00:08:58.820 [2024-11-28 11:39:28.800507] Starting SPDK v25.01-pre git sha1 35cd3e84d / DPDK 24.11.0-rc4 initialization... 00:08:58.820 [2024-11-28 11:39:28.800604] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid74696 ] 00:08:58.820 [2024-11-28 11:39:28.925556] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.11.0-rc4 is used. There is no support for it in SPDK. Enabled only for validation. 
00:08:59.078 [2024-11-28 11:39:28.953228] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:59.078 [2024-11-28 11:39:28.995978] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:08:59.078 [2024-11-28 11:39:29.051878] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:08:59.078  [2024-11-28T11:39:29.463Z] Copying: 512/512 [B] (average 500 kBps) 00:08:59.337 00:08:59.337 11:39:29 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@93 -- # [[ ajj767oaddazzpqa128zctiqhfb06qyo68h4pq248z4rf24ebaivcupy15kooaah3mkxinqima6o0e2tubuihd9jng75fvohtp4yomeo02cdhpw2qy4xrbpos69t78bp3oi80idtlg8xnx4dur2g59ee9avc6bd00jpsl7kiqn8bfmio36hzl1ya9205chwld34vxnqrd4vl5tp6yahwzy4ol6s8m28kz410t842m7sbxr4u6p5m3quah70w3rrrmu563mwcv8g0s07zla6lhqawv5v3go82olwu687ru7dplsb7aazz4m7g8va71angz4sxhtsukq60o5kk3jx1ttl6vsibcx9gadt8fdc96zj88phhomb77gt5vcg9dbl9ft4wf8dis26k0ckn19kl9a0tqkm7ht9d9om0f53ft3mzz7qpnffto5plnklpbrq9hw4c3oosqwg8axrcbk4zd3zk0ib4dwl2dx6lc4avcwkl5ugn4jp2fvh0wh4uznv1 == \a\j\j\7\6\7\o\a\d\d\a\z\z\p\q\a\1\2\8\z\c\t\i\q\h\f\b\0\6\q\y\o\6\8\h\4\p\q\2\4\8\z\4\r\f\2\4\e\b\a\i\v\c\u\p\y\1\5\k\o\o\a\a\h\3\m\k\x\i\n\q\i\m\a\6\o\0\e\2\t\u\b\u\i\h\d\9\j\n\g\7\5\f\v\o\h\t\p\4\y\o\m\e\o\0\2\c\d\h\p\w\2\q\y\4\x\r\b\p\o\s\6\9\t\7\8\b\p\3\o\i\8\0\i\d\t\l\g\8\x\n\x\4\d\u\r\2\g\5\9\e\e\9\a\v\c\6\b\d\0\0\j\p\s\l\7\k\i\q\n\8\b\f\m\i\o\3\6\h\z\l\1\y\a\9\2\0\5\c\h\w\l\d\3\4\v\x\n\q\r\d\4\v\l\5\t\p\6\y\a\h\w\z\y\4\o\l\6\s\8\m\2\8\k\z\4\1\0\t\8\4\2\m\7\s\b\x\r\4\u\6\p\5\m\3\q\u\a\h\7\0\w\3\r\r\r\m\u\5\6\3\m\w\c\v\8\g\0\s\0\7\z\l\a\6\l\h\q\a\w\v\5\v\3\g\o\8\2\o\l\w\u\6\8\7\r\u\7\d\p\l\s\b\7\a\a\z\z\4\m\7\g\8\v\a\7\1\a\n\g\z\4\s\x\h\t\s\u\k\q\6\0\o\5\k\k\3\j\x\1\t\t\l\6\v\s\i\b\c\x\9\g\a\d\t\8\f\d\c\9\6\z\j\8\8\p\h\h\o\m\b\7\7\g\t\5\v\c\g\9\d\b\l\9\f\t\4\w\f\8\d\i\s\2\6\k\0\c\k\n\1\9\k\l\9\a\0\t\q\k\m\7\h\t\9\d\9\o\m\0\f\5\3\f\t\3\m\z\z\7\q\p\n\f\f\t\o\5\p\l\n\k\l\p\b\r\q\9\h\w\4\c\3\o\o\s\q\w\g\8\a\x\r\c\b\k\4\z\d\3\z\k\0\i\b\4\d\w\l\2\d\x\6\l\c\4\a\v\c\w\k\l\5\u\g\n\4\j\p\2\f\v\h\0\w\h\4\u\z\n\v\1 ]] 00:08:59.337 11:39:29 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@87 -- # for flag_rw in "${flags_rw[@]}" 00:08:59.337 11:39:29 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@89 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=nonblock --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=sync 00:08:59.337 [2024-11-28 11:39:29.358384] Starting SPDK v25.01-pre git sha1 35cd3e84d / DPDK 24.11.0-rc4 initialization... 00:08:59.337 [2024-11-28 11:39:29.358519] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid74704 ] 00:08:59.597 [2024-11-28 11:39:29.484189] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.11.0-rc4 is used. There is no support for it in SPDK. Enabled only for validation. 
00:08:59.597 [2024-11-28 11:39:29.513737] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:59.597 [2024-11-28 11:39:29.552733] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:08:59.597 [2024-11-28 11:39:29.613638] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:08:59.597  [2024-11-28T11:39:29.982Z] Copying: 512/512 [B] (average 166 kBps) 00:08:59.856 00:08:59.856 11:39:29 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@93 -- # [[ ajj767oaddazzpqa128zctiqhfb06qyo68h4pq248z4rf24ebaivcupy15kooaah3mkxinqima6o0e2tubuihd9jng75fvohtp4yomeo02cdhpw2qy4xrbpos69t78bp3oi80idtlg8xnx4dur2g59ee9avc6bd00jpsl7kiqn8bfmio36hzl1ya9205chwld34vxnqrd4vl5tp6yahwzy4ol6s8m28kz410t842m7sbxr4u6p5m3quah70w3rrrmu563mwcv8g0s07zla6lhqawv5v3go82olwu687ru7dplsb7aazz4m7g8va71angz4sxhtsukq60o5kk3jx1ttl6vsibcx9gadt8fdc96zj88phhomb77gt5vcg9dbl9ft4wf8dis26k0ckn19kl9a0tqkm7ht9d9om0f53ft3mzz7qpnffto5plnklpbrq9hw4c3oosqwg8axrcbk4zd3zk0ib4dwl2dx6lc4avcwkl5ugn4jp2fvh0wh4uznv1 == \a\j\j\7\6\7\o\a\d\d\a\z\z\p\q\a\1\2\8\z\c\t\i\q\h\f\b\0\6\q\y\o\6\8\h\4\p\q\2\4\8\z\4\r\f\2\4\e\b\a\i\v\c\u\p\y\1\5\k\o\o\a\a\h\3\m\k\x\i\n\q\i\m\a\6\o\0\e\2\t\u\b\u\i\h\d\9\j\n\g\7\5\f\v\o\h\t\p\4\y\o\m\e\o\0\2\c\d\h\p\w\2\q\y\4\x\r\b\p\o\s\6\9\t\7\8\b\p\3\o\i\8\0\i\d\t\l\g\8\x\n\x\4\d\u\r\2\g\5\9\e\e\9\a\v\c\6\b\d\0\0\j\p\s\l\7\k\i\q\n\8\b\f\m\i\o\3\6\h\z\l\1\y\a\9\2\0\5\c\h\w\l\d\3\4\v\x\n\q\r\d\4\v\l\5\t\p\6\y\a\h\w\z\y\4\o\l\6\s\8\m\2\8\k\z\4\1\0\t\8\4\2\m\7\s\b\x\r\4\u\6\p\5\m\3\q\u\a\h\7\0\w\3\r\r\r\m\u\5\6\3\m\w\c\v\8\g\0\s\0\7\z\l\a\6\l\h\q\a\w\v\5\v\3\g\o\8\2\o\l\w\u\6\8\7\r\u\7\d\p\l\s\b\7\a\a\z\z\4\m\7\g\8\v\a\7\1\a\n\g\z\4\s\x\h\t\s\u\k\q\6\0\o\5\k\k\3\j\x\1\t\t\l\6\v\s\i\b\c\x\9\g\a\d\t\8\f\d\c\9\6\z\j\8\8\p\h\h\o\m\b\7\7\g\t\5\v\c\g\9\d\b\l\9\f\t\4\w\f\8\d\i\s\2\6\k\0\c\k\n\1\9\k\l\9\a\0\t\q\k\m\7\h\t\9\d\9\o\m\0\f\5\3\f\t\3\m\z\z\7\q\p\n\f\f\t\o\5\p\l\n\k\l\p\b\r\q\9\h\w\4\c\3\o\o\s\q\w\g\8\a\x\r\c\b\k\4\z\d\3\z\k\0\i\b\4\d\w\l\2\d\x\6\l\c\4\a\v\c\w\k\l\5\u\g\n\4\j\p\2\f\v\h\0\w\h\4\u\z\n\v\1 ]] 00:08:59.856 11:39:29 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@87 -- # for flag_rw in "${flags_rw[@]}" 00:08:59.856 11:39:29 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@89 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=nonblock --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=dsync 00:08:59.856 [2024-11-28 11:39:29.918010] Starting SPDK v25.01-pre git sha1 35cd3e84d / DPDK 24.11.0-rc4 initialization... 00:08:59.856 [2024-11-28 11:39:29.918106] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid74711 ] 00:09:00.115 [2024-11-28 11:39:30.043454] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.11.0-rc4 is used. There is no support for it in SPDK. Enabled only for validation. 
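For orientation, the four runs above (--oflag=direct, nonblock, sync and dsync) are iterations of the same round trip: copy dd.dump0 to dd.dump1 through spdk_dd with one extra output flag per pass, then check that the destination still matches the source. A minimal sketch of that loop, using only the paths and flags visible in the invocations above; the cmp at the end is an illustrative stand-in for the harness's literal string comparison of the file contents:

DD=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd
SRC=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0
DST=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1
for flag_rw in direct nonblock sync dsync; do
    # forced-AIO pass: nonblocking reads, writes with the flag under test
    "$DD" --aio --if="$SRC" --iflag=nonblock --of="$DST" --oflag="$flag_rw"
    # the pass succeeds only if the copy is byte-identical to the source
    cmp -s "$SRC" "$DST" || echo "mismatch with oflag=$flag_rw"
done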
00:09:00.115 [2024-11-28 11:39:30.068373] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:00.116 [2024-11-28 11:39:30.108096] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:09:00.116 [2024-11-28 11:39:30.163968] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:09:00.116  [2024-11-28T11:39:30.501Z] Copying: 512/512 [B] (average 500 kBps) 00:09:00.375 00:09:00.375 11:39:30 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@93 -- # [[ ajj767oaddazzpqa128zctiqhfb06qyo68h4pq248z4rf24ebaivcupy15kooaah3mkxinqima6o0e2tubuihd9jng75fvohtp4yomeo02cdhpw2qy4xrbpos69t78bp3oi80idtlg8xnx4dur2g59ee9avc6bd00jpsl7kiqn8bfmio36hzl1ya9205chwld34vxnqrd4vl5tp6yahwzy4ol6s8m28kz410t842m7sbxr4u6p5m3quah70w3rrrmu563mwcv8g0s07zla6lhqawv5v3go82olwu687ru7dplsb7aazz4m7g8va71angz4sxhtsukq60o5kk3jx1ttl6vsibcx9gadt8fdc96zj88phhomb77gt5vcg9dbl9ft4wf8dis26k0ckn19kl9a0tqkm7ht9d9om0f53ft3mzz7qpnffto5plnklpbrq9hw4c3oosqwg8axrcbk4zd3zk0ib4dwl2dx6lc4avcwkl5ugn4jp2fvh0wh4uznv1 == \a\j\j\7\6\7\o\a\d\d\a\z\z\p\q\a\1\2\8\z\c\t\i\q\h\f\b\0\6\q\y\o\6\8\h\4\p\q\2\4\8\z\4\r\f\2\4\e\b\a\i\v\c\u\p\y\1\5\k\o\o\a\a\h\3\m\k\x\i\n\q\i\m\a\6\o\0\e\2\t\u\b\u\i\h\d\9\j\n\g\7\5\f\v\o\h\t\p\4\y\o\m\e\o\0\2\c\d\h\p\w\2\q\y\4\x\r\b\p\o\s\6\9\t\7\8\b\p\3\o\i\8\0\i\d\t\l\g\8\x\n\x\4\d\u\r\2\g\5\9\e\e\9\a\v\c\6\b\d\0\0\j\p\s\l\7\k\i\q\n\8\b\f\m\i\o\3\6\h\z\l\1\y\a\9\2\0\5\c\h\w\l\d\3\4\v\x\n\q\r\d\4\v\l\5\t\p\6\y\a\h\w\z\y\4\o\l\6\s\8\m\2\8\k\z\4\1\0\t\8\4\2\m\7\s\b\x\r\4\u\6\p\5\m\3\q\u\a\h\7\0\w\3\r\r\r\m\u\5\6\3\m\w\c\v\8\g\0\s\0\7\z\l\a\6\l\h\q\a\w\v\5\v\3\g\o\8\2\o\l\w\u\6\8\7\r\u\7\d\p\l\s\b\7\a\a\z\z\4\m\7\g\8\v\a\7\1\a\n\g\z\4\s\x\h\t\s\u\k\q\6\0\o\5\k\k\3\j\x\1\t\t\l\6\v\s\i\b\c\x\9\g\a\d\t\8\f\d\c\9\6\z\j\8\8\p\h\h\o\m\b\7\7\g\t\5\v\c\g\9\d\b\l\9\f\t\4\w\f\8\d\i\s\2\6\k\0\c\k\n\1\9\k\l\9\a\0\t\q\k\m\7\h\t\9\d\9\o\m\0\f\5\3\f\t\3\m\z\z\7\q\p\n\f\f\t\o\5\p\l\n\k\l\p\b\r\q\9\h\w\4\c\3\o\o\s\q\w\g\8\a\x\r\c\b\k\4\z\d\3\z\k\0\i\b\4\d\w\l\2\d\x\6\l\c\4\a\v\c\w\k\l\5\u\g\n\4\j\p\2\f\v\h\0\w\h\4\u\z\n\v\1 ]] 00:09:00.375 00:09:00.375 real 0m4.475s 00:09:00.375 user 0m2.282s 00:09:00.375 sys 0m1.174s 00:09:00.375 11:39:30 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- common/autotest_common.sh@1130 -- # xtrace_disable 00:09:00.375 ************************************ 00:09:00.375 END TEST dd_flags_misc_forced_aio 00:09:00.375 ************************************ 00:09:00.375 11:39:30 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- common/autotest_common.sh@10 -- # set +x 00:09:00.375 11:39:30 spdk_dd.spdk_dd_posix -- dd/posix.sh@1 -- # cleanup 00:09:00.375 11:39:30 spdk_dd.spdk_dd_posix -- dd/posix.sh@11 -- # rm -f /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0.link 00:09:00.375 11:39:30 spdk_dd.spdk_dd_posix -- dd/posix.sh@12 -- # rm -f /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1.link 00:09:00.375 ************************************ 00:09:00.375 END TEST spdk_dd_posix 00:09:00.375 ************************************ 00:09:00.375 00:09:00.375 real 0m20.319s 00:09:00.375 user 0m9.373s 00:09:00.375 sys 0m6.890s 00:09:00.375 11:39:30 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1130 -- # xtrace_disable 00:09:00.375 11:39:30 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@10 -- # set +x 00:09:00.375 11:39:30 spdk_dd -- dd/dd.sh@22 -- # run_test spdk_dd_malloc /home/vagrant/spdk_repo/spdk/test/dd/malloc.sh 00:09:00.375 11:39:30 spdk_dd -- 
common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:09:00.375 11:39:30 spdk_dd -- common/autotest_common.sh@1111 -- # xtrace_disable 00:09:00.375 11:39:30 spdk_dd -- common/autotest_common.sh@10 -- # set +x 00:09:00.375 ************************************ 00:09:00.375 START TEST spdk_dd_malloc 00:09:00.375 ************************************ 00:09:00.375 11:39:30 spdk_dd.spdk_dd_malloc -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/dd/malloc.sh 00:09:00.634 * Looking for test storage... 00:09:00.634 * Found test storage at /home/vagrant/spdk_repo/spdk/test/dd 00:09:00.634 11:39:30 spdk_dd.spdk_dd_malloc -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:09:00.634 11:39:30 spdk_dd.spdk_dd_malloc -- common/autotest_common.sh@1693 -- # lcov --version 00:09:00.634 11:39:30 spdk_dd.spdk_dd_malloc -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:09:00.634 11:39:30 spdk_dd.spdk_dd_malloc -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:09:00.634 11:39:30 spdk_dd.spdk_dd_malloc -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:09:00.634 11:39:30 spdk_dd.spdk_dd_malloc -- scripts/common.sh@333 -- # local ver1 ver1_l 00:09:00.634 11:39:30 spdk_dd.spdk_dd_malloc -- scripts/common.sh@334 -- # local ver2 ver2_l 00:09:00.634 11:39:30 spdk_dd.spdk_dd_malloc -- scripts/common.sh@336 -- # IFS=.-: 00:09:00.634 11:39:30 spdk_dd.spdk_dd_malloc -- scripts/common.sh@336 -- # read -ra ver1 00:09:00.634 11:39:30 spdk_dd.spdk_dd_malloc -- scripts/common.sh@337 -- # IFS=.-: 00:09:00.634 11:39:30 spdk_dd.spdk_dd_malloc -- scripts/common.sh@337 -- # read -ra ver2 00:09:00.634 11:39:30 spdk_dd.spdk_dd_malloc -- scripts/common.sh@338 -- # local 'op=<' 00:09:00.634 11:39:30 spdk_dd.spdk_dd_malloc -- scripts/common.sh@340 -- # ver1_l=2 00:09:00.634 11:39:30 spdk_dd.spdk_dd_malloc -- scripts/common.sh@341 -- # ver2_l=1 00:09:00.634 11:39:30 spdk_dd.spdk_dd_malloc -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:09:00.634 11:39:30 spdk_dd.spdk_dd_malloc -- scripts/common.sh@344 -- # case "$op" in 00:09:00.634 11:39:30 spdk_dd.spdk_dd_malloc -- scripts/common.sh@345 -- # : 1 00:09:00.634 11:39:30 spdk_dd.spdk_dd_malloc -- scripts/common.sh@364 -- # (( v = 0 )) 00:09:00.634 11:39:30 spdk_dd.spdk_dd_malloc -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:09:00.634 11:39:30 spdk_dd.spdk_dd_malloc -- scripts/common.sh@365 -- # decimal 1 00:09:00.634 11:39:30 spdk_dd.spdk_dd_malloc -- scripts/common.sh@353 -- # local d=1 00:09:00.634 11:39:30 spdk_dd.spdk_dd_malloc -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:09:00.634 11:39:30 spdk_dd.spdk_dd_malloc -- scripts/common.sh@355 -- # echo 1 00:09:00.634 11:39:30 spdk_dd.spdk_dd_malloc -- scripts/common.sh@365 -- # ver1[v]=1 00:09:00.635 11:39:30 spdk_dd.spdk_dd_malloc -- scripts/common.sh@366 -- # decimal 2 00:09:00.635 11:39:30 spdk_dd.spdk_dd_malloc -- scripts/common.sh@353 -- # local d=2 00:09:00.635 11:39:30 spdk_dd.spdk_dd_malloc -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:09:00.635 11:39:30 spdk_dd.spdk_dd_malloc -- scripts/common.sh@355 -- # echo 2 00:09:00.635 11:39:30 spdk_dd.spdk_dd_malloc -- scripts/common.sh@366 -- # ver2[v]=2 00:09:00.635 11:39:30 spdk_dd.spdk_dd_malloc -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:09:00.635 11:39:30 spdk_dd.spdk_dd_malloc -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:09:00.635 11:39:30 spdk_dd.spdk_dd_malloc -- scripts/common.sh@368 -- # return 0 00:09:00.635 11:39:30 spdk_dd.spdk_dd_malloc -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:09:00.635 11:39:30 spdk_dd.spdk_dd_malloc -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:09:00.635 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:00.635 --rc genhtml_branch_coverage=1 00:09:00.635 --rc genhtml_function_coverage=1 00:09:00.635 --rc genhtml_legend=1 00:09:00.635 --rc geninfo_all_blocks=1 00:09:00.635 --rc geninfo_unexecuted_blocks=1 00:09:00.635 00:09:00.635 ' 00:09:00.635 11:39:30 spdk_dd.spdk_dd_malloc -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:09:00.635 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:00.635 --rc genhtml_branch_coverage=1 00:09:00.635 --rc genhtml_function_coverage=1 00:09:00.635 --rc genhtml_legend=1 00:09:00.635 --rc geninfo_all_blocks=1 00:09:00.635 --rc geninfo_unexecuted_blocks=1 00:09:00.635 00:09:00.635 ' 00:09:00.635 11:39:30 spdk_dd.spdk_dd_malloc -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:09:00.635 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:00.635 --rc genhtml_branch_coverage=1 00:09:00.635 --rc genhtml_function_coverage=1 00:09:00.635 --rc genhtml_legend=1 00:09:00.635 --rc geninfo_all_blocks=1 00:09:00.635 --rc geninfo_unexecuted_blocks=1 00:09:00.635 00:09:00.635 ' 00:09:00.635 11:39:30 spdk_dd.spdk_dd_malloc -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:09:00.635 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:00.635 --rc genhtml_branch_coverage=1 00:09:00.635 --rc genhtml_function_coverage=1 00:09:00.635 --rc genhtml_legend=1 00:09:00.635 --rc geninfo_all_blocks=1 00:09:00.635 --rc geninfo_unexecuted_blocks=1 00:09:00.635 00:09:00.635 ' 00:09:00.635 11:39:30 spdk_dd.spdk_dd_malloc -- dd/common.sh@7 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:09:00.635 11:39:30 spdk_dd.spdk_dd_malloc -- scripts/common.sh@15 -- # shopt -s extglob 00:09:00.635 11:39:30 spdk_dd.spdk_dd_malloc -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:09:00.635 11:39:30 spdk_dd.spdk_dd_malloc -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:09:00.635 11:39:30 spdk_dd.spdk_dd_malloc -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:09:00.635 11:39:30 
spdk_dd.spdk_dd_malloc -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:00.635 11:39:30 spdk_dd.spdk_dd_malloc -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:00.635 11:39:30 spdk_dd.spdk_dd_malloc -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:00.635 11:39:30 spdk_dd.spdk_dd_malloc -- paths/export.sh@5 -- # export PATH 00:09:00.635 11:39:30 spdk_dd.spdk_dd_malloc -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:00.635 11:39:30 spdk_dd.spdk_dd_malloc -- dd/malloc.sh@38 -- # run_test dd_malloc_copy malloc_copy 00:09:00.635 11:39:30 spdk_dd.spdk_dd_malloc -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:09:00.635 11:39:30 spdk_dd.spdk_dd_malloc -- common/autotest_common.sh@1111 -- # xtrace_disable 00:09:00.635 11:39:30 spdk_dd.spdk_dd_malloc -- common/autotest_common.sh@10 -- # set +x 00:09:00.635 ************************************ 00:09:00.635 START TEST dd_malloc_copy 00:09:00.635 ************************************ 00:09:00.635 11:39:30 spdk_dd.spdk_dd_malloc.dd_malloc_copy -- common/autotest_common.sh@1129 -- # malloc_copy 00:09:00.635 11:39:30 spdk_dd.spdk_dd_malloc.dd_malloc_copy -- dd/malloc.sh@12 -- # local mbdev0=malloc0 mbdev0_b=1048576 mbdev0_bs=512 00:09:00.635 11:39:30 spdk_dd.spdk_dd_malloc.dd_malloc_copy -- dd/malloc.sh@13 -- # local mbdev1=malloc1 mbdev1_b=1048576 mbdev1_bs=512 00:09:00.635 11:39:30 spdk_dd.spdk_dd_malloc.dd_malloc_copy -- dd/malloc.sh@15 -- # method_bdev_malloc_create_0=(['name']='malloc0' ['num_blocks']='1048576' ['block_size']='512') 
00:09:00.635 11:39:30 spdk_dd.spdk_dd_malloc.dd_malloc_copy -- dd/malloc.sh@15 -- # local -A method_bdev_malloc_create_0 00:09:00.635 11:39:30 spdk_dd.spdk_dd_malloc.dd_malloc_copy -- dd/malloc.sh@21 -- # method_bdev_malloc_create_1=(['name']='malloc1' ['num_blocks']='1048576' ['block_size']='512') 00:09:00.635 11:39:30 spdk_dd.spdk_dd_malloc.dd_malloc_copy -- dd/malloc.sh@21 -- # local -A method_bdev_malloc_create_1 00:09:00.635 11:39:30 spdk_dd.spdk_dd_malloc.dd_malloc_copy -- dd/malloc.sh@28 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=malloc0 --ob=malloc1 --json /dev/fd/62 00:09:00.635 11:39:30 spdk_dd.spdk_dd_malloc.dd_malloc_copy -- dd/malloc.sh@28 -- # gen_conf 00:09:00.635 11:39:30 spdk_dd.spdk_dd_malloc.dd_malloc_copy -- dd/common.sh@31 -- # xtrace_disable 00:09:00.635 11:39:30 spdk_dd.spdk_dd_malloc.dd_malloc_copy -- common/autotest_common.sh@10 -- # set +x 00:09:00.895 { 00:09:00.895 "subsystems": [ 00:09:00.895 { 00:09:00.895 "subsystem": "bdev", 00:09:00.895 "config": [ 00:09:00.895 { 00:09:00.895 "params": { 00:09:00.895 "block_size": 512, 00:09:00.895 "num_blocks": 1048576, 00:09:00.895 "name": "malloc0" 00:09:00.895 }, 00:09:00.895 "method": "bdev_malloc_create" 00:09:00.895 }, 00:09:00.895 { 00:09:00.895 "params": { 00:09:00.895 "block_size": 512, 00:09:00.895 "num_blocks": 1048576, 00:09:00.895 "name": "malloc1" 00:09:00.895 }, 00:09:00.895 "method": "bdev_malloc_create" 00:09:00.895 }, 00:09:00.895 { 00:09:00.895 "method": "bdev_wait_for_examine" 00:09:00.895 } 00:09:00.895 ] 00:09:00.895 } 00:09:00.895 ] 00:09:00.895 } 00:09:00.895 [2024-11-28 11:39:30.790812] Starting SPDK v25.01-pre git sha1 35cd3e84d / DPDK 24.11.0-rc4 initialization... 00:09:00.895 [2024-11-28 11:39:30.790910] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid74788 ] 00:09:00.895 [2024-11-28 11:39:30.916254] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.11.0-rc4 is used. There is no support for it in SPDK. Enabled only for validation. 
00:09:00.895 [2024-11-28 11:39:30.941735] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:00.895 [2024-11-28 11:39:30.988491] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:09:01.154 [2024-11-28 11:39:31.048447] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:09:02.531  [2024-11-28T11:39:33.595Z] Copying: 195/512 [MB] (195 MBps) [2024-11-28T11:39:34.163Z] Copying: 399/512 [MB] (203 MBps) [2024-11-28T11:39:34.731Z] Copying: 512/512 [MB] (average 201 MBps) 00:09:04.605 00:09:04.605 11:39:34 spdk_dd.spdk_dd_malloc.dd_malloc_copy -- dd/malloc.sh@33 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=malloc1 --ob=malloc0 --json /dev/fd/62 00:09:04.605 11:39:34 spdk_dd.spdk_dd_malloc.dd_malloc_copy -- dd/malloc.sh@33 -- # gen_conf 00:09:04.605 11:39:34 spdk_dd.spdk_dd_malloc.dd_malloc_copy -- dd/common.sh@31 -- # xtrace_disable 00:09:04.605 11:39:34 spdk_dd.spdk_dd_malloc.dd_malloc_copy -- common/autotest_common.sh@10 -- # set +x 00:09:04.605 { 00:09:04.605 "subsystems": [ 00:09:04.605 { 00:09:04.605 "subsystem": "bdev", 00:09:04.605 "config": [ 00:09:04.605 { 00:09:04.605 "params": { 00:09:04.605 "block_size": 512, 00:09:04.605 "num_blocks": 1048576, 00:09:04.605 "name": "malloc0" 00:09:04.605 }, 00:09:04.605 "method": "bdev_malloc_create" 00:09:04.605 }, 00:09:04.605 { 00:09:04.605 "params": { 00:09:04.605 "block_size": 512, 00:09:04.605 "num_blocks": 1048576, 00:09:04.605 "name": "malloc1" 00:09:04.605 }, 00:09:04.605 "method": "bdev_malloc_create" 00:09:04.605 }, 00:09:04.605 { 00:09:04.605 "method": "bdev_wait_for_examine" 00:09:04.605 } 00:09:04.605 ] 00:09:04.605 } 00:09:04.605 ] 00:09:04.605 } 00:09:04.605 [2024-11-28 11:39:34.627583] Starting SPDK v25.01-pre git sha1 35cd3e84d / DPDK 24.11.0-rc4 initialization... 00:09:04.605 [2024-11-28 11:39:34.627677] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid74841 ] 00:09:04.865 [2024-11-28 11:39:34.752829] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.11.0-rc4 is used. There is no support for it in SPDK. Enabled only for validation. 
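The dd_malloc_copy test above drives spdk_dd between two RAM-backed bdevs rather than files. To reproduce it outside the harness, save the gen_conf JSON printed above (two bdev_malloc_create entries with block_size 512 and num_blocks 1048576, i.e. a pair of 512 MiB malloc bdevs, plus bdev_wait_for_examine) to a file and run both directions; dd_malloc.json is an invented name for that file, everything else is taken from the log:

DD=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd
# malloc.sh@28: forward copy, timed as the 'Copying: 512/512 [MB]' lines above
"$DD" --ib=malloc0 --ob=malloc1 --json dd_malloc.json
# malloc.sh@33: reverse copy back into malloc0
"$DD" --ib=malloc1 --ob=malloc0 --json dd_malloc.json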
00:09:04.865 [2024-11-28 11:39:34.783435] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:04.865 [2024-11-28 11:39:34.828652] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:09:04.865 [2024-11-28 11:39:34.888262] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:09:06.240  [2024-11-28T11:39:37.302Z] Copying: 200/512 [MB] (200 MBps) [2024-11-28T11:39:37.868Z] Copying: 407/512 [MB] (206 MBps) [2024-11-28T11:39:38.436Z] Copying: 512/512 [MB] (average 202 MBps) 00:09:08.310 00:09:08.310 00:09:08.310 real 0m7.639s 00:09:08.310 user 0m6.556s 00:09:08.310 sys 0m0.893s 00:09:08.310 ************************************ 00:09:08.310 END TEST dd_malloc_copy 00:09:08.310 ************************************ 00:09:08.310 11:39:38 spdk_dd.spdk_dd_malloc.dd_malloc_copy -- common/autotest_common.sh@1130 -- # xtrace_disable 00:09:08.310 11:39:38 spdk_dd.spdk_dd_malloc.dd_malloc_copy -- common/autotest_common.sh@10 -- # set +x 00:09:08.310 ************************************ 00:09:08.310 END TEST spdk_dd_malloc 00:09:08.310 ************************************ 00:09:08.311 00:09:08.311 real 0m7.911s 00:09:08.311 user 0m6.716s 00:09:08.311 sys 0m1.007s 00:09:08.311 11:39:38 spdk_dd.spdk_dd_malloc -- common/autotest_common.sh@1130 -- # xtrace_disable 00:09:08.311 11:39:38 spdk_dd.spdk_dd_malloc -- common/autotest_common.sh@10 -- # set +x 00:09:08.569 11:39:38 spdk_dd -- dd/dd.sh@23 -- # run_test spdk_dd_bdev_to_bdev /home/vagrant/spdk_repo/spdk/test/dd/bdev_to_bdev.sh 0000:00:10.0 0000:00:11.0 00:09:08.569 11:39:38 spdk_dd -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:09:08.569 11:39:38 spdk_dd -- common/autotest_common.sh@1111 -- # xtrace_disable 00:09:08.569 11:39:38 spdk_dd -- common/autotest_common.sh@10 -- # set +x 00:09:08.569 ************************************ 00:09:08.569 START TEST spdk_dd_bdev_to_bdev 00:09:08.569 ************************************ 00:09:08.569 11:39:38 spdk_dd.spdk_dd_bdev_to_bdev -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/dd/bdev_to_bdev.sh 0000:00:10.0 0000:00:11.0 00:09:08.569 * Looking for test storage... 
00:09:08.569 * Found test storage at /home/vagrant/spdk_repo/spdk/test/dd 00:09:08.569 11:39:38 spdk_dd.spdk_dd_bdev_to_bdev -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:09:08.569 11:39:38 spdk_dd.spdk_dd_bdev_to_bdev -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:09:08.569 11:39:38 spdk_dd.spdk_dd_bdev_to_bdev -- common/autotest_common.sh@1693 -- # lcov --version 00:09:08.569 11:39:38 spdk_dd.spdk_dd_bdev_to_bdev -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:09:08.569 11:39:38 spdk_dd.spdk_dd_bdev_to_bdev -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:09:08.569 11:39:38 spdk_dd.spdk_dd_bdev_to_bdev -- scripts/common.sh@333 -- # local ver1 ver1_l 00:09:08.569 11:39:38 spdk_dd.spdk_dd_bdev_to_bdev -- scripts/common.sh@334 -- # local ver2 ver2_l 00:09:08.569 11:39:38 spdk_dd.spdk_dd_bdev_to_bdev -- scripts/common.sh@336 -- # IFS=.-: 00:09:08.569 11:39:38 spdk_dd.spdk_dd_bdev_to_bdev -- scripts/common.sh@336 -- # read -ra ver1 00:09:08.569 11:39:38 spdk_dd.spdk_dd_bdev_to_bdev -- scripts/common.sh@337 -- # IFS=.-: 00:09:08.569 11:39:38 spdk_dd.spdk_dd_bdev_to_bdev -- scripts/common.sh@337 -- # read -ra ver2 00:09:08.569 11:39:38 spdk_dd.spdk_dd_bdev_to_bdev -- scripts/common.sh@338 -- # local 'op=<' 00:09:08.569 11:39:38 spdk_dd.spdk_dd_bdev_to_bdev -- scripts/common.sh@340 -- # ver1_l=2 00:09:08.569 11:39:38 spdk_dd.spdk_dd_bdev_to_bdev -- scripts/common.sh@341 -- # ver2_l=1 00:09:08.569 11:39:38 spdk_dd.spdk_dd_bdev_to_bdev -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:09:08.569 11:39:38 spdk_dd.spdk_dd_bdev_to_bdev -- scripts/common.sh@344 -- # case "$op" in 00:09:08.569 11:39:38 spdk_dd.spdk_dd_bdev_to_bdev -- scripts/common.sh@345 -- # : 1 00:09:08.569 11:39:38 spdk_dd.spdk_dd_bdev_to_bdev -- scripts/common.sh@364 -- # (( v = 0 )) 00:09:08.569 11:39:38 spdk_dd.spdk_dd_bdev_to_bdev -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:09:08.569 11:39:38 spdk_dd.spdk_dd_bdev_to_bdev -- scripts/common.sh@365 -- # decimal 1 00:09:08.569 11:39:38 spdk_dd.spdk_dd_bdev_to_bdev -- scripts/common.sh@353 -- # local d=1 00:09:08.569 11:39:38 spdk_dd.spdk_dd_bdev_to_bdev -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:09:08.569 11:39:38 spdk_dd.spdk_dd_bdev_to_bdev -- scripts/common.sh@355 -- # echo 1 00:09:08.569 11:39:38 spdk_dd.spdk_dd_bdev_to_bdev -- scripts/common.sh@365 -- # ver1[v]=1 00:09:08.569 11:39:38 spdk_dd.spdk_dd_bdev_to_bdev -- scripts/common.sh@366 -- # decimal 2 00:09:08.569 11:39:38 spdk_dd.spdk_dd_bdev_to_bdev -- scripts/common.sh@353 -- # local d=2 00:09:08.569 11:39:38 spdk_dd.spdk_dd_bdev_to_bdev -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:09:08.569 11:39:38 spdk_dd.spdk_dd_bdev_to_bdev -- scripts/common.sh@355 -- # echo 2 00:09:08.569 11:39:38 spdk_dd.spdk_dd_bdev_to_bdev -- scripts/common.sh@366 -- # ver2[v]=2 00:09:08.569 11:39:38 spdk_dd.spdk_dd_bdev_to_bdev -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:09:08.569 11:39:38 spdk_dd.spdk_dd_bdev_to_bdev -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:09:08.569 11:39:38 spdk_dd.spdk_dd_bdev_to_bdev -- scripts/common.sh@368 -- # return 0 00:09:08.569 11:39:38 spdk_dd.spdk_dd_bdev_to_bdev -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:09:08.569 11:39:38 spdk_dd.spdk_dd_bdev_to_bdev -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:09:08.569 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:08.569 --rc genhtml_branch_coverage=1 00:09:08.569 --rc genhtml_function_coverage=1 00:09:08.569 --rc genhtml_legend=1 00:09:08.569 --rc geninfo_all_blocks=1 00:09:08.569 --rc geninfo_unexecuted_blocks=1 00:09:08.569 00:09:08.569 ' 00:09:08.569 11:39:38 spdk_dd.spdk_dd_bdev_to_bdev -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:09:08.569 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:08.569 --rc genhtml_branch_coverage=1 00:09:08.569 --rc genhtml_function_coverage=1 00:09:08.569 --rc genhtml_legend=1 00:09:08.569 --rc geninfo_all_blocks=1 00:09:08.569 --rc geninfo_unexecuted_blocks=1 00:09:08.569 00:09:08.569 ' 00:09:08.569 11:39:38 spdk_dd.spdk_dd_bdev_to_bdev -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:09:08.569 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:08.569 --rc genhtml_branch_coverage=1 00:09:08.569 --rc genhtml_function_coverage=1 00:09:08.569 --rc genhtml_legend=1 00:09:08.569 --rc geninfo_all_blocks=1 00:09:08.569 --rc geninfo_unexecuted_blocks=1 00:09:08.569 00:09:08.569 ' 00:09:08.569 11:39:38 spdk_dd.spdk_dd_bdev_to_bdev -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:09:08.569 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:08.569 --rc genhtml_branch_coverage=1 00:09:08.569 --rc genhtml_function_coverage=1 00:09:08.569 --rc genhtml_legend=1 00:09:08.569 --rc geninfo_all_blocks=1 00:09:08.569 --rc geninfo_unexecuted_blocks=1 00:09:08.569 00:09:08.569 ' 00:09:08.569 11:39:38 spdk_dd.spdk_dd_bdev_to_bdev -- dd/common.sh@7 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:09:08.569 11:39:38 spdk_dd.spdk_dd_bdev_to_bdev -- scripts/common.sh@15 -- # shopt -s extglob 00:09:08.569 11:39:38 spdk_dd.spdk_dd_bdev_to_bdev -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:09:08.569 11:39:38 spdk_dd.spdk_dd_bdev_to_bdev -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:09:08.569 11:39:38 
spdk_dd.spdk_dd_bdev_to_bdev -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:09:08.569 11:39:38 spdk_dd.spdk_dd_bdev_to_bdev -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:08.569 11:39:38 spdk_dd.spdk_dd_bdev_to_bdev -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:08.570 11:39:38 spdk_dd.spdk_dd_bdev_to_bdev -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:08.570 11:39:38 spdk_dd.spdk_dd_bdev_to_bdev -- paths/export.sh@5 -- # export PATH 00:09:08.570 11:39:38 spdk_dd.spdk_dd_bdev_to_bdev -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:08.570 11:39:38 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@10 -- # nvmes=("$@") 00:09:08.570 11:39:38 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@47 -- # trap cleanup EXIT 00:09:08.570 11:39:38 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@49 -- # bs=1048576 00:09:08.570 11:39:38 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@51 -- # (( 2 > 1 )) 00:09:08.570 11:39:38 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@52 -- # nvme0=Nvme0 00:09:08.570 11:39:38 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@52 -- # bdev0=Nvme0n1 00:09:08.570 11:39:38 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@52 -- # nvme0_pci=0000:00:10.0 00:09:08.570 11:39:38 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@53 -- # nvme1=Nvme1 00:09:08.570 11:39:38 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@53 -- # bdev1=Nvme1n1 00:09:08.570 11:39:38 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@53 -- # 
nvme1_pci=0000:00:11.0 00:09:08.570 11:39:38 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@55 -- # method_bdev_nvme_attach_controller_0=(['name']='Nvme0' ['traddr']='0000:00:10.0' ['trtype']='pcie') 00:09:08.570 11:39:38 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@55 -- # declare -A method_bdev_nvme_attach_controller_0 00:09:08.570 11:39:38 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@60 -- # method_bdev_nvme_attach_controller_1=(['name']='Nvme1' ['traddr']='0000:00:11.0' ['trtype']='pcie') 00:09:08.570 11:39:38 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@60 -- # declare -A method_bdev_nvme_attach_controller_1 00:09:08.570 11:39:38 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@89 -- # test_file0=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 00:09:08.570 11:39:38 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@90 -- # test_file1=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:09:08.570 11:39:38 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@92 -- # magic='This Is Our Magic, find it' 00:09:08.570 11:39:38 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@93 -- # echo 'This Is Our Magic, find it' 00:09:08.570 11:39:38 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@96 -- # run_test dd_inflate_file /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/dev/zero --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --oflag=append --bs=1048576 --count=64 00:09:08.570 11:39:38 spdk_dd.spdk_dd_bdev_to_bdev -- common/autotest_common.sh@1105 -- # '[' 7 -le 1 ']' 00:09:08.570 11:39:38 spdk_dd.spdk_dd_bdev_to_bdev -- common/autotest_common.sh@1111 -- # xtrace_disable 00:09:08.570 11:39:38 spdk_dd.spdk_dd_bdev_to_bdev -- common/autotest_common.sh@10 -- # set +x 00:09:08.570 ************************************ 00:09:08.570 START TEST dd_inflate_file 00:09:08.570 ************************************ 00:09:08.570 11:39:38 spdk_dd.spdk_dd_bdev_to_bdev.dd_inflate_file -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/dev/zero --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --oflag=append --bs=1048576 --count=64 00:09:08.829 [2024-11-28 11:39:38.697568] Starting SPDK v25.01-pre git sha1 35cd3e84d / DPDK 24.11.0-rc4 initialization... 00:09:08.829 [2024-11-28 11:39:38.697795] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid74959 ] 00:09:08.829 [2024-11-28 11:39:38.818763] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.11.0-rc4 is used. There is no support for it in SPDK. Enabled only for validation. 
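The dd_inflate_file step starting here only pads the dump file: it appends 64 blocks of 1 MiB of zeroes to dd.dump0, so the 27-byte magic line written above grows to the 67108891 bytes that the wc -c check just below expects, presumably to give the following 64 MB bdev copy a full-sized source. A standalone sketch of the same command:

DD=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd
DUMP0=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0
# append 64 x 1 MiB of zeroes after the magic string
"$DD" --if=/dev/zero --of="$DUMP0" --oflag=append --bs=1048576 --count=64
wc -c < "$DUMP0"   # 27 + 64*1048576 = 67108891 bytes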
00:09:08.829 [2024-11-28 11:39:38.845479] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:08.829 [2024-11-28 11:39:38.885645] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:09:08.829 [2024-11-28 11:39:38.943165] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:09:09.236  [2024-11-28T11:39:39.362Z] Copying: 64/64 [MB] (average 1454 MBps) 00:09:09.236 00:09:09.236 00:09:09.236 real 0m0.556s 00:09:09.236 user 0m0.299s 00:09:09.236 sys 0m0.320s 00:09:09.236 11:39:39 spdk_dd.spdk_dd_bdev_to_bdev.dd_inflate_file -- common/autotest_common.sh@1130 -- # xtrace_disable 00:09:09.236 11:39:39 spdk_dd.spdk_dd_bdev_to_bdev.dd_inflate_file -- common/autotest_common.sh@10 -- # set +x 00:09:09.236 ************************************ 00:09:09.236 END TEST dd_inflate_file 00:09:09.236 ************************************ 00:09:09.236 11:39:39 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@104 -- # wc -c 00:09:09.236 11:39:39 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@104 -- # test_file0_size=67108891 00:09:09.236 11:39:39 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@107 -- # gen_conf 00:09:09.236 11:39:39 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@107 -- # run_test dd_copy_to_out_bdev /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --ob=Nvme0n1 --json /dev/fd/62 00:09:09.236 11:39:39 spdk_dd.spdk_dd_bdev_to_bdev -- dd/common.sh@31 -- # xtrace_disable 00:09:09.236 11:39:39 spdk_dd.spdk_dd_bdev_to_bdev -- common/autotest_common.sh@10 -- # set +x 00:09:09.236 11:39:39 spdk_dd.spdk_dd_bdev_to_bdev -- common/autotest_common.sh@1105 -- # '[' 6 -le 1 ']' 00:09:09.236 11:39:39 spdk_dd.spdk_dd_bdev_to_bdev -- common/autotest_common.sh@1111 -- # xtrace_disable 00:09:09.236 11:39:39 spdk_dd.spdk_dd_bdev_to_bdev -- common/autotest_common.sh@10 -- # set +x 00:09:09.236 ************************************ 00:09:09.236 START TEST dd_copy_to_out_bdev 00:09:09.236 ************************************ 00:09:09.236 11:39:39 spdk_dd.spdk_dd_bdev_to_bdev.dd_copy_to_out_bdev -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --ob=Nvme0n1 --json /dev/fd/62 00:09:09.496 { 00:09:09.496 "subsystems": [ 00:09:09.496 { 00:09:09.496 "subsystem": "bdev", 00:09:09.496 "config": [ 00:09:09.496 { 00:09:09.496 "params": { 00:09:09.496 "trtype": "pcie", 00:09:09.496 "traddr": "0000:00:10.0", 00:09:09.496 "name": "Nvme0" 00:09:09.496 }, 00:09:09.496 "method": "bdev_nvme_attach_controller" 00:09:09.496 }, 00:09:09.496 { 00:09:09.496 "params": { 00:09:09.496 "trtype": "pcie", 00:09:09.496 "traddr": "0000:00:11.0", 00:09:09.496 "name": "Nvme1" 00:09:09.496 }, 00:09:09.496 "method": "bdev_nvme_attach_controller" 00:09:09.496 }, 00:09:09.496 { 00:09:09.496 "method": "bdev_wait_for_examine" 00:09:09.496 } 00:09:09.496 ] 00:09:09.496 } 00:09:09.496 ] 00:09:09.496 } 00:09:09.496 [2024-11-28 11:39:39.320233] Starting SPDK v25.01-pre git sha1 35cd3e84d / DPDK 24.11.0-rc4 initialization... 
00:09:09.496 [2024-11-28 11:39:39.320490] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid74994 ] 00:09:09.496 [2024-11-28 11:39:39.446125] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.11.0-rc4 is used. There is no support for it in SPDK. Enabled only for validation. 00:09:09.496 [2024-11-28 11:39:39.473929] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:09.496 [2024-11-28 11:39:39.518830] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:09:09.496 [2024-11-28 11:39:39.577943] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:09:10.874  [2024-11-28T11:39:41.000Z] Copying: 53/64 [MB] (53 MBps) [2024-11-28T11:39:41.258Z] Copying: 64/64 [MB] (average 54 MBps) 00:09:11.132 00:09:11.132 00:09:11.132 real 0m1.929s 00:09:11.132 user 0m1.673s 00:09:11.132 sys 0m1.563s 00:09:11.132 11:39:41 spdk_dd.spdk_dd_bdev_to_bdev.dd_copy_to_out_bdev -- common/autotest_common.sh@1130 -- # xtrace_disable 00:09:11.132 11:39:41 spdk_dd.spdk_dd_bdev_to_bdev.dd_copy_to_out_bdev -- common/autotest_common.sh@10 -- # set +x 00:09:11.132 ************************************ 00:09:11.132 END TEST dd_copy_to_out_bdev 00:09:11.132 ************************************ 00:09:11.132 11:39:41 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@113 -- # count=65 00:09:11.132 11:39:41 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@115 -- # run_test dd_offset_magic offset_magic 00:09:11.132 11:39:41 spdk_dd.spdk_dd_bdev_to_bdev -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:09:11.132 11:39:41 spdk_dd.spdk_dd_bdev_to_bdev -- common/autotest_common.sh@1111 -- # xtrace_disable 00:09:11.133 11:39:41 spdk_dd.spdk_dd_bdev_to_bdev -- common/autotest_common.sh@10 -- # set +x 00:09:11.133 ************************************ 00:09:11.133 START TEST dd_offset_magic 00:09:11.133 ************************************ 00:09:11.133 11:39:41 spdk_dd.spdk_dd_bdev_to_bdev.dd_offset_magic -- common/autotest_common.sh@1129 -- # offset_magic 00:09:11.133 11:39:41 spdk_dd.spdk_dd_bdev_to_bdev.dd_offset_magic -- dd/bdev_to_bdev.sh@13 -- # local magic_check 00:09:11.133 11:39:41 spdk_dd.spdk_dd_bdev_to_bdev.dd_offset_magic -- dd/bdev_to_bdev.sh@14 -- # local offsets offset 00:09:11.133 11:39:41 spdk_dd.spdk_dd_bdev_to_bdev.dd_offset_magic -- dd/bdev_to_bdev.sh@16 -- # offsets=(16 64) 00:09:11.133 11:39:41 spdk_dd.spdk_dd_bdev_to_bdev.dd_offset_magic -- dd/bdev_to_bdev.sh@18 -- # for offset in "${offsets[@]}" 00:09:11.133 11:39:41 spdk_dd.spdk_dd_bdev_to_bdev.dd_offset_magic -- dd/bdev_to_bdev.sh@20 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=Nvme0n1 --ob=Nvme1n1 --count=65 --seek=16 --bs=1048576 --json /dev/fd/62 00:09:11.133 11:39:41 spdk_dd.spdk_dd_bdev_to_bdev.dd_offset_magic -- dd/bdev_to_bdev.sh@20 -- # gen_conf 00:09:11.133 11:39:41 spdk_dd.spdk_dd_bdev_to_bdev.dd_offset_magic -- dd/common.sh@31 -- # xtrace_disable 00:09:11.133 11:39:41 spdk_dd.spdk_dd_bdev_to_bdev.dd_offset_magic -- common/autotest_common.sh@10 -- # set +x 00:09:11.392 [2024-11-28 11:39:41.302633] Starting SPDK v25.01-pre git sha1 35cd3e84d / DPDK 24.11.0-rc4 initialization... 
00:09:11.392 [2024-11-28 11:39:41.302718] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid75032 ] 00:09:11.392 { 00:09:11.392 "subsystems": [ 00:09:11.392 { 00:09:11.392 "subsystem": "bdev", 00:09:11.392 "config": [ 00:09:11.392 { 00:09:11.392 "params": { 00:09:11.392 "trtype": "pcie", 00:09:11.392 "traddr": "0000:00:10.0", 00:09:11.392 "name": "Nvme0" 00:09:11.392 }, 00:09:11.392 "method": "bdev_nvme_attach_controller" 00:09:11.392 }, 00:09:11.392 { 00:09:11.392 "params": { 00:09:11.392 "trtype": "pcie", 00:09:11.392 "traddr": "0000:00:11.0", 00:09:11.392 "name": "Nvme1" 00:09:11.392 }, 00:09:11.392 "method": "bdev_nvme_attach_controller" 00:09:11.392 }, 00:09:11.392 { 00:09:11.392 "method": "bdev_wait_for_examine" 00:09:11.392 } 00:09:11.392 ] 00:09:11.392 } 00:09:11.392 ] 00:09:11.392 } 00:09:11.392 [2024-11-28 11:39:41.424205] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.11.0-rc4 is used. There is no support for it in SPDK. Enabled only for validation. 00:09:11.392 [2024-11-28 11:39:41.453608] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:11.392 [2024-11-28 11:39:41.512717] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:09:11.651 [2024-11-28 11:39:41.575341] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:09:11.910  [2024-11-28T11:39:42.295Z] Copying: 65/65 [MB] (average 833 MBps) 00:09:12.169 00:09:12.169 11:39:42 spdk_dd.spdk_dd_bdev_to_bdev.dd_offset_magic -- dd/bdev_to_bdev.sh@28 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=Nvme1n1 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --count=1 --skip=16 --bs=1048576 --json /dev/fd/62 00:09:12.169 11:39:42 spdk_dd.spdk_dd_bdev_to_bdev.dd_offset_magic -- dd/bdev_to_bdev.sh@28 -- # gen_conf 00:09:12.169 11:39:42 spdk_dd.spdk_dd_bdev_to_bdev.dd_offset_magic -- dd/common.sh@31 -- # xtrace_disable 00:09:12.169 11:39:42 spdk_dd.spdk_dd_bdev_to_bdev.dd_offset_magic -- common/autotest_common.sh@10 -- # set +x 00:09:12.169 [2024-11-28 11:39:42.142446] Starting SPDK v25.01-pre git sha1 35cd3e84d / DPDK 24.11.0-rc4 initialization... 00:09:12.169 [2024-11-28 11:39:42.142578] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid75052 ] 00:09:12.169 { 00:09:12.169 "subsystems": [ 00:09:12.169 { 00:09:12.169 "subsystem": "bdev", 00:09:12.169 "config": [ 00:09:12.169 { 00:09:12.169 "params": { 00:09:12.169 "trtype": "pcie", 00:09:12.169 "traddr": "0000:00:10.0", 00:09:12.169 "name": "Nvme0" 00:09:12.169 }, 00:09:12.169 "method": "bdev_nvme_attach_controller" 00:09:12.169 }, 00:09:12.169 { 00:09:12.169 "params": { 00:09:12.169 "trtype": "pcie", 00:09:12.169 "traddr": "0000:00:11.0", 00:09:12.169 "name": "Nvme1" 00:09:12.169 }, 00:09:12.169 "method": "bdev_nvme_attach_controller" 00:09:12.169 }, 00:09:12.169 { 00:09:12.169 "method": "bdev_wait_for_examine" 00:09:12.169 } 00:09:12.169 ] 00:09:12.170 } 00:09:12.170 ] 00:09:12.170 } 00:09:12.170 [2024-11-28 11:39:42.268828] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.11.0-rc4 is used. There is no support for it in SPDK. Enabled only for validation. 
00:09:12.428 [2024-11-28 11:39:42.297686] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:12.428 [2024-11-28 11:39:42.344832] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:09:12.428 [2024-11-28 11:39:42.412667] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:09:12.687  [2024-11-28T11:39:42.813Z] Copying: 1024/1024 [kB] (average 500 MBps) 00:09:12.687 00:09:12.687 11:39:42 spdk_dd.spdk_dd_bdev_to_bdev.dd_offset_magic -- dd/bdev_to_bdev.sh@35 -- # read -rn26 magic_check 00:09:12.687 11:39:42 spdk_dd.spdk_dd_bdev_to_bdev.dd_offset_magic -- dd/bdev_to_bdev.sh@36 -- # [[ This Is Our Magic, find it == \T\h\i\s\ \I\s\ \O\u\r\ \M\a\g\i\c\,\ \f\i\n\d\ \i\t ]] 00:09:12.687 11:39:42 spdk_dd.spdk_dd_bdev_to_bdev.dd_offset_magic -- dd/bdev_to_bdev.sh@18 -- # for offset in "${offsets[@]}" 00:09:12.687 11:39:42 spdk_dd.spdk_dd_bdev_to_bdev.dd_offset_magic -- dd/bdev_to_bdev.sh@20 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=Nvme0n1 --ob=Nvme1n1 --count=65 --seek=64 --bs=1048576 --json /dev/fd/62 00:09:12.687 11:39:42 spdk_dd.spdk_dd_bdev_to_bdev.dd_offset_magic -- dd/bdev_to_bdev.sh@20 -- # gen_conf 00:09:12.687 11:39:42 spdk_dd.spdk_dd_bdev_to_bdev.dd_offset_magic -- dd/common.sh@31 -- # xtrace_disable 00:09:12.687 11:39:42 spdk_dd.spdk_dd_bdev_to_bdev.dd_offset_magic -- common/autotest_common.sh@10 -- # set +x 00:09:12.945 { 00:09:12.945 "subsystems": [ 00:09:12.945 { 00:09:12.945 "subsystem": "bdev", 00:09:12.945 "config": [ 00:09:12.945 { 00:09:12.945 "params": { 00:09:12.945 "trtype": "pcie", 00:09:12.945 "traddr": "0000:00:10.0", 00:09:12.945 "name": "Nvme0" 00:09:12.945 }, 00:09:12.945 "method": "bdev_nvme_attach_controller" 00:09:12.945 }, 00:09:12.945 { 00:09:12.945 "params": { 00:09:12.945 "trtype": "pcie", 00:09:12.945 "traddr": "0000:00:11.0", 00:09:12.945 "name": "Nvme1" 00:09:12.945 }, 00:09:12.945 "method": "bdev_nvme_attach_controller" 00:09:12.945 }, 00:09:12.945 { 00:09:12.945 "method": "bdev_wait_for_examine" 00:09:12.945 } 00:09:12.945 ] 00:09:12.945 } 00:09:12.945 ] 00:09:12.945 } 00:09:12.945 [2024-11-28 11:39:42.858733] Starting SPDK v25.01-pre git sha1 35cd3e84d / DPDK 24.11.0-rc4 initialization... 00:09:12.945 [2024-11-28 11:39:42.858872] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid75074 ] 00:09:12.945 [2024-11-28 11:39:42.984205] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.11.0-rc4 is used. There is no support for it in SPDK. Enabled only for validation. 
00:09:12.945 [2024-11-28 11:39:43.008394] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:12.945 [2024-11-28 11:39:43.051706] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:09:13.203 [2024-11-28 11:39:43.114290] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:09:13.461  [2024-11-28T11:39:43.845Z] Copying: 65/65 [MB] (average 915 MBps) 00:09:13.719 00:09:13.719 11:39:43 spdk_dd.spdk_dd_bdev_to_bdev.dd_offset_magic -- dd/bdev_to_bdev.sh@28 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=Nvme1n1 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --count=1 --skip=64 --bs=1048576 --json /dev/fd/62 00:09:13.719 11:39:43 spdk_dd.spdk_dd_bdev_to_bdev.dd_offset_magic -- dd/bdev_to_bdev.sh@28 -- # gen_conf 00:09:13.719 11:39:43 spdk_dd.spdk_dd_bdev_to_bdev.dd_offset_magic -- dd/common.sh@31 -- # xtrace_disable 00:09:13.719 11:39:43 spdk_dd.spdk_dd_bdev_to_bdev.dd_offset_magic -- common/autotest_common.sh@10 -- # set +x 00:09:13.719 [2024-11-28 11:39:43.649201] Starting SPDK v25.01-pre git sha1 35cd3e84d / DPDK 24.11.0-rc4 initialization... 00:09:13.719 [2024-11-28 11:39:43.649295] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid75094 ] 00:09:13.719 { 00:09:13.719 "subsystems": [ 00:09:13.719 { 00:09:13.719 "subsystem": "bdev", 00:09:13.719 "config": [ 00:09:13.719 { 00:09:13.719 "params": { 00:09:13.719 "trtype": "pcie", 00:09:13.719 "traddr": "0000:00:10.0", 00:09:13.719 "name": "Nvme0" 00:09:13.719 }, 00:09:13.719 "method": "bdev_nvme_attach_controller" 00:09:13.719 }, 00:09:13.719 { 00:09:13.719 "params": { 00:09:13.719 "trtype": "pcie", 00:09:13.719 "traddr": "0000:00:11.0", 00:09:13.719 "name": "Nvme1" 00:09:13.719 }, 00:09:13.719 "method": "bdev_nvme_attach_controller" 00:09:13.719 }, 00:09:13.719 { 00:09:13.719 "method": "bdev_wait_for_examine" 00:09:13.719 } 00:09:13.719 ] 00:09:13.719 } 00:09:13.719 ] 00:09:13.719 } 00:09:13.719 [2024-11-28 11:39:43.771431] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.11.0-rc4 is used. There is no support for it in SPDK. Enabled only for validation. 
00:09:13.719 [2024-11-28 11:39:43.797494] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:13.977 [2024-11-28 11:39:43.846745] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:09:13.977 [2024-11-28 11:39:43.912018] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:09:13.977  [2024-11-28T11:39:44.362Z] Copying: 1024/1024 [kB] (average 500 MBps) 00:09:14.236 00:09:14.236 11:39:44 spdk_dd.spdk_dd_bdev_to_bdev.dd_offset_magic -- dd/bdev_to_bdev.sh@35 -- # read -rn26 magic_check 00:09:14.236 11:39:44 spdk_dd.spdk_dd_bdev_to_bdev.dd_offset_magic -- dd/bdev_to_bdev.sh@36 -- # [[ This Is Our Magic, find it == \T\h\i\s\ \I\s\ \O\u\r\ \M\a\g\i\c\,\ \f\i\n\d\ \i\t ]] 00:09:14.236 00:09:14.236 real 0m3.051s 00:09:14.236 user 0m2.135s 00:09:14.236 sys 0m1.009s 00:09:14.236 11:39:44 spdk_dd.spdk_dd_bdev_to_bdev.dd_offset_magic -- common/autotest_common.sh@1130 -- # xtrace_disable 00:09:14.236 11:39:44 spdk_dd.spdk_dd_bdev_to_bdev.dd_offset_magic -- common/autotest_common.sh@10 -- # set +x 00:09:14.236 ************************************ 00:09:14.236 END TEST dd_offset_magic 00:09:14.236 ************************************ 00:09:14.236 11:39:44 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@1 -- # cleanup 00:09:14.236 11:39:44 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@42 -- # clear_nvme Nvme0n1 '' 4194330 00:09:14.236 11:39:44 spdk_dd.spdk_dd_bdev_to_bdev -- dd/common.sh@10 -- # local bdev=Nvme0n1 00:09:14.236 11:39:44 spdk_dd.spdk_dd_bdev_to_bdev -- dd/common.sh@11 -- # local nvme_ref= 00:09:14.236 11:39:44 spdk_dd.spdk_dd_bdev_to_bdev -- dd/common.sh@12 -- # local size=4194330 00:09:14.236 11:39:44 spdk_dd.spdk_dd_bdev_to_bdev -- dd/common.sh@14 -- # local bs=1048576 00:09:14.236 11:39:44 spdk_dd.spdk_dd_bdev_to_bdev -- dd/common.sh@15 -- # local count=5 00:09:14.236 11:39:44 spdk_dd.spdk_dd_bdev_to_bdev -- dd/common.sh@18 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/dev/zero --bs=1048576 --ob=Nvme0n1 --count=5 --json /dev/fd/62 00:09:14.236 11:39:44 spdk_dd.spdk_dd_bdev_to_bdev -- dd/common.sh@18 -- # gen_conf 00:09:14.236 11:39:44 spdk_dd.spdk_dd_bdev_to_bdev -- dd/common.sh@31 -- # xtrace_disable 00:09:14.236 11:39:44 spdk_dd.spdk_dd_bdev_to_bdev -- common/autotest_common.sh@10 -- # set +x 00:09:14.515 [2024-11-28 11:39:44.401371] Starting SPDK v25.01-pre git sha1 35cd3e84d / DPDK 24.11.0-rc4 initialization... 
00:09:14.515 [2024-11-28 11:39:44.401469] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid75120 ] 00:09:14.515 { 00:09:14.515 "subsystems": [ 00:09:14.515 { 00:09:14.515 "subsystem": "bdev", 00:09:14.515 "config": [ 00:09:14.515 { 00:09:14.515 "params": { 00:09:14.515 "trtype": "pcie", 00:09:14.515 "traddr": "0000:00:10.0", 00:09:14.515 "name": "Nvme0" 00:09:14.515 }, 00:09:14.515 "method": "bdev_nvme_attach_controller" 00:09:14.515 }, 00:09:14.515 { 00:09:14.515 "params": { 00:09:14.515 "trtype": "pcie", 00:09:14.515 "traddr": "0000:00:11.0", 00:09:14.515 "name": "Nvme1" 00:09:14.515 }, 00:09:14.515 "method": "bdev_nvme_attach_controller" 00:09:14.515 }, 00:09:14.515 { 00:09:14.515 "method": "bdev_wait_for_examine" 00:09:14.515 } 00:09:14.515 ] 00:09:14.515 } 00:09:14.515 ] 00:09:14.515 } 00:09:14.515 [2024-11-28 11:39:44.527717] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.11.0-rc4 is used. There is no support for it in SPDK. Enabled only for validation. 00:09:14.515 [2024-11-28 11:39:44.555720] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:14.515 [2024-11-28 11:39:44.607594] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:09:14.774 [2024-11-28 11:39:44.666672] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:09:14.774  [2024-11-28T11:39:45.158Z] Copying: 5120/5120 [kB] (average 1000 MBps) 00:09:15.032 00:09:15.032 11:39:45 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@43 -- # clear_nvme Nvme1n1 '' 4194330 00:09:15.032 11:39:45 spdk_dd.spdk_dd_bdev_to_bdev -- dd/common.sh@10 -- # local bdev=Nvme1n1 00:09:15.032 11:39:45 spdk_dd.spdk_dd_bdev_to_bdev -- dd/common.sh@11 -- # local nvme_ref= 00:09:15.032 11:39:45 spdk_dd.spdk_dd_bdev_to_bdev -- dd/common.sh@12 -- # local size=4194330 00:09:15.032 11:39:45 spdk_dd.spdk_dd_bdev_to_bdev -- dd/common.sh@14 -- # local bs=1048576 00:09:15.032 11:39:45 spdk_dd.spdk_dd_bdev_to_bdev -- dd/common.sh@15 -- # local count=5 00:09:15.032 11:39:45 spdk_dd.spdk_dd_bdev_to_bdev -- dd/common.sh@18 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/dev/zero --bs=1048576 --ob=Nvme1n1 --count=5 --json /dev/fd/62 00:09:15.033 11:39:45 spdk_dd.spdk_dd_bdev_to_bdev -- dd/common.sh@18 -- # gen_conf 00:09:15.033 11:39:45 spdk_dd.spdk_dd_bdev_to_bdev -- dd/common.sh@31 -- # xtrace_disable 00:09:15.033 11:39:45 spdk_dd.spdk_dd_bdev_to_bdev -- common/autotest_common.sh@10 -- # set +x 00:09:15.033 [2024-11-28 11:39:45.101957] Starting SPDK v25.01-pre git sha1 35cd3e84d / DPDK 24.11.0-rc4 initialization... 
00:09:15.033 [2024-11-28 11:39:45.102055] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid75141 ] 00:09:15.033 { 00:09:15.033 "subsystems": [ 00:09:15.033 { 00:09:15.033 "subsystem": "bdev", 00:09:15.033 "config": [ 00:09:15.033 { 00:09:15.033 "params": { 00:09:15.033 "trtype": "pcie", 00:09:15.033 "traddr": "0000:00:10.0", 00:09:15.033 "name": "Nvme0" 00:09:15.033 }, 00:09:15.033 "method": "bdev_nvme_attach_controller" 00:09:15.033 }, 00:09:15.033 { 00:09:15.033 "params": { 00:09:15.033 "trtype": "pcie", 00:09:15.033 "traddr": "0000:00:11.0", 00:09:15.033 "name": "Nvme1" 00:09:15.033 }, 00:09:15.033 "method": "bdev_nvme_attach_controller" 00:09:15.033 }, 00:09:15.033 { 00:09:15.033 "method": "bdev_wait_for_examine" 00:09:15.033 } 00:09:15.033 ] 00:09:15.033 } 00:09:15.033 ] 00:09:15.033 } 00:09:15.290 [2024-11-28 11:39:45.227364] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.11.0-rc4 is used. There is no support for it in SPDK. Enabled only for validation. 00:09:15.290 [2024-11-28 11:39:45.258079] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:15.290 [2024-11-28 11:39:45.304628] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:09:15.290 [2024-11-28 11:39:45.365146] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:09:15.548  [2024-11-28T11:39:45.932Z] Copying: 5120/5120 [kB] (average 833 MBps) 00:09:15.806 00:09:15.806 11:39:45 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@44 -- # rm -f /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 '' 00:09:15.806 00:09:15.806 real 0m7.320s 00:09:15.806 user 0m5.252s 00:09:15.806 sys 0m3.645s 00:09:15.806 11:39:45 spdk_dd.spdk_dd_bdev_to_bdev -- common/autotest_common.sh@1130 -- # xtrace_disable 00:09:15.806 11:39:45 spdk_dd.spdk_dd_bdev_to_bdev -- common/autotest_common.sh@10 -- # set +x 00:09:15.806 ************************************ 00:09:15.806 END TEST spdk_dd_bdev_to_bdev 00:09:15.806 ************************************ 00:09:15.806 11:39:45 spdk_dd -- dd/dd.sh@24 -- # (( SPDK_TEST_URING == 1 )) 00:09:15.806 11:39:45 spdk_dd -- dd/dd.sh@25 -- # run_test spdk_dd_uring /home/vagrant/spdk_repo/spdk/test/dd/uring.sh 00:09:15.806 11:39:45 spdk_dd -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:09:15.806 11:39:45 spdk_dd -- common/autotest_common.sh@1111 -- # xtrace_disable 00:09:15.806 11:39:45 spdk_dd -- common/autotest_common.sh@10 -- # set +x 00:09:15.806 ************************************ 00:09:15.806 START TEST spdk_dd_uring 00:09:15.806 ************************************ 00:09:15.806 11:39:45 spdk_dd.spdk_dd_uring -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/dd/uring.sh 00:09:15.806 * Looking for test storage... 
00:09:15.806 * Found test storage at /home/vagrant/spdk_repo/spdk/test/dd 00:09:15.806 11:39:45 spdk_dd.spdk_dd_uring -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:09:15.806 11:39:45 spdk_dd.spdk_dd_uring -- common/autotest_common.sh@1693 -- # lcov --version 00:09:15.806 11:39:45 spdk_dd.spdk_dd_uring -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:09:16.065 11:39:46 spdk_dd.spdk_dd_uring -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:09:16.065 11:39:46 spdk_dd.spdk_dd_uring -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:09:16.065 11:39:46 spdk_dd.spdk_dd_uring -- scripts/common.sh@333 -- # local ver1 ver1_l 00:09:16.065 11:39:46 spdk_dd.spdk_dd_uring -- scripts/common.sh@334 -- # local ver2 ver2_l 00:09:16.065 11:39:46 spdk_dd.spdk_dd_uring -- scripts/common.sh@336 -- # IFS=.-: 00:09:16.065 11:39:46 spdk_dd.spdk_dd_uring -- scripts/common.sh@336 -- # read -ra ver1 00:09:16.065 11:39:46 spdk_dd.spdk_dd_uring -- scripts/common.sh@337 -- # IFS=.-: 00:09:16.065 11:39:46 spdk_dd.spdk_dd_uring -- scripts/common.sh@337 -- # read -ra ver2 00:09:16.065 11:39:46 spdk_dd.spdk_dd_uring -- scripts/common.sh@338 -- # local 'op=<' 00:09:16.065 11:39:46 spdk_dd.spdk_dd_uring -- scripts/common.sh@340 -- # ver1_l=2 00:09:16.065 11:39:46 spdk_dd.spdk_dd_uring -- scripts/common.sh@341 -- # ver2_l=1 00:09:16.065 11:39:46 spdk_dd.spdk_dd_uring -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:09:16.065 11:39:46 spdk_dd.spdk_dd_uring -- scripts/common.sh@344 -- # case "$op" in 00:09:16.065 11:39:46 spdk_dd.spdk_dd_uring -- scripts/common.sh@345 -- # : 1 00:09:16.065 11:39:46 spdk_dd.spdk_dd_uring -- scripts/common.sh@364 -- # (( v = 0 )) 00:09:16.065 11:39:46 spdk_dd.spdk_dd_uring -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:09:16.065 11:39:46 spdk_dd.spdk_dd_uring -- scripts/common.sh@365 -- # decimal 1 00:09:16.065 11:39:46 spdk_dd.spdk_dd_uring -- scripts/common.sh@353 -- # local d=1 00:09:16.065 11:39:46 spdk_dd.spdk_dd_uring -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:09:16.065 11:39:46 spdk_dd.spdk_dd_uring -- scripts/common.sh@355 -- # echo 1 00:09:16.065 11:39:46 spdk_dd.spdk_dd_uring -- scripts/common.sh@365 -- # ver1[v]=1 00:09:16.065 11:39:46 spdk_dd.spdk_dd_uring -- scripts/common.sh@366 -- # decimal 2 00:09:16.065 11:39:46 spdk_dd.spdk_dd_uring -- scripts/common.sh@353 -- # local d=2 00:09:16.065 11:39:46 spdk_dd.spdk_dd_uring -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:09:16.065 11:39:46 spdk_dd.spdk_dd_uring -- scripts/common.sh@355 -- # echo 2 00:09:16.065 11:39:46 spdk_dd.spdk_dd_uring -- scripts/common.sh@366 -- # ver2[v]=2 00:09:16.065 11:39:46 spdk_dd.spdk_dd_uring -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:09:16.065 11:39:46 spdk_dd.spdk_dd_uring -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:09:16.065 11:39:46 spdk_dd.spdk_dd_uring -- scripts/common.sh@368 -- # return 0 00:09:16.065 11:39:46 spdk_dd.spdk_dd_uring -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:09:16.065 11:39:46 spdk_dd.spdk_dd_uring -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:09:16.065 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:16.065 --rc genhtml_branch_coverage=1 00:09:16.065 --rc genhtml_function_coverage=1 00:09:16.065 --rc genhtml_legend=1 00:09:16.065 --rc geninfo_all_blocks=1 00:09:16.065 --rc geninfo_unexecuted_blocks=1 00:09:16.065 00:09:16.065 ' 00:09:16.065 11:39:46 spdk_dd.spdk_dd_uring -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:09:16.065 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:16.065 --rc genhtml_branch_coverage=1 00:09:16.065 --rc genhtml_function_coverage=1 00:09:16.065 --rc genhtml_legend=1 00:09:16.065 --rc geninfo_all_blocks=1 00:09:16.065 --rc geninfo_unexecuted_blocks=1 00:09:16.065 00:09:16.065 ' 00:09:16.065 11:39:46 spdk_dd.spdk_dd_uring -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:09:16.065 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:16.065 --rc genhtml_branch_coverage=1 00:09:16.066 --rc genhtml_function_coverage=1 00:09:16.066 --rc genhtml_legend=1 00:09:16.066 --rc geninfo_all_blocks=1 00:09:16.066 --rc geninfo_unexecuted_blocks=1 00:09:16.066 00:09:16.066 ' 00:09:16.066 11:39:46 spdk_dd.spdk_dd_uring -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:09:16.066 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:16.066 --rc genhtml_branch_coverage=1 00:09:16.066 --rc genhtml_function_coverage=1 00:09:16.066 --rc genhtml_legend=1 00:09:16.066 --rc geninfo_all_blocks=1 00:09:16.066 --rc geninfo_unexecuted_blocks=1 00:09:16.066 00:09:16.066 ' 00:09:16.066 11:39:46 spdk_dd.spdk_dd_uring -- dd/common.sh@7 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:09:16.066 11:39:46 spdk_dd.spdk_dd_uring -- scripts/common.sh@15 -- # shopt -s extglob 00:09:16.066 11:39:46 spdk_dd.spdk_dd_uring -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:09:16.066 11:39:46 spdk_dd.spdk_dd_uring -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:09:16.066 11:39:46 spdk_dd.spdk_dd_uring -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:09:16.066 11:39:46 spdk_dd.spdk_dd_uring -- paths/export.sh@2 
-- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:16.066 11:39:46 spdk_dd.spdk_dd_uring -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:16.066 11:39:46 spdk_dd.spdk_dd_uring -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:16.066 11:39:46 spdk_dd.spdk_dd_uring -- paths/export.sh@5 -- # export PATH 00:09:16.066 11:39:46 spdk_dd.spdk_dd_uring -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:16.066 11:39:46 spdk_dd.spdk_dd_uring -- dd/uring.sh@103 -- # run_test dd_uring_copy uring_zram_copy 00:09:16.066 11:39:46 spdk_dd.spdk_dd_uring -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:09:16.066 11:39:46 spdk_dd.spdk_dd_uring -- common/autotest_common.sh@1111 -- # xtrace_disable 00:09:16.066 11:39:46 spdk_dd.spdk_dd_uring -- common/autotest_common.sh@10 -- # set +x 00:09:16.066 ************************************ 00:09:16.066 START TEST dd_uring_copy 00:09:16.066 ************************************ 00:09:16.066 11:39:46 spdk_dd.spdk_dd_uring.dd_uring_copy -- common/autotest_common.sh@1129 -- # uring_zram_copy 00:09:16.066 11:39:46 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@15 -- # local zram_dev_id 00:09:16.066 11:39:46 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@16 -- # local magic 00:09:16.066 11:39:46 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@17 -- # local magic_file0=/home/vagrant/spdk_repo/spdk/test/dd/magic.dump0 00:09:16.066 11:39:46 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@18 -- # local magic_file1=/home/vagrant/spdk_repo/spdk/test/dd/magic.dump1 00:09:16.066 
11:39:46 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@19 -- # local verify_magic 00:09:16.066 11:39:46 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@21 -- # init_zram 00:09:16.066 11:39:46 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/common.sh@159 -- # [[ -e /sys/class/zram-control ]] 00:09:16.066 11:39:46 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/common.sh@160 -- # return 00:09:16.066 11:39:46 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@22 -- # create_zram_dev 00:09:16.066 11:39:46 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/common.sh@164 -- # cat /sys/class/zram-control/hot_add 00:09:16.066 11:39:46 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@22 -- # zram_dev_id=1 00:09:16.066 11:39:46 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@23 -- # set_zram_dev 1 512M 00:09:16.066 11:39:46 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/common.sh@177 -- # local id=1 00:09:16.066 11:39:46 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/common.sh@178 -- # local size=512M 00:09:16.066 11:39:46 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/common.sh@180 -- # [[ -e /sys/block/zram1 ]] 00:09:16.066 11:39:46 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/common.sh@182 -- # echo 512M 00:09:16.066 11:39:46 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@25 -- # local ubdev=uring0 ufile=/dev/zram1 00:09:16.066 11:39:46 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@27 -- # method_bdev_uring_create_0=(['filename']='/dev/zram1' ['name']='uring0') 00:09:16.066 11:39:46 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@27 -- # local -A method_bdev_uring_create_0 00:09:16.066 11:39:46 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@33 -- # local mbdev=malloc0 mbdev_b=1048576 mbdev_bs=512 00:09:16.066 11:39:46 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@35 -- # method_bdev_malloc_create_0=(['name']='malloc0' ['num_blocks']='1048576' ['block_size']='512') 00:09:16.066 11:39:46 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@35 -- # local -A method_bdev_malloc_create_0 00:09:16.066 11:39:46 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@41 -- # gen_bytes 1024 00:09:16.066 11:39:46 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/common.sh@98 -- # xtrace_disable 00:09:16.066 11:39:46 spdk_dd.spdk_dd_uring.dd_uring_copy -- common/autotest_common.sh@10 -- # set +x 00:09:16.066 11:39:46 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@41 -- # magic=mw6cq1bzd78vydu8n23qmiyw56daw8eqks3eixoy6wpdf0oxvko6c64c1x9gxaje9lzcdw3kvf6t391zdtqoku9y0b2s12nl7h8t8yddywstq31u1tgyo1oh67wj56zh7sxgy3bknfe82gq5v30knrabpoed9k1jphfbi2swhezi4p5j29gtkulpgt4e68dp0ueolx3e069qlp7gufd2gk1iwkbi68uxw3zyg5kk38s9svxsom63h2ygewlvwuv2ww5ndqq8hptpm2a6b00q20m03s1s3ikscokg4rweydh1h2dq03kd6oyc4pjuqv16sc168ulq3jd4yiccfkxusu97jd2asxzszsgp9zuyrdhrmyrbeyf2mfgwqu18888bknp1zsh2vwmdw4maa8idjbt64fwpkto15u6l46nkzdehzl5lfsyu2m6jnj9ct0qmr6tv6y7suk3okm2dvge0a0achaklgyzq94y94v8g6v7osh4uiyidytltqdrclja94v4uvjjlz9j07zjn3w5sxntvve40yyanjk9qdot4npgu6yo6p73jbpocl04umehwcx8210zp500sul6md57t2p52l7vmrisq2q7h6b3ltlqzhcyaxyf92q794pkco4lt3alixxfj208k2dmarucv0s2e38udvdhsj2215ticfosehefg09awczmuyxarvvyy98hzbnm917mgwy7uaf7alqufbi9oh9n5vhkdzbslee45r0vtbbh9n05fk24vchit3s6tnjkj59g4lm8xxt7w2otdgv0wsu0l5z8m0rp66r0tt668mjcj5lr24o1ua6eifj7ccb7f02og7tb77wc3vfocpiup0jn5dt9qcsz4rrzm8ydywle6unwru3e3jcy79ve8beyznk1dwcnm5jv79apgb69ac9no1i3qxquunc3013u20nr4v9bchntaaffb5dln6ctnlghober09fbtl2k1qvjmala63a6njsywe41iyww883n0zolhn4htvsk9 00:09:16.066 11:39:46 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@42 -- # echo 
mw6cq1bzd78vydu8n23qmiyw56daw8eqks3eixoy6wpdf0oxvko6c64c1x9gxaje9lzcdw3kvf6t391zdtqoku9y0b2s12nl7h8t8yddywstq31u1tgyo1oh67wj56zh7sxgy3bknfe82gq5v30knrabpoed9k1jphfbi2swhezi4p5j29gtkulpgt4e68dp0ueolx3e069qlp7gufd2gk1iwkbi68uxw3zyg5kk38s9svxsom63h2ygewlvwuv2ww5ndqq8hptpm2a6b00q20m03s1s3ikscokg4rweydh1h2dq03kd6oyc4pjuqv16sc168ulq3jd4yiccfkxusu97jd2asxzszsgp9zuyrdhrmyrbeyf2mfgwqu18888bknp1zsh2vwmdw4maa8idjbt64fwpkto15u6l46nkzdehzl5lfsyu2m6jnj9ct0qmr6tv6y7suk3okm2dvge0a0achaklgyzq94y94v8g6v7osh4uiyidytltqdrclja94v4uvjjlz9j07zjn3w5sxntvve40yyanjk9qdot4npgu6yo6p73jbpocl04umehwcx8210zp500sul6md57t2p52l7vmrisq2q7h6b3ltlqzhcyaxyf92q794pkco4lt3alixxfj208k2dmarucv0s2e38udvdhsj2215ticfosehefg09awczmuyxarvvyy98hzbnm917mgwy7uaf7alqufbi9oh9n5vhkdzbslee45r0vtbbh9n05fk24vchit3s6tnjkj59g4lm8xxt7w2otdgv0wsu0l5z8m0rp66r0tt668mjcj5lr24o1ua6eifj7ccb7f02og7tb77wc3vfocpiup0jn5dt9qcsz4rrzm8ydywle6unwru3e3jcy79ve8beyznk1dwcnm5jv79apgb69ac9no1i3qxquunc3013u20nr4v9bchntaaffb5dln6ctnlghober09fbtl2k1qvjmala63a6njsywe41iyww883n0zolhn4htvsk9 00:09:16.066 11:39:46 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@46 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/dev/zero --of=/home/vagrant/spdk_repo/spdk/test/dd/magic.dump0 --oflag=append --bs=536869887 --count=1 00:09:16.066 [2024-11-28 11:39:46.149532] Starting SPDK v25.01-pre git sha1 35cd3e84d / DPDK 24.11.0-rc4 initialization... 00:09:16.066 [2024-11-28 11:39:46.149655] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid75219 ] 00:09:16.346 [2024-11-28 11:39:46.275869] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.11.0-rc4 is used. There is no support for it in SPDK. Enabled only for validation. 00:09:16.346 [2024-11-28 11:39:46.310072] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:16.346 [2024-11-28 11:39:46.359163] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:09:16.346 [2024-11-28 11:39:46.419147] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:09:17.283  [2024-11-28T11:39:47.669Z] Copying: 511/511 [MB] (average 1336 MBps) 00:09:17.543 00:09:17.543 11:39:47 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@54 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/magic.dump0 --ob=uring0 --json /dev/fd/62 00:09:17.543 11:39:47 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@54 -- # gen_conf 00:09:17.543 11:39:47 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/common.sh@31 -- # xtrace_disable 00:09:17.543 11:39:47 spdk_dd.spdk_dd_uring.dd_uring_copy -- common/autotest_common.sh@10 -- # set +x 00:09:17.543 [2024-11-28 11:39:47.504645] Starting SPDK v25.01-pre git sha1 35cd3e84d / DPDK 24.11.0-rc4 initialization... 
00:09:17.543 [2024-11-28 11:39:47.504727] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid75235 ] 00:09:17.543 { 00:09:17.543 "subsystems": [ 00:09:17.543 { 00:09:17.543 "subsystem": "bdev", 00:09:17.543 "config": [ 00:09:17.543 { 00:09:17.543 "params": { 00:09:17.543 "block_size": 512, 00:09:17.543 "num_blocks": 1048576, 00:09:17.543 "name": "malloc0" 00:09:17.543 }, 00:09:17.543 "method": "bdev_malloc_create" 00:09:17.543 }, 00:09:17.543 { 00:09:17.543 "params": { 00:09:17.543 "filename": "/dev/zram1", 00:09:17.543 "name": "uring0" 00:09:17.543 }, 00:09:17.543 "method": "bdev_uring_create" 00:09:17.543 }, 00:09:17.543 { 00:09:17.543 "method": "bdev_wait_for_examine" 00:09:17.543 } 00:09:17.543 ] 00:09:17.543 } 00:09:17.543 ] 00:09:17.543 } 00:09:17.543 [2024-11-28 11:39:47.628109] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.11.0-rc4 is used. There is no support for it in SPDK. Enabled only for validation. 00:09:17.543 [2024-11-28 11:39:47.657053] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:17.802 [2024-11-28 11:39:47.710787] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:09:17.802 [2024-11-28 11:39:47.769518] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:09:19.180  [2024-11-28T11:39:50.241Z] Copying: 176/512 [MB] (176 MBps) [2024-11-28T11:39:50.809Z] Copying: 375/512 [MB] (199 MBps) [2024-11-28T11:39:51.068Z] Copying: 512/512 [MB] (average 191 MBps) 00:09:20.942 00:09:21.202 11:39:51 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@60 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=uring0 --of=/home/vagrant/spdk_repo/spdk/test/dd/magic.dump1 --json /dev/fd/62 00:09:21.202 11:39:51 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@60 -- # gen_conf 00:09:21.202 11:39:51 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/common.sh@31 -- # xtrace_disable 00:09:21.202 11:39:51 spdk_dd.spdk_dd_uring.dd_uring_copy -- common/autotest_common.sh@10 -- # set +x 00:09:21.202 { 00:09:21.202 "subsystems": [ 00:09:21.202 { 00:09:21.202 "subsystem": "bdev", 00:09:21.202 "config": [ 00:09:21.202 { 00:09:21.202 "params": { 00:09:21.202 "block_size": 512, 00:09:21.202 "num_blocks": 1048576, 00:09:21.202 "name": "malloc0" 00:09:21.202 }, 00:09:21.202 "method": "bdev_malloc_create" 00:09:21.202 }, 00:09:21.202 { 00:09:21.202 "params": { 00:09:21.202 "filename": "/dev/zram1", 00:09:21.202 "name": "uring0" 00:09:21.202 }, 00:09:21.202 "method": "bdev_uring_create" 00:09:21.202 }, 00:09:21.202 { 00:09:21.202 "method": "bdev_wait_for_examine" 00:09:21.202 } 00:09:21.202 ] 00:09:21.202 } 00:09:21.202 ] 00:09:21.202 } 00:09:21.202 [2024-11-28 11:39:51.130234] Starting SPDK v25.01-pre git sha1 35cd3e84d / DPDK 24.11.0-rc4 initialization... 00:09:21.202 [2024-11-28 11:39:51.130379] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid75290 ] 00:09:21.202 [2024-11-28 11:39:51.256270] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.11.0-rc4 is used. There is no support for it in SPDK. Enabled only for validation. 
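[editor's note] The dd_uring_copy run above creates a zram device, sizes it, and then drives spdk_dd against a uring bdev bound to it. A rough standalone sketch of those steps, with the device id, size and file names taken from this log (the real helpers live in test/dd/uring.sh and dd/common.sh, and the JSON config shown above is fed via /dev/fd/62):
    # sketch only; assumes the zram module is already loaded and spdk is built under ./build
    dev_id=$(cat /sys/class/zram-control/hot_add)        # hot-add a new device, e.g. zram1
    echo 512M > /sys/block/zram${dev_id}/disksize        # size it, as set_zram_dev 1 512M does
    ./build/bin/spdk_dd --if=magic.dump0 --ob=uring0 --json uring.json   # uring.json: bdev_malloc_create (malloc0) + bdev_uring_create on /dev/zram${dev_id}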
00:09:21.202 [2024-11-28 11:39:51.286508] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:21.461 [2024-11-28 11:39:51.332138] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:09:21.461 [2024-11-28 11:39:51.394499] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:09:22.839  [2024-11-28T11:39:53.902Z] Copying: 155/512 [MB] (155 MBps) [2024-11-28T11:39:54.839Z] Copying: 311/512 [MB] (155 MBps) [2024-11-28T11:39:55.099Z] Copying: 469/512 [MB] (158 MBps) [2024-11-28T11:39:55.358Z] Copying: 512/512 [MB] (average 155 MBps) 00:09:25.232 00:09:25.232 11:39:55 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@65 -- # read -rn1024 verify_magic 00:09:25.232 11:39:55 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@66 -- # [[ mw6cq1bzd78vydu8n23qmiyw56daw8eqks3eixoy6wpdf0oxvko6c64c1x9gxaje9lzcdw3kvf6t391zdtqoku9y0b2s12nl7h8t8yddywstq31u1tgyo1oh67wj56zh7sxgy3bknfe82gq5v30knrabpoed9k1jphfbi2swhezi4p5j29gtkulpgt4e68dp0ueolx3e069qlp7gufd2gk1iwkbi68uxw3zyg5kk38s9svxsom63h2ygewlvwuv2ww5ndqq8hptpm2a6b00q20m03s1s3ikscokg4rweydh1h2dq03kd6oyc4pjuqv16sc168ulq3jd4yiccfkxusu97jd2asxzszsgp9zuyrdhrmyrbeyf2mfgwqu18888bknp1zsh2vwmdw4maa8idjbt64fwpkto15u6l46nkzdehzl5lfsyu2m6jnj9ct0qmr6tv6y7suk3okm2dvge0a0achaklgyzq94y94v8g6v7osh4uiyidytltqdrclja94v4uvjjlz9j07zjn3w5sxntvve40yyanjk9qdot4npgu6yo6p73jbpocl04umehwcx8210zp500sul6md57t2p52l7vmrisq2q7h6b3ltlqzhcyaxyf92q794pkco4lt3alixxfj208k2dmarucv0s2e38udvdhsj2215ticfosehefg09awczmuyxarvvyy98hzbnm917mgwy7uaf7alqufbi9oh9n5vhkdzbslee45r0vtbbh9n05fk24vchit3s6tnjkj59g4lm8xxt7w2otdgv0wsu0l5z8m0rp66r0tt668mjcj5lr24o1ua6eifj7ccb7f02og7tb77wc3vfocpiup0jn5dt9qcsz4rrzm8ydywle6unwru3e3jcy79ve8beyznk1dwcnm5jv79apgb69ac9no1i3qxquunc3013u20nr4v9bchntaaffb5dln6ctnlghober09fbtl2k1qvjmala63a6njsywe41iyww883n0zolhn4htvsk9 == 
\m\w\6\c\q\1\b\z\d\7\8\v\y\d\u\8\n\2\3\q\m\i\y\w\5\6\d\a\w\8\e\q\k\s\3\e\i\x\o\y\6\w\p\d\f\0\o\x\v\k\o\6\c\6\4\c\1\x\9\g\x\a\j\e\9\l\z\c\d\w\3\k\v\f\6\t\3\9\1\z\d\t\q\o\k\u\9\y\0\b\2\s\1\2\n\l\7\h\8\t\8\y\d\d\y\w\s\t\q\3\1\u\1\t\g\y\o\1\o\h\6\7\w\j\5\6\z\h\7\s\x\g\y\3\b\k\n\f\e\8\2\g\q\5\v\3\0\k\n\r\a\b\p\o\e\d\9\k\1\j\p\h\f\b\i\2\s\w\h\e\z\i\4\p\5\j\2\9\g\t\k\u\l\p\g\t\4\e\6\8\d\p\0\u\e\o\l\x\3\e\0\6\9\q\l\p\7\g\u\f\d\2\g\k\1\i\w\k\b\i\6\8\u\x\w\3\z\y\g\5\k\k\3\8\s\9\s\v\x\s\o\m\6\3\h\2\y\g\e\w\l\v\w\u\v\2\w\w\5\n\d\q\q\8\h\p\t\p\m\2\a\6\b\0\0\q\2\0\m\0\3\s\1\s\3\i\k\s\c\o\k\g\4\r\w\e\y\d\h\1\h\2\d\q\0\3\k\d\6\o\y\c\4\p\j\u\q\v\1\6\s\c\1\6\8\u\l\q\3\j\d\4\y\i\c\c\f\k\x\u\s\u\9\7\j\d\2\a\s\x\z\s\z\s\g\p\9\z\u\y\r\d\h\r\m\y\r\b\e\y\f\2\m\f\g\w\q\u\1\8\8\8\8\b\k\n\p\1\z\s\h\2\v\w\m\d\w\4\m\a\a\8\i\d\j\b\t\6\4\f\w\p\k\t\o\1\5\u\6\l\4\6\n\k\z\d\e\h\z\l\5\l\f\s\y\u\2\m\6\j\n\j\9\c\t\0\q\m\r\6\t\v\6\y\7\s\u\k\3\o\k\m\2\d\v\g\e\0\a\0\a\c\h\a\k\l\g\y\z\q\9\4\y\9\4\v\8\g\6\v\7\o\s\h\4\u\i\y\i\d\y\t\l\t\q\d\r\c\l\j\a\9\4\v\4\u\v\j\j\l\z\9\j\0\7\z\j\n\3\w\5\s\x\n\t\v\v\e\4\0\y\y\a\n\j\k\9\q\d\o\t\4\n\p\g\u\6\y\o\6\p\7\3\j\b\p\o\c\l\0\4\u\m\e\h\w\c\x\8\2\1\0\z\p\5\0\0\s\u\l\6\m\d\5\7\t\2\p\5\2\l\7\v\m\r\i\s\q\2\q\7\h\6\b\3\l\t\l\q\z\h\c\y\a\x\y\f\9\2\q\7\9\4\p\k\c\o\4\l\t\3\a\l\i\x\x\f\j\2\0\8\k\2\d\m\a\r\u\c\v\0\s\2\e\3\8\u\d\v\d\h\s\j\2\2\1\5\t\i\c\f\o\s\e\h\e\f\g\0\9\a\w\c\z\m\u\y\x\a\r\v\v\y\y\9\8\h\z\b\n\m\9\1\7\m\g\w\y\7\u\a\f\7\a\l\q\u\f\b\i\9\o\h\9\n\5\v\h\k\d\z\b\s\l\e\e\4\5\r\0\v\t\b\b\h\9\n\0\5\f\k\2\4\v\c\h\i\t\3\s\6\t\n\j\k\j\5\9\g\4\l\m\8\x\x\t\7\w\2\o\t\d\g\v\0\w\s\u\0\l\5\z\8\m\0\r\p\6\6\r\0\t\t\6\6\8\m\j\c\j\5\l\r\2\4\o\1\u\a\6\e\i\f\j\7\c\c\b\7\f\0\2\o\g\7\t\b\7\7\w\c\3\v\f\o\c\p\i\u\p\0\j\n\5\d\t\9\q\c\s\z\4\r\r\z\m\8\y\d\y\w\l\e\6\u\n\w\r\u\3\e\3\j\c\y\7\9\v\e\8\b\e\y\z\n\k\1\d\w\c\n\m\5\j\v\7\9\a\p\g\b\6\9\a\c\9\n\o\1\i\3\q\x\q\u\u\n\c\3\0\1\3\u\2\0\n\r\4\v\9\b\c\h\n\t\a\a\f\f\b\5\d\l\n\6\c\t\n\l\g\h\o\b\e\r\0\9\f\b\t\l\2\k\1\q\v\j\m\a\l\a\6\3\a\6\n\j\s\y\w\e\4\1\i\y\w\w\8\8\3\n\0\z\o\l\h\n\4\h\t\v\s\k\9 ]] 00:09:25.232 11:39:55 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@68 -- # read -rn1024 verify_magic 00:09:25.232 11:39:55 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@69 -- # [[ mw6cq1bzd78vydu8n23qmiyw56daw8eqks3eixoy6wpdf0oxvko6c64c1x9gxaje9lzcdw3kvf6t391zdtqoku9y0b2s12nl7h8t8yddywstq31u1tgyo1oh67wj56zh7sxgy3bknfe82gq5v30knrabpoed9k1jphfbi2swhezi4p5j29gtkulpgt4e68dp0ueolx3e069qlp7gufd2gk1iwkbi68uxw3zyg5kk38s9svxsom63h2ygewlvwuv2ww5ndqq8hptpm2a6b00q20m03s1s3ikscokg4rweydh1h2dq03kd6oyc4pjuqv16sc168ulq3jd4yiccfkxusu97jd2asxzszsgp9zuyrdhrmyrbeyf2mfgwqu18888bknp1zsh2vwmdw4maa8idjbt64fwpkto15u6l46nkzdehzl5lfsyu2m6jnj9ct0qmr6tv6y7suk3okm2dvge0a0achaklgyzq94y94v8g6v7osh4uiyidytltqdrclja94v4uvjjlz9j07zjn3w5sxntvve40yyanjk9qdot4npgu6yo6p73jbpocl04umehwcx8210zp500sul6md57t2p52l7vmrisq2q7h6b3ltlqzhcyaxyf92q794pkco4lt3alixxfj208k2dmarucv0s2e38udvdhsj2215ticfosehefg09awczmuyxarvvyy98hzbnm917mgwy7uaf7alqufbi9oh9n5vhkdzbslee45r0vtbbh9n05fk24vchit3s6tnjkj59g4lm8xxt7w2otdgv0wsu0l5z8m0rp66r0tt668mjcj5lr24o1ua6eifj7ccb7f02og7tb77wc3vfocpiup0jn5dt9qcsz4rrzm8ydywle6unwru3e3jcy79ve8beyznk1dwcnm5jv79apgb69ac9no1i3qxquunc3013u20nr4v9bchntaaffb5dln6ctnlghober09fbtl2k1qvjmala63a6njsywe41iyww883n0zolhn4htvsk9 == 
\m\w\6\c\q\1\b\z\d\7\8\v\y\d\u\8\n\2\3\q\m\i\y\w\5\6\d\a\w\8\e\q\k\s\3\e\i\x\o\y\6\w\p\d\f\0\o\x\v\k\o\6\c\6\4\c\1\x\9\g\x\a\j\e\9\l\z\c\d\w\3\k\v\f\6\t\3\9\1\z\d\t\q\o\k\u\9\y\0\b\2\s\1\2\n\l\7\h\8\t\8\y\d\d\y\w\s\t\q\3\1\u\1\t\g\y\o\1\o\h\6\7\w\j\5\6\z\h\7\s\x\g\y\3\b\k\n\f\e\8\2\g\q\5\v\3\0\k\n\r\a\b\p\o\e\d\9\k\1\j\p\h\f\b\i\2\s\w\h\e\z\i\4\p\5\j\2\9\g\t\k\u\l\p\g\t\4\e\6\8\d\p\0\u\e\o\l\x\3\e\0\6\9\q\l\p\7\g\u\f\d\2\g\k\1\i\w\k\b\i\6\8\u\x\w\3\z\y\g\5\k\k\3\8\s\9\s\v\x\s\o\m\6\3\h\2\y\g\e\w\l\v\w\u\v\2\w\w\5\n\d\q\q\8\h\p\t\p\m\2\a\6\b\0\0\q\2\0\m\0\3\s\1\s\3\i\k\s\c\o\k\g\4\r\w\e\y\d\h\1\h\2\d\q\0\3\k\d\6\o\y\c\4\p\j\u\q\v\1\6\s\c\1\6\8\u\l\q\3\j\d\4\y\i\c\c\f\k\x\u\s\u\9\7\j\d\2\a\s\x\z\s\z\s\g\p\9\z\u\y\r\d\h\r\m\y\r\b\e\y\f\2\m\f\g\w\q\u\1\8\8\8\8\b\k\n\p\1\z\s\h\2\v\w\m\d\w\4\m\a\a\8\i\d\j\b\t\6\4\f\w\p\k\t\o\1\5\u\6\l\4\6\n\k\z\d\e\h\z\l\5\l\f\s\y\u\2\m\6\j\n\j\9\c\t\0\q\m\r\6\t\v\6\y\7\s\u\k\3\o\k\m\2\d\v\g\e\0\a\0\a\c\h\a\k\l\g\y\z\q\9\4\y\9\4\v\8\g\6\v\7\o\s\h\4\u\i\y\i\d\y\t\l\t\q\d\r\c\l\j\a\9\4\v\4\u\v\j\j\l\z\9\j\0\7\z\j\n\3\w\5\s\x\n\t\v\v\e\4\0\y\y\a\n\j\k\9\q\d\o\t\4\n\p\g\u\6\y\o\6\p\7\3\j\b\p\o\c\l\0\4\u\m\e\h\w\c\x\8\2\1\0\z\p\5\0\0\s\u\l\6\m\d\5\7\t\2\p\5\2\l\7\v\m\r\i\s\q\2\q\7\h\6\b\3\l\t\l\q\z\h\c\y\a\x\y\f\9\2\q\7\9\4\p\k\c\o\4\l\t\3\a\l\i\x\x\f\j\2\0\8\k\2\d\m\a\r\u\c\v\0\s\2\e\3\8\u\d\v\d\h\s\j\2\2\1\5\t\i\c\f\o\s\e\h\e\f\g\0\9\a\w\c\z\m\u\y\x\a\r\v\v\y\y\9\8\h\z\b\n\m\9\1\7\m\g\w\y\7\u\a\f\7\a\l\q\u\f\b\i\9\o\h\9\n\5\v\h\k\d\z\b\s\l\e\e\4\5\r\0\v\t\b\b\h\9\n\0\5\f\k\2\4\v\c\h\i\t\3\s\6\t\n\j\k\j\5\9\g\4\l\m\8\x\x\t\7\w\2\o\t\d\g\v\0\w\s\u\0\l\5\z\8\m\0\r\p\6\6\r\0\t\t\6\6\8\m\j\c\j\5\l\r\2\4\o\1\u\a\6\e\i\f\j\7\c\c\b\7\f\0\2\o\g\7\t\b\7\7\w\c\3\v\f\o\c\p\i\u\p\0\j\n\5\d\t\9\q\c\s\z\4\r\r\z\m\8\y\d\y\w\l\e\6\u\n\w\r\u\3\e\3\j\c\y\7\9\v\e\8\b\e\y\z\n\k\1\d\w\c\n\m\5\j\v\7\9\a\p\g\b\6\9\a\c\9\n\o\1\i\3\q\x\q\u\u\n\c\3\0\1\3\u\2\0\n\r\4\v\9\b\c\h\n\t\a\a\f\f\b\5\d\l\n\6\c\t\n\l\g\h\o\b\e\r\0\9\f\b\t\l\2\k\1\q\v\j\m\a\l\a\6\3\a\6\n\j\s\y\w\e\4\1\i\y\w\w\8\8\3\n\0\z\o\l\h\n\4\h\t\v\s\k\9 ]] 00:09:25.232 11:39:55 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@71 -- # diff -q /home/vagrant/spdk_repo/spdk/test/dd/magic.dump0 /home/vagrant/spdk_repo/spdk/test/dd/magic.dump1 00:09:25.800 11:39:55 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@75 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=uring0 --ob=malloc0 --json /dev/fd/62 00:09:25.800 11:39:55 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@75 -- # gen_conf 00:09:25.800 11:39:55 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/common.sh@31 -- # xtrace_disable 00:09:25.800 11:39:55 spdk_dd.spdk_dd_uring.dd_uring_copy -- common/autotest_common.sh@10 -- # set +x 00:09:25.800 [2024-11-28 11:39:55.750362] Starting SPDK v25.01-pre git sha1 35cd3e84d / DPDK 24.11.0-rc4 initialization... 
00:09:25.800 [2024-11-28 11:39:55.750471] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid75359 ] 00:09:25.800 { 00:09:25.800 "subsystems": [ 00:09:25.800 { 00:09:25.800 "subsystem": "bdev", 00:09:25.800 "config": [ 00:09:25.800 { 00:09:25.800 "params": { 00:09:25.800 "block_size": 512, 00:09:25.800 "num_blocks": 1048576, 00:09:25.800 "name": "malloc0" 00:09:25.800 }, 00:09:25.800 "method": "bdev_malloc_create" 00:09:25.800 }, 00:09:25.800 { 00:09:25.800 "params": { 00:09:25.800 "filename": "/dev/zram1", 00:09:25.800 "name": "uring0" 00:09:25.800 }, 00:09:25.800 "method": "bdev_uring_create" 00:09:25.800 }, 00:09:25.800 { 00:09:25.800 "method": "bdev_wait_for_examine" 00:09:25.800 } 00:09:25.800 ] 00:09:25.800 } 00:09:25.800 ] 00:09:25.800 } 00:09:25.800 [2024-11-28 11:39:55.871607] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.11.0-rc4 is used. There is no support for it in SPDK. Enabled only for validation. 00:09:25.800 [2024-11-28 11:39:55.896334] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:26.061 [2024-11-28 11:39:55.951637] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:09:26.061 [2024-11-28 11:39:56.012043] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:09:27.443  [2024-11-28T11:39:58.507Z] Copying: 152/512 [MB] (152 MBps) [2024-11-28T11:39:59.441Z] Copying: 304/512 [MB] (151 MBps) [2024-11-28T11:39:59.700Z] Copying: 462/512 [MB] (158 MBps) [2024-11-28T11:40:00.269Z] Copying: 512/512 [MB] (average 153 MBps) 00:09:30.143 00:09:30.143 11:39:59 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@82 -- # method_bdev_uring_delete_0=(['name']='uring0') 00:09:30.143 11:39:59 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@82 -- # local -A method_bdev_uring_delete_0 00:09:30.143 11:39:59 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@87 -- # : 00:09:30.143 11:39:59 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@87 -- # : 00:09:30.143 11:39:59 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@87 -- # gen_conf 00:09:30.143 11:39:59 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@87 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/dev/fd/62 --of=/dev/fd/61 --json /dev/fd/59 00:09:30.143 11:39:59 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/common.sh@31 -- # xtrace_disable 00:09:30.143 11:39:59 spdk_dd.spdk_dd_uring.dd_uring_copy -- common/autotest_common.sh@10 -- # set +x 00:09:30.143 [2024-11-28 11:40:00.044785] Starting SPDK v25.01-pre git sha1 35cd3e84d / DPDK 24.11.0-rc4 initialization... 
00:09:30.143 [2024-11-28 11:40:00.044889] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid75421 ] 00:09:30.143 { 00:09:30.143 "subsystems": [ 00:09:30.143 { 00:09:30.143 "subsystem": "bdev", 00:09:30.143 "config": [ 00:09:30.143 { 00:09:30.143 "params": { 00:09:30.143 "block_size": 512, 00:09:30.143 "num_blocks": 1048576, 00:09:30.143 "name": "malloc0" 00:09:30.143 }, 00:09:30.143 "method": "bdev_malloc_create" 00:09:30.143 }, 00:09:30.143 { 00:09:30.143 "params": { 00:09:30.143 "filename": "/dev/zram1", 00:09:30.143 "name": "uring0" 00:09:30.143 }, 00:09:30.143 "method": "bdev_uring_create" 00:09:30.143 }, 00:09:30.143 { 00:09:30.144 "params": { 00:09:30.144 "name": "uring0" 00:09:30.144 }, 00:09:30.144 "method": "bdev_uring_delete" 00:09:30.144 }, 00:09:30.144 { 00:09:30.144 "method": "bdev_wait_for_examine" 00:09:30.144 } 00:09:30.144 ] 00:09:30.144 } 00:09:30.144 ] 00:09:30.144 } 00:09:30.144 [2024-11-28 11:40:00.171494] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.11.0-rc4 is used. There is no support for it in SPDK. Enabled only for validation. 00:09:30.144 [2024-11-28 11:40:00.198834] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:30.144 [2024-11-28 11:40:00.249947] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:09:30.403 [2024-11-28 11:40:00.307761] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:09:30.403  [2024-11-28T11:40:01.097Z] Copying: 0/0 [B] (average 0 Bps) 00:09:30.971 00:09:30.971 11:40:00 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@94 -- # : 00:09:30.971 11:40:00 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@94 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=uring0 --of=/dev/fd/62 --json /dev/fd/61 00:09:30.971 11:40:00 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@94 -- # gen_conf 00:09:30.971 11:40:00 spdk_dd.spdk_dd_uring.dd_uring_copy -- common/autotest_common.sh@652 -- # local es=0 00:09:30.971 11:40:00 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/common.sh@31 -- # xtrace_disable 00:09:30.971 11:40:00 spdk_dd.spdk_dd_uring.dd_uring_copy -- common/autotest_common.sh@654 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=uring0 --of=/dev/fd/62 --json /dev/fd/61 00:09:30.971 11:40:00 spdk_dd.spdk_dd_uring.dd_uring_copy -- common/autotest_common.sh@10 -- # set +x 00:09:30.971 11:40:00 spdk_dd.spdk_dd_uring.dd_uring_copy -- common/autotest_common.sh@640 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:09:30.971 11:40:00 spdk_dd.spdk_dd_uring.dd_uring_copy -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:09:30.971 11:40:00 spdk_dd.spdk_dd_uring.dd_uring_copy -- common/autotest_common.sh@644 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:09:30.971 11:40:00 spdk_dd.spdk_dd_uring.dd_uring_copy -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:09:30.971 11:40:00 spdk_dd.spdk_dd_uring.dd_uring_copy -- common/autotest_common.sh@646 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:09:30.971 11:40:00 spdk_dd.spdk_dd_uring.dd_uring_copy -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:09:30.971 11:40:00 spdk_dd.spdk_dd_uring.dd_uring_copy -- common/autotest_common.sh@646 -- # 
arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:09:30.971 11:40:00 spdk_dd.spdk_dd_uring.dd_uring_copy -- common/autotest_common.sh@646 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:09:30.971 11:40:00 spdk_dd.spdk_dd_uring.dd_uring_copy -- common/autotest_common.sh@655 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=uring0 --of=/dev/fd/62 --json /dev/fd/61 00:09:30.971 [2024-11-28 11:40:00.974502] Starting SPDK v25.01-pre git sha1 35cd3e84d / DPDK 24.11.0-rc4 initialization... 00:09:30.971 [2024-11-28 11:40:00.974619] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid75444 ] 00:09:30.971 { 00:09:30.971 "subsystems": [ 00:09:30.971 { 00:09:30.971 "subsystem": "bdev", 00:09:30.971 "config": [ 00:09:30.971 { 00:09:30.971 "params": { 00:09:30.971 "block_size": 512, 00:09:30.971 "num_blocks": 1048576, 00:09:30.971 "name": "malloc0" 00:09:30.971 }, 00:09:30.971 "method": "bdev_malloc_create" 00:09:30.971 }, 00:09:30.971 { 00:09:30.971 "params": { 00:09:30.971 "filename": "/dev/zram1", 00:09:30.971 "name": "uring0" 00:09:30.971 }, 00:09:30.971 "method": "bdev_uring_create" 00:09:30.971 }, 00:09:30.971 { 00:09:30.971 "params": { 00:09:30.971 "name": "uring0" 00:09:30.971 }, 00:09:30.971 "method": "bdev_uring_delete" 00:09:30.971 }, 00:09:30.971 { 00:09:30.971 "method": "bdev_wait_for_examine" 00:09:30.971 } 00:09:30.971 ] 00:09:30.971 } 00:09:30.971 ] 00:09:30.971 } 00:09:31.230 [2024-11-28 11:40:01.101708] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.11.0-rc4 is used. There is no support for it in SPDK. Enabled only for validation. 
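[editor's note] The valid_exec_arg/NOT sequence above only checks that the spdk_dd binary is executable and then expects the copy from the just-deleted uring0 bdev to fail. A stripped-down equivalent (hypothetical file names; the real NOT helper in autotest_common.sh additionally maps the exit code, as seen below with es=237):
    not() { "$@" && return 1 || return 0; }              # invert the status: pass only when the wrapped command fails
    not ./build/bin/spdk_dd --ib=uring0 --of=/dev/null --json cfg.json   # uring0 was deleted, so spdk_dd must error out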
00:09:31.230 [2024-11-28 11:40:01.130710] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:31.230 [2024-11-28 11:40:01.184405] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:09:31.230 [2024-11-28 11:40:01.245018] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:09:31.490 [2024-11-28 11:40:01.463935] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: uring0 00:09:31.490 [2024-11-28 11:40:01.464003] spdk_dd.c: 933:dd_open_bdev: *ERROR*: Could not open bdev uring0: No such device 00:09:31.490 [2024-11-28 11:40:01.464057] spdk_dd.c:1090:dd_run: *ERROR*: uring0: No such device 00:09:31.490 [2024-11-28 11:40:01.464083] app.c:1064:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:09:31.749 [2024-11-28 11:40:01.809271] spdk_dd.c:1536:main: *ERROR*: Error occurred while performing copy 00:09:31.749 11:40:01 spdk_dd.spdk_dd_uring.dd_uring_copy -- common/autotest_common.sh@655 -- # es=237 00:09:31.749 11:40:01 spdk_dd.spdk_dd_uring.dd_uring_copy -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:09:31.749 11:40:01 spdk_dd.spdk_dd_uring.dd_uring_copy -- common/autotest_common.sh@664 -- # es=109 00:09:31.749 11:40:01 spdk_dd.spdk_dd_uring.dd_uring_copy -- common/autotest_common.sh@665 -- # case "$es" in 00:09:31.749 11:40:01 spdk_dd.spdk_dd_uring.dd_uring_copy -- common/autotest_common.sh@672 -- # es=1 00:09:31.749 11:40:01 spdk_dd.spdk_dd_uring.dd_uring_copy -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:09:31.749 11:40:01 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@99 -- # remove_zram_dev 1 00:09:31.749 11:40:01 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/common.sh@168 -- # local id=1 00:09:31.749 11:40:01 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/common.sh@170 -- # [[ -e /sys/block/zram1 ]] 00:09:31.749 11:40:01 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/common.sh@172 -- # echo 1 00:09:32.007 11:40:01 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/common.sh@173 -- # echo 1 00:09:32.007 11:40:01 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@100 -- # rm -f /home/vagrant/spdk_repo/spdk/test/dd/magic.dump0 /home/vagrant/spdk_repo/spdk/test/dd/magic.dump1 00:09:32.265 00:09:32.265 real 0m16.143s 00:09:32.265 user 0m10.826s 00:09:32.265 sys 0m13.827s 00:09:32.265 11:40:02 spdk_dd.spdk_dd_uring.dd_uring_copy -- common/autotest_common.sh@1130 -- # xtrace_disable 00:09:32.265 11:40:02 spdk_dd.spdk_dd_uring.dd_uring_copy -- common/autotest_common.sh@10 -- # set +x 00:09:32.265 ************************************ 00:09:32.265 END TEST dd_uring_copy 00:09:32.265 ************************************ 00:09:32.265 00:09:32.265 real 0m16.424s 00:09:32.265 user 0m10.999s 00:09:32.265 sys 0m13.941s 00:09:32.265 11:40:02 spdk_dd.spdk_dd_uring -- common/autotest_common.sh@1130 -- # xtrace_disable 00:09:32.265 11:40:02 spdk_dd.spdk_dd_uring -- common/autotest_common.sh@10 -- # set +x 00:09:32.265 ************************************ 00:09:32.265 END TEST spdk_dd_uring 00:09:32.265 ************************************ 00:09:32.265 11:40:02 spdk_dd -- dd/dd.sh@27 -- # run_test spdk_dd_sparse /home/vagrant/spdk_repo/spdk/test/dd/sparse.sh 00:09:32.265 11:40:02 spdk_dd -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:09:32.265 11:40:02 spdk_dd -- common/autotest_common.sh@1111 -- # xtrace_disable 00:09:32.265 11:40:02 spdk_dd -- common/autotest_common.sh@10 -- # set +x 00:09:32.265 ************************************ 00:09:32.265 START TEST spdk_dd_sparse 00:09:32.265 
************************************ 00:09:32.265 11:40:02 spdk_dd.spdk_dd_sparse -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/dd/sparse.sh 00:09:32.265 * Looking for test storage... 00:09:32.265 * Found test storage at /home/vagrant/spdk_repo/spdk/test/dd 00:09:32.265 11:40:02 spdk_dd.spdk_dd_sparse -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:09:32.525 11:40:02 spdk_dd.spdk_dd_sparse -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:09:32.525 11:40:02 spdk_dd.spdk_dd_sparse -- common/autotest_common.sh@1693 -- # lcov --version 00:09:32.525 11:40:02 spdk_dd.spdk_dd_sparse -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:09:32.525 11:40:02 spdk_dd.spdk_dd_sparse -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:09:32.525 11:40:02 spdk_dd.spdk_dd_sparse -- scripts/common.sh@333 -- # local ver1 ver1_l 00:09:32.525 11:40:02 spdk_dd.spdk_dd_sparse -- scripts/common.sh@334 -- # local ver2 ver2_l 00:09:32.525 11:40:02 spdk_dd.spdk_dd_sparse -- scripts/common.sh@336 -- # IFS=.-: 00:09:32.525 11:40:02 spdk_dd.spdk_dd_sparse -- scripts/common.sh@336 -- # read -ra ver1 00:09:32.525 11:40:02 spdk_dd.spdk_dd_sparse -- scripts/common.sh@337 -- # IFS=.-: 00:09:32.525 11:40:02 spdk_dd.spdk_dd_sparse -- scripts/common.sh@337 -- # read -ra ver2 00:09:32.525 11:40:02 spdk_dd.spdk_dd_sparse -- scripts/common.sh@338 -- # local 'op=<' 00:09:32.525 11:40:02 spdk_dd.spdk_dd_sparse -- scripts/common.sh@340 -- # ver1_l=2 00:09:32.525 11:40:02 spdk_dd.spdk_dd_sparse -- scripts/common.sh@341 -- # ver2_l=1 00:09:32.525 11:40:02 spdk_dd.spdk_dd_sparse -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:09:32.525 11:40:02 spdk_dd.spdk_dd_sparse -- scripts/common.sh@344 -- # case "$op" in 00:09:32.525 11:40:02 spdk_dd.spdk_dd_sparse -- scripts/common.sh@345 -- # : 1 00:09:32.525 11:40:02 spdk_dd.spdk_dd_sparse -- scripts/common.sh@364 -- # (( v = 0 )) 00:09:32.525 11:40:02 spdk_dd.spdk_dd_sparse -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:09:32.525 11:40:02 spdk_dd.spdk_dd_sparse -- scripts/common.sh@365 -- # decimal 1 00:09:32.525 11:40:02 spdk_dd.spdk_dd_sparse -- scripts/common.sh@353 -- # local d=1 00:09:32.525 11:40:02 spdk_dd.spdk_dd_sparse -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:09:32.525 11:40:02 spdk_dd.spdk_dd_sparse -- scripts/common.sh@355 -- # echo 1 00:09:32.525 11:40:02 spdk_dd.spdk_dd_sparse -- scripts/common.sh@365 -- # ver1[v]=1 00:09:32.525 11:40:02 spdk_dd.spdk_dd_sparse -- scripts/common.sh@366 -- # decimal 2 00:09:32.525 11:40:02 spdk_dd.spdk_dd_sparse -- scripts/common.sh@353 -- # local d=2 00:09:32.525 11:40:02 spdk_dd.spdk_dd_sparse -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:09:32.525 11:40:02 spdk_dd.spdk_dd_sparse -- scripts/common.sh@355 -- # echo 2 00:09:32.525 11:40:02 spdk_dd.spdk_dd_sparse -- scripts/common.sh@366 -- # ver2[v]=2 00:09:32.525 11:40:02 spdk_dd.spdk_dd_sparse -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:09:32.525 11:40:02 spdk_dd.spdk_dd_sparse -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:09:32.525 11:40:02 spdk_dd.spdk_dd_sparse -- scripts/common.sh@368 -- # return 0 00:09:32.525 11:40:02 spdk_dd.spdk_dd_sparse -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:09:32.525 11:40:02 spdk_dd.spdk_dd_sparse -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:09:32.525 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:32.525 --rc genhtml_branch_coverage=1 00:09:32.525 --rc genhtml_function_coverage=1 00:09:32.525 --rc genhtml_legend=1 00:09:32.525 --rc geninfo_all_blocks=1 00:09:32.525 --rc geninfo_unexecuted_blocks=1 00:09:32.525 00:09:32.525 ' 00:09:32.525 11:40:02 spdk_dd.spdk_dd_sparse -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:09:32.525 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:32.525 --rc genhtml_branch_coverage=1 00:09:32.525 --rc genhtml_function_coverage=1 00:09:32.525 --rc genhtml_legend=1 00:09:32.525 --rc geninfo_all_blocks=1 00:09:32.525 --rc geninfo_unexecuted_blocks=1 00:09:32.525 00:09:32.525 ' 00:09:32.525 11:40:02 spdk_dd.spdk_dd_sparse -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:09:32.525 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:32.525 --rc genhtml_branch_coverage=1 00:09:32.525 --rc genhtml_function_coverage=1 00:09:32.525 --rc genhtml_legend=1 00:09:32.525 --rc geninfo_all_blocks=1 00:09:32.525 --rc geninfo_unexecuted_blocks=1 00:09:32.525 00:09:32.525 ' 00:09:32.525 11:40:02 spdk_dd.spdk_dd_sparse -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:09:32.525 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:32.525 --rc genhtml_branch_coverage=1 00:09:32.525 --rc genhtml_function_coverage=1 00:09:32.525 --rc genhtml_legend=1 00:09:32.525 --rc geninfo_all_blocks=1 00:09:32.525 --rc geninfo_unexecuted_blocks=1 00:09:32.525 00:09:32.525 ' 00:09:32.525 11:40:02 spdk_dd.spdk_dd_sparse -- dd/common.sh@7 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:09:32.525 11:40:02 spdk_dd.spdk_dd_sparse -- scripts/common.sh@15 -- # shopt -s extglob 00:09:32.525 11:40:02 spdk_dd.spdk_dd_sparse -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:09:32.525 11:40:02 spdk_dd.spdk_dd_sparse -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:09:32.525 11:40:02 spdk_dd.spdk_dd_sparse -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:09:32.525 11:40:02 
spdk_dd.spdk_dd_sparse -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:32.525 11:40:02 spdk_dd.spdk_dd_sparse -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:32.525 11:40:02 spdk_dd.spdk_dd_sparse -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:32.525 11:40:02 spdk_dd.spdk_dd_sparse -- paths/export.sh@5 -- # export PATH 00:09:32.525 11:40:02 spdk_dd.spdk_dd_sparse -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:32.525 11:40:02 spdk_dd.spdk_dd_sparse -- dd/sparse.sh@108 -- # aio_disk=dd_sparse_aio_disk 00:09:32.525 11:40:02 spdk_dd.spdk_dd_sparse -- dd/sparse.sh@109 -- # aio_bdev=dd_aio 00:09:32.525 11:40:02 spdk_dd.spdk_dd_sparse -- dd/sparse.sh@110 -- # file1=file_zero1 00:09:32.525 11:40:02 spdk_dd.spdk_dd_sparse -- dd/sparse.sh@111 -- # file2=file_zero2 00:09:32.525 11:40:02 spdk_dd.spdk_dd_sparse -- dd/sparse.sh@112 -- # file3=file_zero3 00:09:32.525 11:40:02 spdk_dd.spdk_dd_sparse -- dd/sparse.sh@113 -- # lvstore=dd_lvstore 00:09:32.525 11:40:02 spdk_dd.spdk_dd_sparse -- dd/sparse.sh@114 -- # lvol=dd_lvol 00:09:32.525 11:40:02 spdk_dd.spdk_dd_sparse -- dd/sparse.sh@116 -- # trap cleanup EXIT 00:09:32.525 11:40:02 spdk_dd.spdk_dd_sparse -- dd/sparse.sh@118 -- # prepare 00:09:32.525 11:40:02 spdk_dd.spdk_dd_sparse -- dd/sparse.sh@18 -- # truncate dd_sparse_aio_disk --size 104857600 00:09:32.525 11:40:02 spdk_dd.spdk_dd_sparse -- dd/sparse.sh@20 -- # dd if=/dev/zero of=file_zero1 bs=4M count=1 00:09:32.525 1+0 records in 00:09:32.525 1+0 records out 00:09:32.525 4194304 bytes (4.2 MB, 
4.0 MiB) copied, 0.00749672 s, 559 MB/s 00:09:32.525 11:40:02 spdk_dd.spdk_dd_sparse -- dd/sparse.sh@21 -- # dd if=/dev/zero of=file_zero1 bs=4M count=1 seek=4 00:09:32.525 1+0 records in 00:09:32.525 1+0 records out 00:09:32.525 4194304 bytes (4.2 MB, 4.0 MiB) copied, 0.00661914 s, 634 MB/s 00:09:32.525 11:40:02 spdk_dd.spdk_dd_sparse -- dd/sparse.sh@22 -- # dd if=/dev/zero of=file_zero1 bs=4M count=1 seek=8 00:09:32.525 1+0 records in 00:09:32.525 1+0 records out 00:09:32.525 4194304 bytes (4.2 MB, 4.0 MiB) copied, 0.00566608 s, 740 MB/s 00:09:32.525 11:40:02 spdk_dd.spdk_dd_sparse -- dd/sparse.sh@120 -- # run_test dd_sparse_file_to_file file_to_file 00:09:32.525 11:40:02 spdk_dd.spdk_dd_sparse -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:09:32.525 11:40:02 spdk_dd.spdk_dd_sparse -- common/autotest_common.sh@1111 -- # xtrace_disable 00:09:32.525 11:40:02 spdk_dd.spdk_dd_sparse -- common/autotest_common.sh@10 -- # set +x 00:09:32.525 ************************************ 00:09:32.525 START TEST dd_sparse_file_to_file 00:09:32.525 ************************************ 00:09:32.525 11:40:02 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_file -- common/autotest_common.sh@1129 -- # file_to_file 00:09:32.525 11:40:02 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_file -- dd/sparse.sh@26 -- # local stat1_s stat1_b 00:09:32.526 11:40:02 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_file -- dd/sparse.sh@27 -- # local stat2_s stat2_b 00:09:32.526 11:40:02 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_file -- dd/sparse.sh@29 -- # method_bdev_aio_create_0=(['filename']='dd_sparse_aio_disk' ['name']='dd_aio' ['block_size']='4096') 00:09:32.526 11:40:02 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_file -- dd/sparse.sh@29 -- # local -A method_bdev_aio_create_0 00:09:32.526 11:40:02 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_file -- dd/sparse.sh@35 -- # method_bdev_lvol_create_lvstore_1=(['bdev_name']='dd_aio' ['lvs_name']='dd_lvstore') 00:09:32.526 11:40:02 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_file -- dd/sparse.sh@35 -- # local -A method_bdev_lvol_create_lvstore_1 00:09:32.526 11:40:02 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_file -- dd/sparse.sh@41 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=file_zero1 --of=file_zero2 --bs=12582912 --sparse --json /dev/fd/62 00:09:32.526 11:40:02 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_file -- dd/sparse.sh@41 -- # gen_conf 00:09:32.526 11:40:02 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_file -- dd/common.sh@31 -- # xtrace_disable 00:09:32.526 11:40:02 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_file -- common/autotest_common.sh@10 -- # set +x 00:09:32.526 [2024-11-28 11:40:02.584678] Starting SPDK v25.01-pre git sha1 35cd3e84d / DPDK 24.11.0-rc4 initialization... 
00:09:32.526 [2024-11-28 11:40:02.584829] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid75549 ] 00:09:32.526 { 00:09:32.526 "subsystems": [ 00:09:32.526 { 00:09:32.526 "subsystem": "bdev", 00:09:32.526 "config": [ 00:09:32.526 { 00:09:32.526 "params": { 00:09:32.526 "block_size": 4096, 00:09:32.526 "filename": "dd_sparse_aio_disk", 00:09:32.526 "name": "dd_aio" 00:09:32.526 }, 00:09:32.526 "method": "bdev_aio_create" 00:09:32.526 }, 00:09:32.526 { 00:09:32.526 "params": { 00:09:32.526 "lvs_name": "dd_lvstore", 00:09:32.526 "bdev_name": "dd_aio" 00:09:32.526 }, 00:09:32.526 "method": "bdev_lvol_create_lvstore" 00:09:32.526 }, 00:09:32.526 { 00:09:32.526 "method": "bdev_wait_for_examine" 00:09:32.526 } 00:09:32.526 ] 00:09:32.526 } 00:09:32.526 ] 00:09:32.526 } 00:09:32.785 [2024-11-28 11:40:02.711532] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.11.0-rc4 is used. There is no support for it in SPDK. Enabled only for validation. 00:09:32.786 [2024-11-28 11:40:02.736230] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:32.786 [2024-11-28 11:40:02.789623] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:09:32.786 [2024-11-28 11:40:02.848734] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:09:33.044  [2024-11-28T11:40:03.170Z] Copying: 12/36 [MB] (average 800 MBps) 00:09:33.044 00:09:33.303 11:40:03 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_file -- dd/sparse.sh@47 -- # stat --printf=%s file_zero1 00:09:33.303 11:40:03 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_file -- dd/sparse.sh@47 -- # stat1_s=37748736 00:09:33.303 11:40:03 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_file -- dd/sparse.sh@48 -- # stat --printf=%s file_zero2 00:09:33.303 11:40:03 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_file -- dd/sparse.sh@48 -- # stat2_s=37748736 00:09:33.303 11:40:03 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_file -- dd/sparse.sh@50 -- # [[ 37748736 == \3\7\7\4\8\7\3\6 ]] 00:09:33.303 11:40:03 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_file -- dd/sparse.sh@52 -- # stat --printf=%b file_zero1 00:09:33.303 11:40:03 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_file -- dd/sparse.sh@52 -- # stat1_b=24576 00:09:33.303 11:40:03 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_file -- dd/sparse.sh@53 -- # stat --printf=%b file_zero2 00:09:33.303 11:40:03 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_file -- dd/sparse.sh@53 -- # stat2_b=24576 00:09:33.303 11:40:03 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_file -- dd/sparse.sh@55 -- # [[ 24576 == \2\4\5\7\6 ]] 00:09:33.303 00:09:33.303 real 0m0.667s 00:09:33.303 user 0m0.398s 00:09:33.303 sys 0m0.368s 00:09:33.303 11:40:03 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_file -- common/autotest_common.sh@1130 -- # xtrace_disable 00:09:33.303 11:40:03 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_file -- common/autotest_common.sh@10 -- # set +x 00:09:33.303 ************************************ 00:09:33.303 END TEST dd_sparse_file_to_file 00:09:33.303 ************************************ 00:09:33.303 11:40:03 spdk_dd.spdk_dd_sparse -- dd/sparse.sh@121 -- # run_test dd_sparse_file_to_bdev file_to_bdev 00:09:33.303 11:40:03 spdk_dd.spdk_dd_sparse -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:09:33.303 11:40:03 spdk_dd.spdk_dd_sparse -- common/autotest_common.sh@1111 -- # xtrace_disable 
00:09:33.304 11:40:03 spdk_dd.spdk_dd_sparse -- common/autotest_common.sh@10 -- # set +x 00:09:33.304 ************************************ 00:09:33.304 START TEST dd_sparse_file_to_bdev 00:09:33.304 ************************************ 00:09:33.304 11:40:03 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_bdev -- common/autotest_common.sh@1129 -- # file_to_bdev 00:09:33.304 11:40:03 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_bdev -- dd/sparse.sh@59 -- # method_bdev_aio_create_0=(['filename']='dd_sparse_aio_disk' ['name']='dd_aio' ['block_size']='4096') 00:09:33.304 11:40:03 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_bdev -- dd/sparse.sh@59 -- # local -A method_bdev_aio_create_0 00:09:33.304 11:40:03 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_bdev -- dd/sparse.sh@65 -- # method_bdev_lvol_create_1=(['lvs_name']='dd_lvstore' ['lvol_name']='dd_lvol' ['size_in_mib']='36' ['thin_provision']='true') 00:09:33.304 11:40:03 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_bdev -- dd/sparse.sh@65 -- # local -A method_bdev_lvol_create_1 00:09:33.304 11:40:03 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_bdev -- dd/sparse.sh@73 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=file_zero2 --ob=dd_lvstore/dd_lvol --bs=12582912 --sparse --json /dev/fd/62 00:09:33.304 11:40:03 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_bdev -- dd/sparse.sh@73 -- # gen_conf 00:09:33.304 11:40:03 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_bdev -- dd/common.sh@31 -- # xtrace_disable 00:09:33.304 11:40:03 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_bdev -- common/autotest_common.sh@10 -- # set +x 00:09:33.304 [2024-11-28 11:40:03.295441] Starting SPDK v25.01-pre git sha1 35cd3e84d / DPDK 24.11.0-rc4 initialization... 00:09:33.304 [2024-11-28 11:40:03.295558] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid75593 ] 00:09:33.304 { 00:09:33.304 "subsystems": [ 00:09:33.304 { 00:09:33.304 "subsystem": "bdev", 00:09:33.304 "config": [ 00:09:33.304 { 00:09:33.304 "params": { 00:09:33.304 "block_size": 4096, 00:09:33.304 "filename": "dd_sparse_aio_disk", 00:09:33.304 "name": "dd_aio" 00:09:33.304 }, 00:09:33.304 "method": "bdev_aio_create" 00:09:33.304 }, 00:09:33.304 { 00:09:33.304 "params": { 00:09:33.304 "lvs_name": "dd_lvstore", 00:09:33.304 "lvol_name": "dd_lvol", 00:09:33.304 "size_in_mib": 36, 00:09:33.304 "thin_provision": true 00:09:33.304 }, 00:09:33.304 "method": "bdev_lvol_create" 00:09:33.304 }, 00:09:33.304 { 00:09:33.304 "method": "bdev_wait_for_examine" 00:09:33.304 } 00:09:33.304 ] 00:09:33.304 } 00:09:33.304 ] 00:09:33.304 } 00:09:33.304 [2024-11-28 11:40:03.421403] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.11.0-rc4 is used. There is no support for it in SPDK. Enabled only for validation. 
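[editor's note] The sparse tests compare each file's apparent size (stat %s) with its allocated 512-byte blocks (stat %b) to prove the holes survived the copy. The preparation and the check can be reproduced with plain coreutils, using the sizes shown in this log:
    truncate --size 104857600 dd_sparse_aio_disk          # 100 MiB backing file for the dd_aio bdev
    dd if=/dev/zero of=file_zero1 bs=4M count=1           # 4 MiB of data at offset 0
    dd if=/dev/zero of=file_zero1 bs=4M count=1 seek=4    # 4 MiB at 16 MiB
    dd if=/dev/zero of=file_zero1 bs=4M count=1 seek=8    # 4 MiB at 32 MiB
    stat --printf='%s %b\n' file_zero1                    # expect 37748736 bytes apparent but only 24576 allocated blocks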
00:09:33.563 [2024-11-28 11:40:03.450075] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:33.564 [2024-11-28 11:40:03.498860] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:09:33.564 [2024-11-28 11:40:03.558304] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:09:33.564  [2024-11-28T11:40:03.949Z] Copying: 12/36 [MB] (average 461 MBps) 00:09:33.823 00:09:33.823 00:09:33.823 real 0m0.635s 00:09:33.823 user 0m0.390s 00:09:33.823 sys 0m0.366s 00:09:33.823 11:40:03 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_bdev -- common/autotest_common.sh@1130 -- # xtrace_disable 00:09:33.823 11:40:03 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_bdev -- common/autotest_common.sh@10 -- # set +x 00:09:33.823 ************************************ 00:09:33.823 END TEST dd_sparse_file_to_bdev 00:09:33.823 ************************************ 00:09:33.823 11:40:03 spdk_dd.spdk_dd_sparse -- dd/sparse.sh@122 -- # run_test dd_sparse_bdev_to_file bdev_to_file 00:09:33.823 11:40:03 spdk_dd.spdk_dd_sparse -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:09:33.823 11:40:03 spdk_dd.spdk_dd_sparse -- common/autotest_common.sh@1111 -- # xtrace_disable 00:09:33.823 11:40:03 spdk_dd.spdk_dd_sparse -- common/autotest_common.sh@10 -- # set +x 00:09:33.823 ************************************ 00:09:33.823 START TEST dd_sparse_bdev_to_file 00:09:33.823 ************************************ 00:09:33.823 11:40:03 spdk_dd.spdk_dd_sparse.dd_sparse_bdev_to_file -- common/autotest_common.sh@1129 -- # bdev_to_file 00:09:33.823 11:40:03 spdk_dd.spdk_dd_sparse.dd_sparse_bdev_to_file -- dd/sparse.sh@81 -- # local stat2_s stat2_b 00:09:33.823 11:40:03 spdk_dd.spdk_dd_sparse.dd_sparse_bdev_to_file -- dd/sparse.sh@82 -- # local stat3_s stat3_b 00:09:33.823 11:40:03 spdk_dd.spdk_dd_sparse.dd_sparse_bdev_to_file -- dd/sparse.sh@84 -- # method_bdev_aio_create_0=(['filename']='dd_sparse_aio_disk' ['name']='dd_aio' ['block_size']='4096') 00:09:33.823 11:40:03 spdk_dd.spdk_dd_sparse.dd_sparse_bdev_to_file -- dd/sparse.sh@84 -- # local -A method_bdev_aio_create_0 00:09:33.823 11:40:03 spdk_dd.spdk_dd_sparse.dd_sparse_bdev_to_file -- dd/sparse.sh@91 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=dd_lvstore/dd_lvol --of=file_zero3 --bs=12582912 --sparse --json /dev/fd/62 00:09:33.823 11:40:03 spdk_dd.spdk_dd_sparse.dd_sparse_bdev_to_file -- dd/sparse.sh@91 -- # gen_conf 00:09:33.823 11:40:03 spdk_dd.spdk_dd_sparse.dd_sparse_bdev_to_file -- dd/common.sh@31 -- # xtrace_disable 00:09:33.823 11:40:03 spdk_dd.spdk_dd_sparse.dd_sparse_bdev_to_file -- common/autotest_common.sh@10 -- # set +x 00:09:34.082 { 00:09:34.082 "subsystems": [ 00:09:34.082 { 00:09:34.082 "subsystem": "bdev", 00:09:34.082 "config": [ 00:09:34.082 { 00:09:34.082 "params": { 00:09:34.082 "block_size": 4096, 00:09:34.082 "filename": "dd_sparse_aio_disk", 00:09:34.082 "name": "dd_aio" 00:09:34.082 }, 00:09:34.082 "method": "bdev_aio_create" 00:09:34.082 }, 00:09:34.082 { 00:09:34.082 "method": "bdev_wait_for_examine" 00:09:34.082 } 00:09:34.082 ] 00:09:34.082 } 00:09:34.082 ] 00:09:34.082 } 00:09:34.082 [2024-11-28 11:40:03.984706] Starting SPDK v25.01-pre git sha1 35cd3e84d / DPDK 24.11.0-rc4 initialization... 
00:09:34.082 [2024-11-28 11:40:03.985358] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid75625 ] 00:09:34.082 [2024-11-28 11:40:04.112370] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.11.0-rc4 is used. There is no support for it in SPDK. Enabled only for validation. 00:09:34.082 [2024-11-28 11:40:04.140774] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:34.082 [2024-11-28 11:40:04.189517] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:09:34.341 [2024-11-28 11:40:04.251333] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:09:34.341  [2024-11-28T11:40:04.726Z] Copying: 12/36 [MB] (average 923 MBps) 00:09:34.600 00:09:34.600 11:40:04 spdk_dd.spdk_dd_sparse.dd_sparse_bdev_to_file -- dd/sparse.sh@97 -- # stat --printf=%s file_zero2 00:09:34.600 11:40:04 spdk_dd.spdk_dd_sparse.dd_sparse_bdev_to_file -- dd/sparse.sh@97 -- # stat2_s=37748736 00:09:34.600 11:40:04 spdk_dd.spdk_dd_sparse.dd_sparse_bdev_to_file -- dd/sparse.sh@98 -- # stat --printf=%s file_zero3 00:09:34.600 11:40:04 spdk_dd.spdk_dd_sparse.dd_sparse_bdev_to_file -- dd/sparse.sh@98 -- # stat3_s=37748736 00:09:34.601 11:40:04 spdk_dd.spdk_dd_sparse.dd_sparse_bdev_to_file -- dd/sparse.sh@100 -- # [[ 37748736 == \3\7\7\4\8\7\3\6 ]] 00:09:34.601 11:40:04 spdk_dd.spdk_dd_sparse.dd_sparse_bdev_to_file -- dd/sparse.sh@102 -- # stat --printf=%b file_zero2 00:09:34.601 11:40:04 spdk_dd.spdk_dd_sparse.dd_sparse_bdev_to_file -- dd/sparse.sh@102 -- # stat2_b=24576 00:09:34.601 11:40:04 spdk_dd.spdk_dd_sparse.dd_sparse_bdev_to_file -- dd/sparse.sh@103 -- # stat --printf=%b file_zero3 00:09:34.601 11:40:04 spdk_dd.spdk_dd_sparse.dd_sparse_bdev_to_file -- dd/sparse.sh@103 -- # stat3_b=24576 00:09:34.601 11:40:04 spdk_dd.spdk_dd_sparse.dd_sparse_bdev_to_file -- dd/sparse.sh@105 -- # [[ 24576 == \2\4\5\7\6 ]] 00:09:34.601 00:09:34.601 real 0m0.639s 00:09:34.601 user 0m0.383s 00:09:34.601 sys 0m0.370s 00:09:34.601 11:40:04 spdk_dd.spdk_dd_sparse.dd_sparse_bdev_to_file -- common/autotest_common.sh@1130 -- # xtrace_disable 00:09:34.601 ************************************ 00:09:34.601 END TEST dd_sparse_bdev_to_file 00:09:34.601 ************************************ 00:09:34.601 11:40:04 spdk_dd.spdk_dd_sparse.dd_sparse_bdev_to_file -- common/autotest_common.sh@10 -- # set +x 00:09:34.601 11:40:04 spdk_dd.spdk_dd_sparse -- dd/sparse.sh@1 -- # cleanup 00:09:34.601 11:40:04 spdk_dd.spdk_dd_sparse -- dd/sparse.sh@11 -- # rm dd_sparse_aio_disk 00:09:34.601 11:40:04 spdk_dd.spdk_dd_sparse -- dd/sparse.sh@12 -- # rm file_zero1 00:09:34.601 11:40:04 spdk_dd.spdk_dd_sparse -- dd/sparse.sh@13 -- # rm file_zero2 00:09:34.601 11:40:04 spdk_dd.spdk_dd_sparse -- dd/sparse.sh@14 -- # rm file_zero3 00:09:34.601 ************************************ 00:09:34.601 END TEST spdk_dd_sparse 00:09:34.601 ************************************ 00:09:34.601 00:09:34.601 real 0m2.326s 00:09:34.601 user 0m1.342s 00:09:34.601 sys 0m1.318s 00:09:34.601 11:40:04 spdk_dd.spdk_dd_sparse -- common/autotest_common.sh@1130 -- # xtrace_disable 00:09:34.601 11:40:04 spdk_dd.spdk_dd_sparse -- common/autotest_common.sh@10 -- # set +x 00:09:34.601 11:40:04 spdk_dd -- dd/dd.sh@28 -- # run_test spdk_dd_negative /home/vagrant/spdk_repo/spdk/test/dd/negative_dd.sh 00:09:34.601 11:40:04 
spdk_dd -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:09:34.601 11:40:04 spdk_dd -- common/autotest_common.sh@1111 -- # xtrace_disable 00:09:34.601 11:40:04 spdk_dd -- common/autotest_common.sh@10 -- # set +x 00:09:34.601 ************************************ 00:09:34.601 START TEST spdk_dd_negative 00:09:34.601 ************************************ 00:09:34.601 11:40:04 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/dd/negative_dd.sh 00:09:34.860 * Looking for test storage... 00:09:34.860 * Found test storage at /home/vagrant/spdk_repo/spdk/test/dd 00:09:34.860 11:40:04 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:09:34.860 11:40:04 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1693 -- # lcov --version 00:09:34.860 11:40:04 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:09:34.860 11:40:04 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:09:34.860 11:40:04 spdk_dd.spdk_dd_negative -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:09:34.860 11:40:04 spdk_dd.spdk_dd_negative -- scripts/common.sh@333 -- # local ver1 ver1_l 00:09:34.860 11:40:04 spdk_dd.spdk_dd_negative -- scripts/common.sh@334 -- # local ver2 ver2_l 00:09:34.860 11:40:04 spdk_dd.spdk_dd_negative -- scripts/common.sh@336 -- # IFS=.-: 00:09:34.860 11:40:04 spdk_dd.spdk_dd_negative -- scripts/common.sh@336 -- # read -ra ver1 00:09:34.860 11:40:04 spdk_dd.spdk_dd_negative -- scripts/common.sh@337 -- # IFS=.-: 00:09:34.860 11:40:04 spdk_dd.spdk_dd_negative -- scripts/common.sh@337 -- # read -ra ver2 00:09:34.860 11:40:04 spdk_dd.spdk_dd_negative -- scripts/common.sh@338 -- # local 'op=<' 00:09:34.860 11:40:04 spdk_dd.spdk_dd_negative -- scripts/common.sh@340 -- # ver1_l=2 00:09:34.860 11:40:04 spdk_dd.spdk_dd_negative -- scripts/common.sh@341 -- # ver2_l=1 00:09:34.860 11:40:04 spdk_dd.spdk_dd_negative -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:09:34.860 11:40:04 spdk_dd.spdk_dd_negative -- scripts/common.sh@344 -- # case "$op" in 00:09:34.860 11:40:04 spdk_dd.spdk_dd_negative -- scripts/common.sh@345 -- # : 1 00:09:34.860 11:40:04 spdk_dd.spdk_dd_negative -- scripts/common.sh@364 -- # (( v = 0 )) 00:09:34.860 11:40:04 spdk_dd.spdk_dd_negative -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:09:34.860 11:40:04 spdk_dd.spdk_dd_negative -- scripts/common.sh@365 -- # decimal 1 00:09:34.860 11:40:04 spdk_dd.spdk_dd_negative -- scripts/common.sh@353 -- # local d=1 00:09:34.860 11:40:04 spdk_dd.spdk_dd_negative -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:09:34.860 11:40:04 spdk_dd.spdk_dd_negative -- scripts/common.sh@355 -- # echo 1 00:09:34.860 11:40:04 spdk_dd.spdk_dd_negative -- scripts/common.sh@365 -- # ver1[v]=1 00:09:34.860 11:40:04 spdk_dd.spdk_dd_negative -- scripts/common.sh@366 -- # decimal 2 00:09:34.860 11:40:04 spdk_dd.spdk_dd_negative -- scripts/common.sh@353 -- # local d=2 00:09:34.860 11:40:04 spdk_dd.spdk_dd_negative -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:09:34.860 11:40:04 spdk_dd.spdk_dd_negative -- scripts/common.sh@355 -- # echo 2 00:09:34.860 11:40:04 spdk_dd.spdk_dd_negative -- scripts/common.sh@366 -- # ver2[v]=2 00:09:34.860 11:40:04 spdk_dd.spdk_dd_negative -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:09:34.860 11:40:04 spdk_dd.spdk_dd_negative -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:09:34.860 11:40:04 spdk_dd.spdk_dd_negative -- scripts/common.sh@368 -- # return 0 00:09:34.860 11:40:04 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:09:34.860 11:40:04 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:09:34.860 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:34.860 --rc genhtml_branch_coverage=1 00:09:34.860 --rc genhtml_function_coverage=1 00:09:34.860 --rc genhtml_legend=1 00:09:34.860 --rc geninfo_all_blocks=1 00:09:34.860 --rc geninfo_unexecuted_blocks=1 00:09:34.860 00:09:34.860 ' 00:09:34.860 11:40:04 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:09:34.860 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:34.860 --rc genhtml_branch_coverage=1 00:09:34.860 --rc genhtml_function_coverage=1 00:09:34.860 --rc genhtml_legend=1 00:09:34.860 --rc geninfo_all_blocks=1 00:09:34.860 --rc geninfo_unexecuted_blocks=1 00:09:34.860 00:09:34.860 ' 00:09:34.860 11:40:04 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:09:34.860 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:34.860 --rc genhtml_branch_coverage=1 00:09:34.860 --rc genhtml_function_coverage=1 00:09:34.860 --rc genhtml_legend=1 00:09:34.860 --rc geninfo_all_blocks=1 00:09:34.860 --rc geninfo_unexecuted_blocks=1 00:09:34.860 00:09:34.860 ' 00:09:34.860 11:40:04 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:09:34.860 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:34.860 --rc genhtml_branch_coverage=1 00:09:34.860 --rc genhtml_function_coverage=1 00:09:34.860 --rc genhtml_legend=1 00:09:34.860 --rc geninfo_all_blocks=1 00:09:34.860 --rc geninfo_unexecuted_blocks=1 00:09:34.860 00:09:34.860 ' 00:09:34.860 11:40:04 spdk_dd.spdk_dd_negative -- dd/common.sh@7 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:09:34.860 11:40:04 spdk_dd.spdk_dd_negative -- scripts/common.sh@15 -- # shopt -s extglob 00:09:34.860 11:40:04 spdk_dd.spdk_dd_negative -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:09:34.860 11:40:04 spdk_dd.spdk_dd_negative -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:09:34.860 11:40:04 spdk_dd.spdk_dd_negative -- scripts/common.sh@553 -- # source 
/etc/opt/spdk-pkgdep/paths/export.sh 00:09:34.860 11:40:04 spdk_dd.spdk_dd_negative -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:34.860 11:40:04 spdk_dd.spdk_dd_negative -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:34.861 11:40:04 spdk_dd.spdk_dd_negative -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:34.861 11:40:04 spdk_dd.spdk_dd_negative -- paths/export.sh@5 -- # export PATH 00:09:34.861 11:40:04 spdk_dd.spdk_dd_negative -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:34.861 11:40:04 spdk_dd.spdk_dd_negative -- dd/negative_dd.sh@210 -- # test_file0=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 00:09:34.861 11:40:04 spdk_dd.spdk_dd_negative -- dd/negative_dd.sh@211 -- # test_file1=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:09:34.861 11:40:04 spdk_dd.spdk_dd_negative -- dd/negative_dd.sh@213 -- # touch /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 00:09:34.861 11:40:04 spdk_dd.spdk_dd_negative -- dd/negative_dd.sh@214 -- # touch /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:09:34.861 11:40:04 spdk_dd.spdk_dd_negative -- dd/negative_dd.sh@216 -- # run_test dd_invalid_arguments invalid_arguments 00:09:34.861 11:40:04 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:09:34.861 11:40:04 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1111 -- # xtrace_disable 00:09:34.861 11:40:04 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@10 -- # set +x 00:09:34.861 ************************************ 00:09:34.861 START TEST 
dd_invalid_arguments 00:09:34.861 ************************************ 00:09:34.861 11:40:04 spdk_dd.spdk_dd_negative.dd_invalid_arguments -- common/autotest_common.sh@1129 -- # invalid_arguments 00:09:34.861 11:40:04 spdk_dd.spdk_dd_negative.dd_invalid_arguments -- dd/negative_dd.sh@12 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ii= --ob= 00:09:34.861 11:40:04 spdk_dd.spdk_dd_negative.dd_invalid_arguments -- common/autotest_common.sh@652 -- # local es=0 00:09:34.861 11:40:04 spdk_dd.spdk_dd_negative.dd_invalid_arguments -- common/autotest_common.sh@654 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ii= --ob= 00:09:34.861 11:40:04 spdk_dd.spdk_dd_negative.dd_invalid_arguments -- common/autotest_common.sh@640 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:09:34.861 11:40:04 spdk_dd.spdk_dd_negative.dd_invalid_arguments -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:09:34.861 11:40:04 spdk_dd.spdk_dd_negative.dd_invalid_arguments -- common/autotest_common.sh@644 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:09:34.861 11:40:04 spdk_dd.spdk_dd_negative.dd_invalid_arguments -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:09:34.861 11:40:04 spdk_dd.spdk_dd_negative.dd_invalid_arguments -- common/autotest_common.sh@646 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:09:34.861 11:40:04 spdk_dd.spdk_dd_negative.dd_invalid_arguments -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:09:34.861 11:40:04 spdk_dd.spdk_dd_negative.dd_invalid_arguments -- common/autotest_common.sh@646 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:09:34.861 11:40:04 spdk_dd.spdk_dd_negative.dd_invalid_arguments -- common/autotest_common.sh@646 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:09:34.861 11:40:04 spdk_dd.spdk_dd_negative.dd_invalid_arguments -- common/autotest_common.sh@655 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ii= --ob= 00:09:35.180 /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd [options] 00:09:35.180 00:09:35.180 CPU options: 00:09:35.180 -m, --cpumask core mask (like 0xF) or core list of '[]' embraced for DPDK 00:09:35.180 (like [0,1,10]) 00:09:35.180 --lcores lcore to CPU mapping list. The list is in the format: 00:09:35.180 [<,lcores[@CPUs]>...] 00:09:35.180 lcores and cpus list are grouped by '(' and ')', e.g '--lcores "(5-7)@(10-12)"' 00:09:35.180 Within the group, '-' is used for range separator, 00:09:35.180 ',' is used for single number separator. 00:09:35.180 '( )' can be omitted for single element group, 00:09:35.180 '@' can be omitted if cpus and lcores have the same value 00:09:35.180 --disable-cpumask-locks Disable CPU core lock files. 00:09:35.180 --interrupt-mode set app to interrupt mode (Warning: CPU usage will be reduced only if all 00:09:35.180 pollers in the app support interrupt mode) 00:09:35.180 -p, --main-core main (primary) core for DPDK 00:09:35.180 00:09:35.180 Configuration options: 00:09:35.180 -c, --config, --json JSON config file 00:09:35.180 -r, --rpc-socket RPC listen address (default /var/tmp/spdk.sock) 00:09:35.180 --no-rpc-server skip RPC server initialization. This option ignores '--rpc-socket' value. 
00:09:35.180 --wait-for-rpc wait for RPCs to initialize subsystems 00:09:35.180 --rpcs-allowed comma-separated list of permitted RPCS 00:09:35.180 --json-ignore-init-errors don't exit on invalid config entry 00:09:35.180 00:09:35.180 Memory options: 00:09:35.180 --iova-mode set IOVA mode ('pa' for IOVA_PA and 'va' for IOVA_VA) 00:09:35.180 --base-virtaddr the base virtual address for DPDK (default: 0x200000000000) 00:09:35.180 --huge-dir use a specific hugetlbfs mount to reserve memory from 00:09:35.180 -R, --huge-unlink unlink huge files after initialization 00:09:35.180 -n, --mem-channels number of memory channels used for DPDK 00:09:35.180 -s, --mem-size memory size in MB for DPDK (default: 0MB) 00:09:35.180 --msg-mempool-size global message memory pool size in count (default: 262143) 00:09:35.180 --no-huge run without using hugepages 00:09:35.180 --enforce-numa enforce NUMA allocations from the specified NUMA node 00:09:35.180 -i, --shm-id shared memory ID (optional) 00:09:35.180 -g, --single-file-segments force creating just one hugetlbfs file 00:09:35.180 00:09:35.180 PCI options: 00:09:35.180 -A, --pci-allowed pci addr to allow (-B and -A cannot be used at the same time) 00:09:35.180 -B, --pci-blocked pci addr to block (can be used more than once) 00:09:35.180 -u, --no-pci disable PCI access 00:09:35.180 --vfio-vf-token VF token (UUID) shared between SR-IOV PF and VFs for vfio_pci driver 00:09:35.180 00:09:35.180 Log options: 00:09:35.180 -L, --logflag enable log flag (all, accel, accel_dsa, accel_iaa, accel_ioat, aio, 00:09:35.180 app_config, app_rpc, bdev, bdev_concat, bdev_ftl, bdev_malloc, 00:09:35.180 bdev_null, bdev_nvme, bdev_raid, bdev_raid0, bdev_raid1, bdev_raid_sb, 00:09:35.180 blob, blob_esnap, blob_rw, blobfs, blobfs_bdev, blobfs_bdev_rpc, 00:09:35.180 blobfs_rw, fsdev, fsdev_aio, ftl_core, ftl_init, gpt_parse, idxd, ioat, 00:09:35.180 iscsi_init, json_util, keyring, log_rpc, lvol, lvol_rpc, notify_rpc, 00:09:35.180 nvme, nvme_auth, nvme_cuse, opal, reactor, rpc, rpc_client, sock, 00:09:35.180 sock_posix, spdk_aio_mgr_io, thread, trace, uring, vbdev_delay, 00:09:35.180 vbdev_gpt, vbdev_lvol, vbdev_opal, vbdev_passthru, vbdev_split, 00:09:35.180 vbdev_zone_block, vfio_pci, vfio_user, virtio, virtio_blk, virtio_dev, 00:09:35.180 virtio_pci, virtio_user, virtio_vfio_user, vmd) 00:09:35.180 --silence-noticelog disable notice level logging to stderr 00:09:35.180 00:09:35.180 Trace options: 00:09:35.180 --num-trace-entries number of trace entries for each core, must be power of 2, 00:09:35.180 setting 0 to disable trace (default 32768) 00:09:35.180 Tracepoints vary in size and can use more than one trace entry. 00:09:35.180 -e, --tpoint-group [:] 00:09:35.180 /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd: unrecognized option '--ii=' 00:09:35.180 [2024-11-28 11:40:05.003114] spdk_dd.c:1480:main: *ERROR*: Invalid arguments 00:09:35.180 group_name - tracepoint group name for spdk trace buffers (bdev, ftl, 00:09:35.180 blobfs, dsa, thread, nvme_pcie, iaa, nvme_tcp, bdev_nvme, sock, blob, 00:09:35.180 bdev_raid, scheduler, all). 00:09:35.180 tpoint_mask - tracepoint mask for enabling individual tpoints inside 00:09:35.180 a tracepoint group. First tpoint inside a group can be enabled by 00:09:35.180 setting tpoint_mask to 1 (e.g. bdev:0x1). Groups and masks can be 00:09:35.180 combined (e.g. thread,bdev:0x1). 
All available tpoints can be found 00:09:35.180 in /include/spdk_internal/trace_defs.h 00:09:35.180 00:09:35.180 Other options: 00:09:35.180 -h, --help show this usage 00:09:35.180 -v, --version print SPDK version 00:09:35.180 -d, --limit-coredump do not set max coredump size to RLIM_INFINITY 00:09:35.180 --env-context Opaque context for use of the env implementation 00:09:35.180 00:09:35.180 Application specific: 00:09:35.180 [--------- DD Options ---------] 00:09:35.180 --if Input file. Must specify either --if or --ib. 00:09:35.180 --ib Input bdev. Must specifier either --if or --ib 00:09:35.180 --of Output file. Must specify either --of or --ob. 00:09:35.180 --ob Output bdev. Must specify either --of or --ob. 00:09:35.180 --iflag Input file flags. 00:09:35.180 --oflag Output file flags. 00:09:35.180 --bs I/O unit size (default: 4096) 00:09:35.180 --qd Queue depth (default: 2) 00:09:35.180 --count I/O unit count. The number of I/O units to copy. (default: all) 00:09:35.180 --skip Skip this many I/O units at start of input. (default: 0) 00:09:35.180 --seek Skip this many I/O units at start of output. (default: 0) 00:09:35.180 --aio Force usage of AIO. (by default io_uring is used if available) 00:09:35.180 --sparse Enable hole skipping in input target 00:09:35.180 Available iflag and oflag values: 00:09:35.180 append - append mode 00:09:35.180 direct - use direct I/O for data 00:09:35.180 directory - fail unless a directory 00:09:35.180 dsync - use synchronized I/O for data 00:09:35.180 noatime - do not update access time 00:09:35.180 noctty - do not assign controlling terminal from file 00:09:35.180 nofollow - do not follow symlinks 00:09:35.180 nonblock - use non-blocking I/O 00:09:35.180 sync - use synchronized I/O for data and metadata 00:09:35.180 ************************************ 00:09:35.180 END TEST dd_invalid_arguments 00:09:35.180 ************************************ 00:09:35.180 11:40:05 spdk_dd.spdk_dd_negative.dd_invalid_arguments -- common/autotest_common.sh@655 -- # es=2 00:09:35.180 11:40:05 spdk_dd.spdk_dd_negative.dd_invalid_arguments -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:09:35.180 11:40:05 spdk_dd.spdk_dd_negative.dd_invalid_arguments -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:09:35.180 11:40:05 spdk_dd.spdk_dd_negative.dd_invalid_arguments -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:09:35.180 00:09:35.180 real 0m0.086s 00:09:35.180 user 0m0.045s 00:09:35.180 sys 0m0.040s 00:09:35.180 11:40:05 spdk_dd.spdk_dd_negative.dd_invalid_arguments -- common/autotest_common.sh@1130 -- # xtrace_disable 00:09:35.180 11:40:05 spdk_dd.spdk_dd_negative.dd_invalid_arguments -- common/autotest_common.sh@10 -- # set +x 00:09:35.180 11:40:05 spdk_dd.spdk_dd_negative -- dd/negative_dd.sh@217 -- # run_test dd_double_input double_input 00:09:35.180 11:40:05 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:09:35.180 11:40:05 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1111 -- # xtrace_disable 00:09:35.180 11:40:05 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@10 -- # set +x 00:09:35.180 ************************************ 00:09:35.180 START TEST dd_double_input 00:09:35.180 ************************************ 00:09:35.180 11:40:05 spdk_dd.spdk_dd_negative.dd_double_input -- common/autotest_common.sh@1129 -- # double_input 00:09:35.180 11:40:05 spdk_dd.spdk_dd_negative.dd_double_input -- dd/negative_dd.sh@19 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 
--if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --ib= --ob= 00:09:35.180 11:40:05 spdk_dd.spdk_dd_negative.dd_double_input -- common/autotest_common.sh@652 -- # local es=0 00:09:35.180 11:40:05 spdk_dd.spdk_dd_negative.dd_double_input -- common/autotest_common.sh@654 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --ib= --ob= 00:09:35.180 11:40:05 spdk_dd.spdk_dd_negative.dd_double_input -- common/autotest_common.sh@640 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:09:35.180 11:40:05 spdk_dd.spdk_dd_negative.dd_double_input -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:09:35.180 11:40:05 spdk_dd.spdk_dd_negative.dd_double_input -- common/autotest_common.sh@644 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:09:35.180 11:40:05 spdk_dd.spdk_dd_negative.dd_double_input -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:09:35.181 11:40:05 spdk_dd.spdk_dd_negative.dd_double_input -- common/autotest_common.sh@646 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:09:35.181 11:40:05 spdk_dd.spdk_dd_negative.dd_double_input -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:09:35.181 11:40:05 spdk_dd.spdk_dd_negative.dd_double_input -- common/autotest_common.sh@646 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:09:35.181 11:40:05 spdk_dd.spdk_dd_negative.dd_double_input -- common/autotest_common.sh@646 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:09:35.181 11:40:05 spdk_dd.spdk_dd_negative.dd_double_input -- common/autotest_common.sh@655 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --ib= --ob= 00:09:35.181 [2024-11-28 11:40:05.138805] spdk_dd.c:1487:main: *ERROR*: You may specify either --if or --ib, but not both. 
00:09:35.181 11:40:05 spdk_dd.spdk_dd_negative.dd_double_input -- common/autotest_common.sh@655 -- # es=22 00:09:35.181 11:40:05 spdk_dd.spdk_dd_negative.dd_double_input -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:09:35.181 11:40:05 spdk_dd.spdk_dd_negative.dd_double_input -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:09:35.181 11:40:05 spdk_dd.spdk_dd_negative.dd_double_input -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:09:35.181 00:09:35.181 real 0m0.078s 00:09:35.181 user 0m0.044s 00:09:35.181 sys 0m0.033s 00:09:35.181 11:40:05 spdk_dd.spdk_dd_negative.dd_double_input -- common/autotest_common.sh@1130 -- # xtrace_disable 00:09:35.181 ************************************ 00:09:35.181 END TEST dd_double_input 00:09:35.181 ************************************ 00:09:35.181 11:40:05 spdk_dd.spdk_dd_negative.dd_double_input -- common/autotest_common.sh@10 -- # set +x 00:09:35.181 11:40:05 spdk_dd.spdk_dd_negative -- dd/negative_dd.sh@218 -- # run_test dd_double_output double_output 00:09:35.181 11:40:05 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:09:35.181 11:40:05 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1111 -- # xtrace_disable 00:09:35.181 11:40:05 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@10 -- # set +x 00:09:35.181 ************************************ 00:09:35.181 START TEST dd_double_output 00:09:35.181 ************************************ 00:09:35.181 11:40:05 spdk_dd.spdk_dd_negative.dd_double_output -- common/autotest_common.sh@1129 -- # double_output 00:09:35.181 11:40:05 spdk_dd.spdk_dd_negative.dd_double_output -- dd/negative_dd.sh@27 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --ob= 00:09:35.181 11:40:05 spdk_dd.spdk_dd_negative.dd_double_output -- common/autotest_common.sh@652 -- # local es=0 00:09:35.181 11:40:05 spdk_dd.spdk_dd_negative.dd_double_output -- common/autotest_common.sh@654 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --ob= 00:09:35.181 11:40:05 spdk_dd.spdk_dd_negative.dd_double_output -- common/autotest_common.sh@640 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:09:35.181 11:40:05 spdk_dd.spdk_dd_negative.dd_double_output -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:09:35.181 11:40:05 spdk_dd.spdk_dd_negative.dd_double_output -- common/autotest_common.sh@644 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:09:35.181 11:40:05 spdk_dd.spdk_dd_negative.dd_double_output -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:09:35.181 11:40:05 spdk_dd.spdk_dd_negative.dd_double_output -- common/autotest_common.sh@646 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:09:35.181 11:40:05 spdk_dd.spdk_dd_negative.dd_double_output -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:09:35.181 11:40:05 spdk_dd.spdk_dd_negative.dd_double_output -- common/autotest_common.sh@646 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:09:35.181 11:40:05 spdk_dd.spdk_dd_negative.dd_double_output -- common/autotest_common.sh@646 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:09:35.181 11:40:05 spdk_dd.spdk_dd_negative.dd_double_output -- common/autotest_common.sh@655 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 
--if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --ob= 00:09:35.181 [2024-11-28 11:40:05.265297] spdk_dd.c:1493:main: *ERROR*: You may specify either --of or --ob, but not both. 00:09:35.455 11:40:05 spdk_dd.spdk_dd_negative.dd_double_output -- common/autotest_common.sh@655 -- # es=22 00:09:35.455 11:40:05 spdk_dd.spdk_dd_negative.dd_double_output -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:09:35.455 11:40:05 spdk_dd.spdk_dd_negative.dd_double_output -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:09:35.455 11:40:05 spdk_dd.spdk_dd_negative.dd_double_output -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:09:35.455 ************************************ 00:09:35.455 END TEST dd_double_output 00:09:35.455 ************************************ 00:09:35.455 00:09:35.455 real 0m0.072s 00:09:35.455 user 0m0.040s 00:09:35.455 sys 0m0.030s 00:09:35.455 11:40:05 spdk_dd.spdk_dd_negative.dd_double_output -- common/autotest_common.sh@1130 -- # xtrace_disable 00:09:35.455 11:40:05 spdk_dd.spdk_dd_negative.dd_double_output -- common/autotest_common.sh@10 -- # set +x 00:09:35.455 11:40:05 spdk_dd.spdk_dd_negative -- dd/negative_dd.sh@219 -- # run_test dd_no_input no_input 00:09:35.455 11:40:05 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:09:35.455 11:40:05 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1111 -- # xtrace_disable 00:09:35.455 11:40:05 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@10 -- # set +x 00:09:35.455 ************************************ 00:09:35.455 START TEST dd_no_input 00:09:35.455 ************************************ 00:09:35.455 11:40:05 spdk_dd.spdk_dd_negative.dd_no_input -- common/autotest_common.sh@1129 -- # no_input 00:09:35.455 11:40:05 spdk_dd.spdk_dd_negative.dd_no_input -- dd/negative_dd.sh@35 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ob= 00:09:35.455 11:40:05 spdk_dd.spdk_dd_negative.dd_no_input -- common/autotest_common.sh@652 -- # local es=0 00:09:35.455 11:40:05 spdk_dd.spdk_dd_negative.dd_no_input -- common/autotest_common.sh@654 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ob= 00:09:35.455 11:40:05 spdk_dd.spdk_dd_negative.dd_no_input -- common/autotest_common.sh@640 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:09:35.455 11:40:05 spdk_dd.spdk_dd_negative.dd_no_input -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:09:35.455 11:40:05 spdk_dd.spdk_dd_negative.dd_no_input -- common/autotest_common.sh@644 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:09:35.455 11:40:05 spdk_dd.spdk_dd_negative.dd_no_input -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:09:35.455 11:40:05 spdk_dd.spdk_dd_negative.dd_no_input -- common/autotest_common.sh@646 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:09:35.455 11:40:05 spdk_dd.spdk_dd_negative.dd_no_input -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:09:35.455 11:40:05 spdk_dd.spdk_dd_negative.dd_no_input -- common/autotest_common.sh@646 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:09:35.455 11:40:05 spdk_dd.spdk_dd_negative.dd_no_input -- common/autotest_common.sh@646 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:09:35.455 11:40:05 spdk_dd.spdk_dd_negative.dd_no_input -- common/autotest_common.sh@655 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ob= 00:09:35.455 [2024-11-28 11:40:05.394618] spdk_dd.c:1499:main: 
*ERROR*: You must specify either --if or --ib 00:09:35.456 11:40:05 spdk_dd.spdk_dd_negative.dd_no_input -- common/autotest_common.sh@655 -- # es=22 00:09:35.456 11:40:05 spdk_dd.spdk_dd_negative.dd_no_input -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:09:35.456 11:40:05 spdk_dd.spdk_dd_negative.dd_no_input -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:09:35.456 11:40:05 spdk_dd.spdk_dd_negative.dd_no_input -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:09:35.456 00:09:35.456 real 0m0.080s 00:09:35.456 user 0m0.054s 00:09:35.456 sys 0m0.025s 00:09:35.456 11:40:05 spdk_dd.spdk_dd_negative.dd_no_input -- common/autotest_common.sh@1130 -- # xtrace_disable 00:09:35.456 11:40:05 spdk_dd.spdk_dd_negative.dd_no_input -- common/autotest_common.sh@10 -- # set +x 00:09:35.456 ************************************ 00:09:35.456 END TEST dd_no_input 00:09:35.456 ************************************ 00:09:35.456 11:40:05 spdk_dd.spdk_dd_negative -- dd/negative_dd.sh@220 -- # run_test dd_no_output no_output 00:09:35.456 11:40:05 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:09:35.456 11:40:05 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1111 -- # xtrace_disable 00:09:35.456 11:40:05 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@10 -- # set +x 00:09:35.456 ************************************ 00:09:35.456 START TEST dd_no_output 00:09:35.456 ************************************ 00:09:35.456 11:40:05 spdk_dd.spdk_dd_negative.dd_no_output -- common/autotest_common.sh@1129 -- # no_output 00:09:35.456 11:40:05 spdk_dd.spdk_dd_negative.dd_no_output -- dd/negative_dd.sh@41 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 00:09:35.456 11:40:05 spdk_dd.spdk_dd_negative.dd_no_output -- common/autotest_common.sh@652 -- # local es=0 00:09:35.456 11:40:05 spdk_dd.spdk_dd_negative.dd_no_output -- common/autotest_common.sh@654 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 00:09:35.456 11:40:05 spdk_dd.spdk_dd_negative.dd_no_output -- common/autotest_common.sh@640 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:09:35.456 11:40:05 spdk_dd.spdk_dd_negative.dd_no_output -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:09:35.456 11:40:05 spdk_dd.spdk_dd_negative.dd_no_output -- common/autotest_common.sh@644 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:09:35.456 11:40:05 spdk_dd.spdk_dd_negative.dd_no_output -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:09:35.456 11:40:05 spdk_dd.spdk_dd_negative.dd_no_output -- common/autotest_common.sh@646 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:09:35.456 11:40:05 spdk_dd.spdk_dd_negative.dd_no_output -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:09:35.456 11:40:05 spdk_dd.spdk_dd_negative.dd_no_output -- common/autotest_common.sh@646 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:09:35.456 11:40:05 spdk_dd.spdk_dd_negative.dd_no_output -- common/autotest_common.sh@646 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:09:35.456 11:40:05 spdk_dd.spdk_dd_negative.dd_no_output -- common/autotest_common.sh@655 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 00:09:35.456 [2024-11-28 11:40:05.521483] spdk_dd.c:1505:main: *ERROR*: You must specify either --of or --ob 00:09:35.456 11:40:05 
spdk_dd.spdk_dd_negative.dd_no_output -- common/autotest_common.sh@655 -- # es=22 00:09:35.456 11:40:05 spdk_dd.spdk_dd_negative.dd_no_output -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:09:35.456 11:40:05 spdk_dd.spdk_dd_negative.dd_no_output -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:09:35.456 11:40:05 spdk_dd.spdk_dd_negative.dd_no_output -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:09:35.456 00:09:35.456 real 0m0.076s 00:09:35.456 user 0m0.055s 00:09:35.456 sys 0m0.019s 00:09:35.456 11:40:05 spdk_dd.spdk_dd_negative.dd_no_output -- common/autotest_common.sh@1130 -- # xtrace_disable 00:09:35.456 ************************************ 00:09:35.456 END TEST dd_no_output 00:09:35.456 ************************************ 00:09:35.456 11:40:05 spdk_dd.spdk_dd_negative.dd_no_output -- common/autotest_common.sh@10 -- # set +x 00:09:35.715 11:40:05 spdk_dd.spdk_dd_negative -- dd/negative_dd.sh@221 -- # run_test dd_wrong_blocksize wrong_blocksize 00:09:35.715 11:40:05 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:09:35.715 11:40:05 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1111 -- # xtrace_disable 00:09:35.716 11:40:05 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@10 -- # set +x 00:09:35.716 ************************************ 00:09:35.716 START TEST dd_wrong_blocksize 00:09:35.716 ************************************ 00:09:35.716 11:40:05 spdk_dd.spdk_dd_negative.dd_wrong_blocksize -- common/autotest_common.sh@1129 -- # wrong_blocksize 00:09:35.716 11:40:05 spdk_dd.spdk_dd_negative.dd_wrong_blocksize -- dd/negative_dd.sh@47 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --bs=0 00:09:35.716 11:40:05 spdk_dd.spdk_dd_negative.dd_wrong_blocksize -- common/autotest_common.sh@652 -- # local es=0 00:09:35.716 11:40:05 spdk_dd.spdk_dd_negative.dd_wrong_blocksize -- common/autotest_common.sh@654 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --bs=0 00:09:35.716 11:40:05 spdk_dd.spdk_dd_negative.dd_wrong_blocksize -- common/autotest_common.sh@640 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:09:35.716 11:40:05 spdk_dd.spdk_dd_negative.dd_wrong_blocksize -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:09:35.716 11:40:05 spdk_dd.spdk_dd_negative.dd_wrong_blocksize -- common/autotest_common.sh@644 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:09:35.716 11:40:05 spdk_dd.spdk_dd_negative.dd_wrong_blocksize -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:09:35.716 11:40:05 spdk_dd.spdk_dd_negative.dd_wrong_blocksize -- common/autotest_common.sh@646 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:09:35.716 11:40:05 spdk_dd.spdk_dd_negative.dd_wrong_blocksize -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:09:35.716 11:40:05 spdk_dd.spdk_dd_negative.dd_wrong_blocksize -- common/autotest_common.sh@646 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:09:35.716 11:40:05 spdk_dd.spdk_dd_negative.dd_wrong_blocksize -- common/autotest_common.sh@646 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:09:35.716 11:40:05 spdk_dd.spdk_dd_negative.dd_wrong_blocksize -- common/autotest_common.sh@655 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 
--if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --bs=0 00:09:35.716 [2024-11-28 11:40:05.652471] spdk_dd.c:1511:main: *ERROR*: Invalid --bs value 00:09:35.716 11:40:05 spdk_dd.spdk_dd_negative.dd_wrong_blocksize -- common/autotest_common.sh@655 -- # es=22 00:09:35.716 11:40:05 spdk_dd.spdk_dd_negative.dd_wrong_blocksize -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:09:35.716 11:40:05 spdk_dd.spdk_dd_negative.dd_wrong_blocksize -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:09:35.716 11:40:05 spdk_dd.spdk_dd_negative.dd_wrong_blocksize -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:09:35.716 00:09:35.716 real 0m0.081s 00:09:35.716 user 0m0.055s 00:09:35.716 sys 0m0.023s 00:09:35.716 11:40:05 spdk_dd.spdk_dd_negative.dd_wrong_blocksize -- common/autotest_common.sh@1130 -- # xtrace_disable 00:09:35.716 ************************************ 00:09:35.716 END TEST dd_wrong_blocksize 00:09:35.716 ************************************ 00:09:35.716 11:40:05 spdk_dd.spdk_dd_negative.dd_wrong_blocksize -- common/autotest_common.sh@10 -- # set +x 00:09:35.716 11:40:05 spdk_dd.spdk_dd_negative -- dd/negative_dd.sh@222 -- # run_test dd_smaller_blocksize smaller_blocksize 00:09:35.716 11:40:05 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:09:35.716 11:40:05 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1111 -- # xtrace_disable 00:09:35.716 11:40:05 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@10 -- # set +x 00:09:35.716 ************************************ 00:09:35.716 START TEST dd_smaller_blocksize 00:09:35.716 ************************************ 00:09:35.716 11:40:05 spdk_dd.spdk_dd_negative.dd_smaller_blocksize -- common/autotest_common.sh@1129 -- # smaller_blocksize 00:09:35.716 11:40:05 spdk_dd.spdk_dd_negative.dd_smaller_blocksize -- dd/negative_dd.sh@55 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --bs=99999999999999 00:09:35.716 11:40:05 spdk_dd.spdk_dd_negative.dd_smaller_blocksize -- common/autotest_common.sh@652 -- # local es=0 00:09:35.716 11:40:05 spdk_dd.spdk_dd_negative.dd_smaller_blocksize -- common/autotest_common.sh@654 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --bs=99999999999999 00:09:35.716 11:40:05 spdk_dd.spdk_dd_negative.dd_smaller_blocksize -- common/autotest_common.sh@640 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:09:35.716 11:40:05 spdk_dd.spdk_dd_negative.dd_smaller_blocksize -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:09:35.716 11:40:05 spdk_dd.spdk_dd_negative.dd_smaller_blocksize -- common/autotest_common.sh@644 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:09:35.716 11:40:05 spdk_dd.spdk_dd_negative.dd_smaller_blocksize -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:09:35.716 11:40:05 spdk_dd.spdk_dd_negative.dd_smaller_blocksize -- common/autotest_common.sh@646 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:09:35.716 11:40:05 spdk_dd.spdk_dd_negative.dd_smaller_blocksize -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:09:35.716 11:40:05 spdk_dd.spdk_dd_negative.dd_smaller_blocksize -- common/autotest_common.sh@646 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:09:35.716 
11:40:05 spdk_dd.spdk_dd_negative.dd_smaller_blocksize -- common/autotest_common.sh@646 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:09:35.716 11:40:05 spdk_dd.spdk_dd_negative.dd_smaller_blocksize -- common/autotest_common.sh@655 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --bs=99999999999999 00:09:35.716 [2024-11-28 11:40:05.784944] Starting SPDK v25.01-pre git sha1 35cd3e84d / DPDK 24.11.0-rc4 initialization... 00:09:35.716 [2024-11-28 11:40:05.785245] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid75857 ] 00:09:35.976 [2024-11-28 11:40:05.910253] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.11.0-rc4 is used. There is no support for it in SPDK. Enabled only for validation. 00:09:35.976 [2024-11-28 11:40:05.939603] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:35.976 [2024-11-28 11:40:05.986183] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:09:35.976 [2024-11-28 11:40:06.047309] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:09:35.976 [2024-11-28 11:40:06.086603] spdk_dd.c:1184:dd_run: *ERROR*: Cannot allocate memory - try smaller block size value 00:09:35.976 [2024-11-28 11:40:06.086668] app.c:1064:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:09:36.236 [2024-11-28 11:40:06.212278] spdk_dd.c:1536:main: *ERROR*: Error occurred while performing copy 00:09:36.236 11:40:06 spdk_dd.spdk_dd_negative.dd_smaller_blocksize -- common/autotest_common.sh@655 -- # es=244 00:09:36.236 11:40:06 spdk_dd.spdk_dd_negative.dd_smaller_blocksize -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:09:36.236 11:40:06 spdk_dd.spdk_dd_negative.dd_smaller_blocksize -- common/autotest_common.sh@664 -- # es=116 00:09:36.236 11:40:06 spdk_dd.spdk_dd_negative.dd_smaller_blocksize -- common/autotest_common.sh@665 -- # case "$es" in 00:09:36.236 11:40:06 spdk_dd.spdk_dd_negative.dd_smaller_blocksize -- common/autotest_common.sh@672 -- # es=1 00:09:36.236 11:40:06 spdk_dd.spdk_dd_negative.dd_smaller_blocksize -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:09:36.236 00:09:36.236 real 0m0.552s 00:09:36.236 user 0m0.297s 00:09:36.236 sys 0m0.148s 00:09:36.236 11:40:06 spdk_dd.spdk_dd_negative.dd_smaller_blocksize -- common/autotest_common.sh@1130 -- # xtrace_disable 00:09:36.236 11:40:06 spdk_dd.spdk_dd_negative.dd_smaller_blocksize -- common/autotest_common.sh@10 -- # set +x 00:09:36.236 ************************************ 00:09:36.236 END TEST dd_smaller_blocksize 00:09:36.236 ************************************ 00:09:36.236 11:40:06 spdk_dd.spdk_dd_negative -- dd/negative_dd.sh@223 -- # run_test dd_invalid_count invalid_count 00:09:36.236 11:40:06 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:09:36.236 11:40:06 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1111 -- # xtrace_disable 00:09:36.236 11:40:06 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@10 -- # set +x 00:09:36.236 ************************************ 00:09:36.236 START TEST dd_invalid_count 00:09:36.236 ************************************ 00:09:36.236 11:40:06 spdk_dd.spdk_dd_negative.dd_invalid_count -- common/autotest_common.sh@1129 -- # 
invalid_count 00:09:36.236 11:40:06 spdk_dd.spdk_dd_negative.dd_invalid_count -- dd/negative_dd.sh@63 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --count=-9 00:09:36.236 11:40:06 spdk_dd.spdk_dd_negative.dd_invalid_count -- common/autotest_common.sh@652 -- # local es=0 00:09:36.236 11:40:06 spdk_dd.spdk_dd_negative.dd_invalid_count -- common/autotest_common.sh@654 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --count=-9 00:09:36.236 11:40:06 spdk_dd.spdk_dd_negative.dd_invalid_count -- common/autotest_common.sh@640 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:09:36.236 11:40:06 spdk_dd.spdk_dd_negative.dd_invalid_count -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:09:36.236 11:40:06 spdk_dd.spdk_dd_negative.dd_invalid_count -- common/autotest_common.sh@644 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:09:36.236 11:40:06 spdk_dd.spdk_dd_negative.dd_invalid_count -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:09:36.236 11:40:06 spdk_dd.spdk_dd_negative.dd_invalid_count -- common/autotest_common.sh@646 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:09:36.236 11:40:06 spdk_dd.spdk_dd_negative.dd_invalid_count -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:09:36.236 11:40:06 spdk_dd.spdk_dd_negative.dd_invalid_count -- common/autotest_common.sh@646 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:09:36.236 11:40:06 spdk_dd.spdk_dd_negative.dd_invalid_count -- common/autotest_common.sh@646 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:09:36.236 11:40:06 spdk_dd.spdk_dd_negative.dd_invalid_count -- common/autotest_common.sh@655 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --count=-9 00:09:36.496 [2024-11-28 11:40:06.384713] spdk_dd.c:1517:main: *ERROR*: Invalid --count value 00:09:36.496 ************************************ 00:09:36.496 END TEST dd_invalid_count 00:09:36.496 ************************************ 00:09:36.496 11:40:06 spdk_dd.spdk_dd_negative.dd_invalid_count -- common/autotest_common.sh@655 -- # es=22 00:09:36.496 11:40:06 spdk_dd.spdk_dd_negative.dd_invalid_count -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:09:36.496 11:40:06 spdk_dd.spdk_dd_negative.dd_invalid_count -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:09:36.496 11:40:06 spdk_dd.spdk_dd_negative.dd_invalid_count -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:09:36.496 00:09:36.496 real 0m0.071s 00:09:36.496 user 0m0.036s 00:09:36.496 sys 0m0.034s 00:09:36.496 11:40:06 spdk_dd.spdk_dd_negative.dd_invalid_count -- common/autotest_common.sh@1130 -- # xtrace_disable 00:09:36.496 11:40:06 spdk_dd.spdk_dd_negative.dd_invalid_count -- common/autotest_common.sh@10 -- # set +x 00:09:36.496 11:40:06 spdk_dd.spdk_dd_negative -- dd/negative_dd.sh@224 -- # run_test dd_invalid_oflag invalid_oflag 00:09:36.496 11:40:06 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:09:36.496 11:40:06 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1111 -- # xtrace_disable 00:09:36.496 11:40:06 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@10 -- # set +x 00:09:36.496 
************************************ 00:09:36.496 START TEST dd_invalid_oflag 00:09:36.496 ************************************ 00:09:36.496 11:40:06 spdk_dd.spdk_dd_negative.dd_invalid_oflag -- common/autotest_common.sh@1129 -- # invalid_oflag 00:09:36.496 11:40:06 spdk_dd.spdk_dd_negative.dd_invalid_oflag -- dd/negative_dd.sh@71 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib= --ob= --oflag=0 00:09:36.496 11:40:06 spdk_dd.spdk_dd_negative.dd_invalid_oflag -- common/autotest_common.sh@652 -- # local es=0 00:09:36.496 11:40:06 spdk_dd.spdk_dd_negative.dd_invalid_oflag -- common/autotest_common.sh@654 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib= --ob= --oflag=0 00:09:36.496 11:40:06 spdk_dd.spdk_dd_negative.dd_invalid_oflag -- common/autotest_common.sh@640 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:09:36.496 11:40:06 spdk_dd.spdk_dd_negative.dd_invalid_oflag -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:09:36.496 11:40:06 spdk_dd.spdk_dd_negative.dd_invalid_oflag -- common/autotest_common.sh@644 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:09:36.496 11:40:06 spdk_dd.spdk_dd_negative.dd_invalid_oflag -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:09:36.496 11:40:06 spdk_dd.spdk_dd_negative.dd_invalid_oflag -- common/autotest_common.sh@646 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:09:36.496 11:40:06 spdk_dd.spdk_dd_negative.dd_invalid_oflag -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:09:36.496 11:40:06 spdk_dd.spdk_dd_negative.dd_invalid_oflag -- common/autotest_common.sh@646 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:09:36.496 11:40:06 spdk_dd.spdk_dd_negative.dd_invalid_oflag -- common/autotest_common.sh@646 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:09:36.496 11:40:06 spdk_dd.spdk_dd_negative.dd_invalid_oflag -- common/autotest_common.sh@655 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib= --ob= --oflag=0 00:09:36.496 [2024-11-28 11:40:06.509732] spdk_dd.c:1523:main: *ERROR*: --oflags may be used only with --of 00:09:36.496 11:40:06 spdk_dd.spdk_dd_negative.dd_invalid_oflag -- common/autotest_common.sh@655 -- # es=22 00:09:36.496 11:40:06 spdk_dd.spdk_dd_negative.dd_invalid_oflag -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:09:36.496 11:40:06 spdk_dd.spdk_dd_negative.dd_invalid_oflag -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:09:36.496 11:40:06 spdk_dd.spdk_dd_negative.dd_invalid_oflag -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:09:36.496 ************************************ 00:09:36.496 END TEST dd_invalid_oflag 00:09:36.496 ************************************ 00:09:36.496 00:09:36.496 real 0m0.078s 00:09:36.496 user 0m0.051s 00:09:36.496 sys 0m0.026s 00:09:36.496 11:40:06 spdk_dd.spdk_dd_negative.dd_invalid_oflag -- common/autotest_common.sh@1130 -- # xtrace_disable 00:09:36.496 11:40:06 spdk_dd.spdk_dd_negative.dd_invalid_oflag -- common/autotest_common.sh@10 -- # set +x 00:09:36.496 11:40:06 spdk_dd.spdk_dd_negative -- dd/negative_dd.sh@225 -- # run_test dd_invalid_iflag invalid_iflag 00:09:36.496 11:40:06 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:09:36.496 11:40:06 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1111 -- # xtrace_disable 00:09:36.496 11:40:06 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@10 -- # set +x 00:09:36.496 ************************************ 00:09:36.496 
START TEST dd_invalid_iflag 00:09:36.496 ************************************ 00:09:36.496 11:40:06 spdk_dd.spdk_dd_negative.dd_invalid_iflag -- common/autotest_common.sh@1129 -- # invalid_iflag 00:09:36.496 11:40:06 spdk_dd.spdk_dd_negative.dd_invalid_iflag -- dd/negative_dd.sh@79 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib= --ob= --iflag=0 00:09:36.496 11:40:06 spdk_dd.spdk_dd_negative.dd_invalid_iflag -- common/autotest_common.sh@652 -- # local es=0 00:09:36.496 11:40:06 spdk_dd.spdk_dd_negative.dd_invalid_iflag -- common/autotest_common.sh@654 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib= --ob= --iflag=0 00:09:36.496 11:40:06 spdk_dd.spdk_dd_negative.dd_invalid_iflag -- common/autotest_common.sh@640 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:09:36.496 11:40:06 spdk_dd.spdk_dd_negative.dd_invalid_iflag -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:09:36.496 11:40:06 spdk_dd.spdk_dd_negative.dd_invalid_iflag -- common/autotest_common.sh@644 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:09:36.496 11:40:06 spdk_dd.spdk_dd_negative.dd_invalid_iflag -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:09:36.496 11:40:06 spdk_dd.spdk_dd_negative.dd_invalid_iflag -- common/autotest_common.sh@646 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:09:36.496 11:40:06 spdk_dd.spdk_dd_negative.dd_invalid_iflag -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:09:36.496 11:40:06 spdk_dd.spdk_dd_negative.dd_invalid_iflag -- common/autotest_common.sh@646 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:09:36.496 11:40:06 spdk_dd.spdk_dd_negative.dd_invalid_iflag -- common/autotest_common.sh@646 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:09:36.496 11:40:06 spdk_dd.spdk_dd_negative.dd_invalid_iflag -- common/autotest_common.sh@655 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib= --ob= --iflag=0 00:09:36.756 [2024-11-28 11:40:06.645328] spdk_dd.c:1529:main: *ERROR*: --iflags may be used only with --if 00:09:36.756 11:40:06 spdk_dd.spdk_dd_negative.dd_invalid_iflag -- common/autotest_common.sh@655 -- # es=22 00:09:36.756 11:40:06 spdk_dd.spdk_dd_negative.dd_invalid_iflag -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:09:36.756 11:40:06 spdk_dd.spdk_dd_negative.dd_invalid_iflag -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:09:36.756 11:40:06 spdk_dd.spdk_dd_negative.dd_invalid_iflag -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:09:36.756 00:09:36.756 real 0m0.079s 00:09:36.756 user 0m0.055s 00:09:36.756 sys 0m0.022s 00:09:36.756 ************************************ 00:09:36.756 END TEST dd_invalid_iflag 00:09:36.756 ************************************ 00:09:36.756 11:40:06 spdk_dd.spdk_dd_negative.dd_invalid_iflag -- common/autotest_common.sh@1130 -- # xtrace_disable 00:09:36.756 11:40:06 spdk_dd.spdk_dd_negative.dd_invalid_iflag -- common/autotest_common.sh@10 -- # set +x 00:09:36.756 11:40:06 spdk_dd.spdk_dd_negative -- dd/negative_dd.sh@226 -- # run_test dd_unknown_flag unknown_flag 00:09:36.756 11:40:06 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:09:36.756 11:40:06 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1111 -- # xtrace_disable 00:09:36.756 11:40:06 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@10 -- # set +x 00:09:36.756 ************************************ 00:09:36.756 START TEST dd_unknown_flag 00:09:36.756 
************************************ 00:09:36.756 11:40:06 spdk_dd.spdk_dd_negative.dd_unknown_flag -- common/autotest_common.sh@1129 -- # unknown_flag 00:09:36.756 11:40:06 spdk_dd.spdk_dd_negative.dd_unknown_flag -- dd/negative_dd.sh@87 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=-1 00:09:36.756 11:40:06 spdk_dd.spdk_dd_negative.dd_unknown_flag -- common/autotest_common.sh@652 -- # local es=0 00:09:36.756 11:40:06 spdk_dd.spdk_dd_negative.dd_unknown_flag -- common/autotest_common.sh@654 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=-1 00:09:36.756 11:40:06 spdk_dd.spdk_dd_negative.dd_unknown_flag -- common/autotest_common.sh@640 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:09:36.756 11:40:06 spdk_dd.spdk_dd_negative.dd_unknown_flag -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:09:36.756 11:40:06 spdk_dd.spdk_dd_negative.dd_unknown_flag -- common/autotest_common.sh@644 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:09:36.756 11:40:06 spdk_dd.spdk_dd_negative.dd_unknown_flag -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:09:36.756 11:40:06 spdk_dd.spdk_dd_negative.dd_unknown_flag -- common/autotest_common.sh@646 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:09:36.756 11:40:06 spdk_dd.spdk_dd_negative.dd_unknown_flag -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:09:36.756 11:40:06 spdk_dd.spdk_dd_negative.dd_unknown_flag -- common/autotest_common.sh@646 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:09:36.756 11:40:06 spdk_dd.spdk_dd_negative.dd_unknown_flag -- common/autotest_common.sh@646 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:09:36.756 11:40:06 spdk_dd.spdk_dd_negative.dd_unknown_flag -- common/autotest_common.sh@655 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=-1 00:09:36.756 [2024-11-28 11:40:06.776356] Starting SPDK v25.01-pre git sha1 35cd3e84d / DPDK 24.11.0-rc4 initialization... 00:09:36.756 [2024-11-28 11:40:06.776444] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid75949 ] 00:09:37.016 [2024-11-28 11:40:06.901418] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.11.0-rc4 is used. There is no support for it in SPDK. Enabled only for validation. 
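The run above passes --oflag=-1, a flag spdk_dd does not recognize, and the errors that follow show parse_flags rejecting it and the app stopping with a non-zero code. A minimal by-hand reproduction, reusing the exact paths from the invocation above (a sketch only; it assumes the repo has been built so that build/bin/spdk_dd and the dd.dump0 test file exist):

/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd \
    --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 \
    --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 \
    --oflag=-1
# expected: "Unknown file flag: -1" and a non-zero exit status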
00:09:37.016 [2024-11-28 11:40:06.930370] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:37.016 [2024-11-28 11:40:06.982831] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:09:37.016 [2024-11-28 11:40:07.044554] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:09:37.016 [2024-11-28 11:40:07.080595] spdk_dd.c: 986:parse_flags: *ERROR*: Unknown file flag: -1 00:09:37.016 [2024-11-28 11:40:07.080674] app.c:1064:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:09:37.016 [2024-11-28 11:40:07.080744] spdk_dd.c: 986:parse_flags: *ERROR*: Unknown file flag: -1 00:09:37.016 [2024-11-28 11:40:07.080757] app.c:1064:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:09:37.016 [2024-11-28 11:40:07.081005] spdk_dd.c:1218:dd_run: *ERROR*: Failed to register files with io_uring: -9 (Bad file descriptor) 00:09:37.016 [2024-11-28 11:40:07.081020] app.c:1064:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:09:37.016 [2024-11-28 11:40:07.081106] app.c:1049:app_stop: *NOTICE*: spdk_app_stop called twice 00:09:37.016 [2024-11-28 11:40:07.081116] app.c:1049:app_stop: *NOTICE*: spdk_app_stop called twice 00:09:37.275 [2024-11-28 11:40:07.206455] spdk_dd.c:1536:main: *ERROR*: Error occurred while performing copy 00:09:37.275 ************************************ 00:09:37.275 END TEST dd_unknown_flag 00:09:37.275 ************************************ 00:09:37.275 11:40:07 spdk_dd.spdk_dd_negative.dd_unknown_flag -- common/autotest_common.sh@655 -- # es=234 00:09:37.275 11:40:07 spdk_dd.spdk_dd_negative.dd_unknown_flag -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:09:37.275 11:40:07 spdk_dd.spdk_dd_negative.dd_unknown_flag -- common/autotest_common.sh@664 -- # es=106 00:09:37.275 11:40:07 spdk_dd.spdk_dd_negative.dd_unknown_flag -- common/autotest_common.sh@665 -- # case "$es" in 00:09:37.275 11:40:07 spdk_dd.spdk_dd_negative.dd_unknown_flag -- common/autotest_common.sh@672 -- # es=1 00:09:37.275 11:40:07 spdk_dd.spdk_dd_negative.dd_unknown_flag -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:09:37.275 00:09:37.275 real 0m0.549s 00:09:37.275 user 0m0.295s 00:09:37.275 sys 0m0.157s 00:09:37.275 11:40:07 spdk_dd.spdk_dd_negative.dd_unknown_flag -- common/autotest_common.sh@1130 -- # xtrace_disable 00:09:37.275 11:40:07 spdk_dd.spdk_dd_negative.dd_unknown_flag -- common/autotest_common.sh@10 -- # set +x 00:09:37.275 11:40:07 spdk_dd.spdk_dd_negative -- dd/negative_dd.sh@227 -- # run_test dd_invalid_json invalid_json 00:09:37.275 11:40:07 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:09:37.275 11:40:07 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1111 -- # xtrace_disable 00:09:37.275 11:40:07 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@10 -- # set +x 00:09:37.275 ************************************ 00:09:37.275 START TEST dd_invalid_json 00:09:37.275 ************************************ 00:09:37.275 11:40:07 spdk_dd.spdk_dd_negative.dd_invalid_json -- common/autotest_common.sh@1129 -- # invalid_json 00:09:37.275 11:40:07 spdk_dd.spdk_dd_negative.dd_invalid_json -- dd/negative_dd.sh@94 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --json /dev/fd/62 00:09:37.275 11:40:07 spdk_dd.spdk_dd_negative.dd_invalid_json -- common/autotest_common.sh@652 -- # local es=0 00:09:37.275 11:40:07 spdk_dd.spdk_dd_negative.dd_invalid_json -- 
dd/negative_dd.sh@94 -- # : 00:09:37.275 11:40:07 spdk_dd.spdk_dd_negative.dd_invalid_json -- common/autotest_common.sh@654 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --json /dev/fd/62 00:09:37.275 11:40:07 spdk_dd.spdk_dd_negative.dd_invalid_json -- common/autotest_common.sh@640 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:09:37.275 11:40:07 spdk_dd.spdk_dd_negative.dd_invalid_json -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:09:37.275 11:40:07 spdk_dd.spdk_dd_negative.dd_invalid_json -- common/autotest_common.sh@644 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:09:37.275 11:40:07 spdk_dd.spdk_dd_negative.dd_invalid_json -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:09:37.275 11:40:07 spdk_dd.spdk_dd_negative.dd_invalid_json -- common/autotest_common.sh@646 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:09:37.275 11:40:07 spdk_dd.spdk_dd_negative.dd_invalid_json -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:09:37.275 11:40:07 spdk_dd.spdk_dd_negative.dd_invalid_json -- common/autotest_common.sh@646 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:09:37.275 11:40:07 spdk_dd.spdk_dd_negative.dd_invalid_json -- common/autotest_common.sh@646 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:09:37.275 11:40:07 spdk_dd.spdk_dd_negative.dd_invalid_json -- common/autotest_common.sh@655 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --json /dev/fd/62 00:09:37.275 [2024-11-28 11:40:07.381842] Starting SPDK v25.01-pre git sha1 35cd3e84d / DPDK 24.11.0-rc4 initialization... 00:09:37.275 [2024-11-28 11:40:07.381954] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid75978 ] 00:09:37.554 [2024-11-28 11:40:07.508198] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.11.0-rc4 is used. There is no support for it in SPDK. Enabled only for validation. 
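The dd_invalid_json case hands spdk_dd a JSON config over --json /dev/fd/62; the bare ":" in the trace above suggests the harness supplies an empty stream, which spdk_dd rejects below with "JSON data cannot be empty". A rough stand-alone equivalent (a sketch, not the harness's exact wiring; it assumes bash process substitution to provide the empty config):

/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd \
    --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 \
    --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 \
    --json <(:)
# ":" writes nothing, so the config read from the /dev/fd path is empty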
00:09:37.555 [2024-11-28 11:40:07.537251] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:37.555 [2024-11-28 11:40:07.575097] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:09:37.555 [2024-11-28 11:40:07.575201] json_config.c: 535:parse_json: *ERROR*: JSON data cannot be empty 00:09:37.555 [2024-11-28 11:40:07.575218] rpc.c: 190:spdk_rpc_server_finish: *ERROR*: No server listening on provided address: 00:09:37.555 [2024-11-28 11:40:07.575237] app.c:1064:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:09:37.555 [2024-11-28 11:40:07.575273] spdk_dd.c:1536:main: *ERROR*: Error occurred while performing copy 00:09:37.555 11:40:07 spdk_dd.spdk_dd_negative.dd_invalid_json -- common/autotest_common.sh@655 -- # es=234 00:09:37.555 11:40:07 spdk_dd.spdk_dd_negative.dd_invalid_json -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:09:37.555 11:40:07 spdk_dd.spdk_dd_negative.dd_invalid_json -- common/autotest_common.sh@664 -- # es=106 00:09:37.555 ************************************ 00:09:37.555 END TEST dd_invalid_json 00:09:37.555 ************************************ 00:09:37.555 11:40:07 spdk_dd.spdk_dd_negative.dd_invalid_json -- common/autotest_common.sh@665 -- # case "$es" in 00:09:37.555 11:40:07 spdk_dd.spdk_dd_negative.dd_invalid_json -- common/autotest_common.sh@672 -- # es=1 00:09:37.555 11:40:07 spdk_dd.spdk_dd_negative.dd_invalid_json -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:09:37.555 00:09:37.555 real 0m0.309s 00:09:37.555 user 0m0.145s 00:09:37.555 sys 0m0.062s 00:09:37.555 11:40:07 spdk_dd.spdk_dd_negative.dd_invalid_json -- common/autotest_common.sh@1130 -- # xtrace_disable 00:09:37.555 11:40:07 spdk_dd.spdk_dd_negative.dd_invalid_json -- common/autotest_common.sh@10 -- # set +x 00:09:37.555 11:40:07 spdk_dd.spdk_dd_negative -- dd/negative_dd.sh@228 -- # run_test dd_invalid_seek invalid_seek 00:09:37.555 11:40:07 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:09:37.555 11:40:07 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1111 -- # xtrace_disable 00:09:37.555 11:40:07 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@10 -- # set +x 00:09:37.816 ************************************ 00:09:37.816 START TEST dd_invalid_seek 00:09:37.816 ************************************ 00:09:37.816 11:40:07 spdk_dd.spdk_dd_negative.dd_invalid_seek -- common/autotest_common.sh@1129 -- # invalid_seek 00:09:37.816 11:40:07 spdk_dd.spdk_dd_negative.dd_invalid_seek -- dd/negative_dd.sh@102 -- # local mbdev0=malloc0 mbdev0_b=512 mbdev0_bs=512 00:09:37.816 11:40:07 spdk_dd.spdk_dd_negative.dd_invalid_seek -- dd/negative_dd.sh@103 -- # method_bdev_malloc_create_0=(['name']='malloc0' ['num_blocks']='512' ['block_size']='512') 00:09:37.816 11:40:07 spdk_dd.spdk_dd_negative.dd_invalid_seek -- dd/negative_dd.sh@103 -- # local -A method_bdev_malloc_create_0 00:09:37.816 11:40:07 spdk_dd.spdk_dd_negative.dd_invalid_seek -- dd/negative_dd.sh@108 -- # local mbdev1=malloc1 mbdev1_b=512 mbdev1_bs=512 00:09:37.816 11:40:07 spdk_dd.spdk_dd_negative.dd_invalid_seek -- dd/negative_dd.sh@109 -- # method_bdev_malloc_create_1=(['name']='malloc1' ['num_blocks']='512' ['block_size']='512') 00:09:37.816 11:40:07 spdk_dd.spdk_dd_negative.dd_invalid_seek -- dd/negative_dd.sh@109 -- # local -A method_bdev_malloc_create_1 00:09:37.816 11:40:07 spdk_dd.spdk_dd_negative.dd_invalid_seek -- dd/negative_dd.sh@115 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=malloc0 --ob=malloc1 --seek=513 --json 
/dev/fd/62 --bs=512 00:09:37.816 11:40:07 spdk_dd.spdk_dd_negative.dd_invalid_seek -- dd/negative_dd.sh@115 -- # gen_conf 00:09:37.816 11:40:07 spdk_dd.spdk_dd_negative.dd_invalid_seek -- dd/common.sh@31 -- # xtrace_disable 00:09:37.816 11:40:07 spdk_dd.spdk_dd_negative.dd_invalid_seek -- common/autotest_common.sh@652 -- # local es=0 00:09:37.816 11:40:07 spdk_dd.spdk_dd_negative.dd_invalid_seek -- common/autotest_common.sh@654 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=malloc0 --ob=malloc1 --seek=513 --json /dev/fd/62 --bs=512 00:09:37.816 11:40:07 spdk_dd.spdk_dd_negative.dd_invalid_seek -- common/autotest_common.sh@640 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:09:37.817 11:40:07 spdk_dd.spdk_dd_negative.dd_invalid_seek -- common/autotest_common.sh@10 -- # set +x 00:09:37.817 11:40:07 spdk_dd.spdk_dd_negative.dd_invalid_seek -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:09:37.817 11:40:07 spdk_dd.spdk_dd_negative.dd_invalid_seek -- common/autotest_common.sh@644 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:09:37.817 11:40:07 spdk_dd.spdk_dd_negative.dd_invalid_seek -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:09:37.817 11:40:07 spdk_dd.spdk_dd_negative.dd_invalid_seek -- common/autotest_common.sh@646 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:09:37.817 11:40:07 spdk_dd.spdk_dd_negative.dd_invalid_seek -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:09:37.817 11:40:07 spdk_dd.spdk_dd_negative.dd_invalid_seek -- common/autotest_common.sh@646 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:09:37.817 11:40:07 spdk_dd.spdk_dd_negative.dd_invalid_seek -- common/autotest_common.sh@646 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:09:37.817 11:40:07 spdk_dd.spdk_dd_negative.dd_invalid_seek -- common/autotest_common.sh@655 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=malloc0 --ob=malloc1 --seek=513 --json /dev/fd/62 --bs=512 00:09:37.817 { 00:09:37.817 "subsystems": [ 00:09:37.817 { 00:09:37.817 "subsystem": "bdev", 00:09:37.817 "config": [ 00:09:37.817 { 00:09:37.817 "params": { 00:09:37.817 "block_size": 512, 00:09:37.817 "num_blocks": 512, 00:09:37.817 "name": "malloc0" 00:09:37.817 }, 00:09:37.817 "method": "bdev_malloc_create" 00:09:37.817 }, 00:09:37.817 { 00:09:37.817 "params": { 00:09:37.817 "block_size": 512, 00:09:37.817 "num_blocks": 512, 00:09:37.817 "name": "malloc1" 00:09:37.817 }, 00:09:37.817 "method": "bdev_malloc_create" 00:09:37.817 }, 00:09:37.817 { 00:09:37.817 "method": "bdev_wait_for_examine" 00:09:37.817 } 00:09:37.817 ] 00:09:37.817 } 00:09:37.817 ] 00:09:37.817 } 00:09:37.817 [2024-11-28 11:40:07.744522] Starting SPDK v25.01-pre git sha1 35cd3e84d / DPDK 24.11.0-rc4 initialization... 00:09:37.817 [2024-11-28 11:40:07.744642] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid76006 ] 00:09:37.817 [2024-11-28 11:40:07.869157] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.11.0-rc4 is used. There is no support for it in SPDK. Enabled only for validation. 
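The JSON config printed above creates two malloc bdevs, malloc0 and malloc1, each 512 blocks of 512 bytes, and the copy is then asked to --seek=513 blocks into the 512-block output. The same setup can be replayed by hand if the config is written to a regular file (a sketch; the /tmp path is only an example, and it assumes --json accepts an ordinary file path just as it accepts the /dev/fd path used here):

cat > /tmp/two_mallocs.json <<'EOF'
{"subsystems":[{"subsystem":"bdev","config":[
  {"params":{"block_size":512,"num_blocks":512,"name":"malloc0"},"method":"bdev_malloc_create"},
  {"params":{"block_size":512,"num_blocks":512,"name":"malloc1"},"method":"bdev_malloc_create"},
  {"method":"bdev_wait_for_examine"}]}]}
EOF
/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd \
    --ib=malloc0 --ob=malloc1 --seek=513 --bs=512 --json /tmp/two_mallocs.json
# expected: "--seek value too big (513) - only 512 blocks available in output"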
00:09:37.817 [2024-11-28 11:40:07.897104] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:37.817 [2024-11-28 11:40:07.934257] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:09:38.076 [2024-11-28 11:40:07.992913] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:09:38.076 [2024-11-28 11:40:08.058031] spdk_dd.c:1145:dd_run: *ERROR*: --seek value too big (513) - only 512 blocks available in output 00:09:38.076 [2024-11-28 11:40:08.058158] app.c:1064:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:09:38.076 [2024-11-28 11:40:08.196006] spdk_dd.c:1536:main: *ERROR*: Error occurred while performing copy 00:09:38.335 11:40:08 spdk_dd.spdk_dd_negative.dd_invalid_seek -- common/autotest_common.sh@655 -- # es=228 00:09:38.335 11:40:08 spdk_dd.spdk_dd_negative.dd_invalid_seek -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:09:38.335 11:40:08 spdk_dd.spdk_dd_negative.dd_invalid_seek -- common/autotest_common.sh@664 -- # es=100 00:09:38.335 11:40:08 spdk_dd.spdk_dd_negative.dd_invalid_seek -- common/autotest_common.sh@665 -- # case "$es" in 00:09:38.335 11:40:08 spdk_dd.spdk_dd_negative.dd_invalid_seek -- common/autotest_common.sh@672 -- # es=1 00:09:38.335 11:40:08 spdk_dd.spdk_dd_negative.dd_invalid_seek -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:09:38.335 00:09:38.335 real 0m0.572s 00:09:38.335 user 0m0.367s 00:09:38.335 sys 0m0.162s 00:09:38.335 11:40:08 spdk_dd.spdk_dd_negative.dd_invalid_seek -- common/autotest_common.sh@1130 -- # xtrace_disable 00:09:38.335 ************************************ 00:09:38.335 END TEST dd_invalid_seek 00:09:38.335 ************************************ 00:09:38.335 11:40:08 spdk_dd.spdk_dd_negative.dd_invalid_seek -- common/autotest_common.sh@10 -- # set +x 00:09:38.336 11:40:08 spdk_dd.spdk_dd_negative -- dd/negative_dd.sh@229 -- # run_test dd_invalid_skip invalid_skip 00:09:38.336 11:40:08 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:09:38.336 11:40:08 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1111 -- # xtrace_disable 00:09:38.336 11:40:08 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@10 -- # set +x 00:09:38.336 ************************************ 00:09:38.336 START TEST dd_invalid_skip 00:09:38.336 ************************************ 00:09:38.336 11:40:08 spdk_dd.spdk_dd_negative.dd_invalid_skip -- common/autotest_common.sh@1129 -- # invalid_skip 00:09:38.336 11:40:08 spdk_dd.spdk_dd_negative.dd_invalid_skip -- dd/negative_dd.sh@125 -- # local mbdev0=malloc0 mbdev0_b=512 mbdev0_bs=512 00:09:38.336 11:40:08 spdk_dd.spdk_dd_negative.dd_invalid_skip -- dd/negative_dd.sh@126 -- # method_bdev_malloc_create_0=(['name']='malloc0' ['num_blocks']='512' ['block_size']='512') 00:09:38.336 11:40:08 spdk_dd.spdk_dd_negative.dd_invalid_skip -- dd/negative_dd.sh@126 -- # local -A method_bdev_malloc_create_0 00:09:38.336 11:40:08 spdk_dd.spdk_dd_negative.dd_invalid_skip -- dd/negative_dd.sh@131 -- # local mbdev1=malloc1 mbdev1_b=512 mbdev1_bs=512 00:09:38.336 11:40:08 spdk_dd.spdk_dd_negative.dd_invalid_skip -- dd/negative_dd.sh@132 -- # method_bdev_malloc_create_1=(['name']='malloc1' ['num_blocks']='512' ['block_size']='512') 00:09:38.336 11:40:08 spdk_dd.spdk_dd_negative.dd_invalid_skip -- dd/negative_dd.sh@132 -- # local -A method_bdev_malloc_create_1 00:09:38.336 11:40:08 spdk_dd.spdk_dd_negative.dd_invalid_skip -- dd/negative_dd.sh@138 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=malloc0 
--ob=malloc1 --skip=513 --json /dev/fd/62 --bs=512 00:09:38.336 11:40:08 spdk_dd.spdk_dd_negative.dd_invalid_skip -- common/autotest_common.sh@652 -- # local es=0 00:09:38.336 11:40:08 spdk_dd.spdk_dd_negative.dd_invalid_skip -- dd/negative_dd.sh@138 -- # gen_conf 00:09:38.336 11:40:08 spdk_dd.spdk_dd_negative.dd_invalid_skip -- common/autotest_common.sh@654 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=malloc0 --ob=malloc1 --skip=513 --json /dev/fd/62 --bs=512 00:09:38.336 11:40:08 spdk_dd.spdk_dd_negative.dd_invalid_skip -- dd/common.sh@31 -- # xtrace_disable 00:09:38.336 11:40:08 spdk_dd.spdk_dd_negative.dd_invalid_skip -- common/autotest_common.sh@640 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:09:38.336 11:40:08 spdk_dd.spdk_dd_negative.dd_invalid_skip -- common/autotest_common.sh@10 -- # set +x 00:09:38.336 11:40:08 spdk_dd.spdk_dd_negative.dd_invalid_skip -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:09:38.336 11:40:08 spdk_dd.spdk_dd_negative.dd_invalid_skip -- common/autotest_common.sh@644 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:09:38.336 11:40:08 spdk_dd.spdk_dd_negative.dd_invalid_skip -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:09:38.336 11:40:08 spdk_dd.spdk_dd_negative.dd_invalid_skip -- common/autotest_common.sh@646 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:09:38.336 11:40:08 spdk_dd.spdk_dd_negative.dd_invalid_skip -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:09:38.336 11:40:08 spdk_dd.spdk_dd_negative.dd_invalid_skip -- common/autotest_common.sh@646 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:09:38.336 11:40:08 spdk_dd.spdk_dd_negative.dd_invalid_skip -- common/autotest_common.sh@646 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:09:38.336 11:40:08 spdk_dd.spdk_dd_negative.dd_invalid_skip -- common/autotest_common.sh@655 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=malloc0 --ob=malloc1 --skip=513 --json /dev/fd/62 --bs=512 00:09:38.336 { 00:09:38.336 "subsystems": [ 00:09:38.336 { 00:09:38.336 "subsystem": "bdev", 00:09:38.336 "config": [ 00:09:38.336 { 00:09:38.336 "params": { 00:09:38.336 "block_size": 512, 00:09:38.336 "num_blocks": 512, 00:09:38.336 "name": "malloc0" 00:09:38.336 }, 00:09:38.336 "method": "bdev_malloc_create" 00:09:38.336 }, 00:09:38.336 { 00:09:38.336 "params": { 00:09:38.336 "block_size": 512, 00:09:38.336 "num_blocks": 512, 00:09:38.336 "name": "malloc1" 00:09:38.336 }, 00:09:38.336 "method": "bdev_malloc_create" 00:09:38.336 }, 00:09:38.336 { 00:09:38.336 "method": "bdev_wait_for_examine" 00:09:38.336 } 00:09:38.336 ] 00:09:38.336 } 00:09:38.336 ] 00:09:38.336 } 00:09:38.336 [2024-11-28 11:40:08.372599] Starting SPDK v25.01-pre git sha1 35cd3e84d / DPDK 24.11.0-rc4 initialization... 00:09:38.336 [2024-11-28 11:40:08.372695] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid76041 ] 00:09:38.596 [2024-11-28 11:40:08.498190] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.11.0-rc4 is used. There is no support for it in SPDK. Enabled only for validation. 
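dd_invalid_skip mirrors the seek case: the same two 512-block malloc bdevs, but the out-of-range offset is applied to the input side via --skip=513, so the rejection below comes from the input check instead. Reusing the config file sketched for the seek case (hypothetical /tmp path), the by-hand form would be:

/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd \
    --ib=malloc0 --ob=malloc1 --skip=513 --bs=512 --json /tmp/two_mallocs.json
# expected: "--skip value too big (513) - only 512 blocks available in input"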
00:09:38.596 [2024-11-28 11:40:08.523573] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:38.596 [2024-11-28 11:40:08.574774] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:09:38.596 [2024-11-28 11:40:08.633988] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:09:38.596 [2024-11-28 11:40:08.697482] spdk_dd.c:1102:dd_run: *ERROR*: --skip value too big (513) - only 512 blocks available in input 00:09:38.596 [2024-11-28 11:40:08.697555] app.c:1064:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:09:38.856 [2024-11-28 11:40:08.828582] spdk_dd.c:1536:main: *ERROR*: Error occurred while performing copy 00:09:38.856 11:40:08 spdk_dd.spdk_dd_negative.dd_invalid_skip -- common/autotest_common.sh@655 -- # es=228 00:09:38.856 11:40:08 spdk_dd.spdk_dd_negative.dd_invalid_skip -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:09:38.856 11:40:08 spdk_dd.spdk_dd_negative.dd_invalid_skip -- common/autotest_common.sh@664 -- # es=100 00:09:38.856 ************************************ 00:09:38.856 END TEST dd_invalid_skip 00:09:38.856 ************************************ 00:09:38.856 11:40:08 spdk_dd.spdk_dd_negative.dd_invalid_skip -- common/autotest_common.sh@665 -- # case "$es" in 00:09:38.856 11:40:08 spdk_dd.spdk_dd_negative.dd_invalid_skip -- common/autotest_common.sh@672 -- # es=1 00:09:38.856 11:40:08 spdk_dd.spdk_dd_negative.dd_invalid_skip -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:09:38.856 00:09:38.856 real 0m0.577s 00:09:38.856 user 0m0.360s 00:09:38.856 sys 0m0.167s 00:09:38.856 11:40:08 spdk_dd.spdk_dd_negative.dd_invalid_skip -- common/autotest_common.sh@1130 -- # xtrace_disable 00:09:38.856 11:40:08 spdk_dd.spdk_dd_negative.dd_invalid_skip -- common/autotest_common.sh@10 -- # set +x 00:09:38.856 11:40:08 spdk_dd.spdk_dd_negative -- dd/negative_dd.sh@230 -- # run_test dd_invalid_input_count invalid_input_count 00:09:38.856 11:40:08 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:09:38.856 11:40:08 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1111 -- # xtrace_disable 00:09:38.856 11:40:08 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@10 -- # set +x 00:09:38.856 ************************************ 00:09:38.856 START TEST dd_invalid_input_count 00:09:38.856 ************************************ 00:09:38.856 11:40:08 spdk_dd.spdk_dd_negative.dd_invalid_input_count -- common/autotest_common.sh@1129 -- # invalid_input_count 00:09:38.856 11:40:08 spdk_dd.spdk_dd_negative.dd_invalid_input_count -- dd/negative_dd.sh@149 -- # local mbdev0=malloc0 mbdev0_b=512 mbdev0_bs=512 00:09:38.856 11:40:08 spdk_dd.spdk_dd_negative.dd_invalid_input_count -- dd/negative_dd.sh@150 -- # method_bdev_malloc_create_0=(['name']='malloc0' ['num_blocks']='512' ['block_size']='512') 00:09:38.856 11:40:08 spdk_dd.spdk_dd_negative.dd_invalid_input_count -- dd/negative_dd.sh@150 -- # local -A method_bdev_malloc_create_0 00:09:38.856 11:40:08 spdk_dd.spdk_dd_negative.dd_invalid_input_count -- dd/negative_dd.sh@155 -- # local mbdev1=malloc1 mbdev1_b=512 mbdev1_bs=512 00:09:38.856 11:40:08 spdk_dd.spdk_dd_negative.dd_invalid_input_count -- dd/negative_dd.sh@156 -- # method_bdev_malloc_create_1=(['name']='malloc1' ['num_blocks']='512' ['block_size']='512') 00:09:38.856 11:40:08 spdk_dd.spdk_dd_negative.dd_invalid_input_count -- dd/negative_dd.sh@156 -- # local -A method_bdev_malloc_create_1 00:09:38.856 11:40:08 spdk_dd.spdk_dd_negative.dd_invalid_input_count -- 
dd/negative_dd.sh@162 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=malloc0 --ob=malloc1 --count=513 --json /dev/fd/62 --bs=512 00:09:38.856 11:40:08 spdk_dd.spdk_dd_negative.dd_invalid_input_count -- common/autotest_common.sh@652 -- # local es=0 00:09:38.856 11:40:08 spdk_dd.spdk_dd_negative.dd_invalid_input_count -- common/autotest_common.sh@654 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=malloc0 --ob=malloc1 --count=513 --json /dev/fd/62 --bs=512 00:09:38.856 11:40:08 spdk_dd.spdk_dd_negative.dd_invalid_input_count -- common/autotest_common.sh@640 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:09:38.856 11:40:08 spdk_dd.spdk_dd_negative.dd_invalid_input_count -- dd/negative_dd.sh@162 -- # gen_conf 00:09:38.856 11:40:08 spdk_dd.spdk_dd_negative.dd_invalid_input_count -- dd/common.sh@31 -- # xtrace_disable 00:09:38.856 11:40:08 spdk_dd.spdk_dd_negative.dd_invalid_input_count -- common/autotest_common.sh@10 -- # set +x 00:09:38.856 11:40:08 spdk_dd.spdk_dd_negative.dd_invalid_input_count -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:09:38.856 11:40:08 spdk_dd.spdk_dd_negative.dd_invalid_input_count -- common/autotest_common.sh@644 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:09:38.856 11:40:08 spdk_dd.spdk_dd_negative.dd_invalid_input_count -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:09:38.856 11:40:08 spdk_dd.spdk_dd_negative.dd_invalid_input_count -- common/autotest_common.sh@646 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:09:38.856 11:40:08 spdk_dd.spdk_dd_negative.dd_invalid_input_count -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:09:38.856 11:40:08 spdk_dd.spdk_dd_negative.dd_invalid_input_count -- common/autotest_common.sh@646 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:09:38.856 11:40:08 spdk_dd.spdk_dd_negative.dd_invalid_input_count -- common/autotest_common.sh@646 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:09:38.856 11:40:08 spdk_dd.spdk_dd_negative.dd_invalid_input_count -- common/autotest_common.sh@655 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=malloc0 --ob=malloc1 --count=513 --json /dev/fd/62 --bs=512 00:09:39.116 { 00:09:39.116 "subsystems": [ 00:09:39.116 { 00:09:39.116 "subsystem": "bdev", 00:09:39.116 "config": [ 00:09:39.116 { 00:09:39.116 "params": { 00:09:39.116 "block_size": 512, 00:09:39.116 "num_blocks": 512, 00:09:39.116 "name": "malloc0" 00:09:39.116 }, 00:09:39.116 "method": "bdev_malloc_create" 00:09:39.116 }, 00:09:39.116 { 00:09:39.116 "params": { 00:09:39.116 "block_size": 512, 00:09:39.116 "num_blocks": 512, 00:09:39.116 "name": "malloc1" 00:09:39.116 }, 00:09:39.116 "method": "bdev_malloc_create" 00:09:39.116 }, 00:09:39.116 { 00:09:39.116 "method": "bdev_wait_for_examine" 00:09:39.116 } 00:09:39.116 ] 00:09:39.116 } 00:09:39.116 ] 00:09:39.116 } 00:09:39.116 [2024-11-28 11:40:09.002371] Starting SPDK v25.01-pre git sha1 35cd3e84d / DPDK 24.11.0-rc4 initialization... 00:09:39.116 [2024-11-28 11:40:09.002618] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid76074 ] 00:09:39.116 [2024-11-28 11:40:09.128432] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.11.0-rc4 is used. There is no support for it in SPDK. 
Enabled only for validation. 00:09:39.116 [2024-11-28 11:40:09.154653] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:39.116 [2024-11-28 11:40:09.216077] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:09:39.376 [2024-11-28 11:40:09.277965] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:09:39.376 [2024-11-28 11:40:09.342263] spdk_dd.c:1110:dd_run: *ERROR*: --count value too big (513) - only 512 blocks available from input 00:09:39.376 [2024-11-28 11:40:09.342365] app.c:1064:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:09:39.376 [2024-11-28 11:40:09.468800] spdk_dd.c:1536:main: *ERROR*: Error occurred while performing copy 00:09:39.636 ************************************ 00:09:39.636 END TEST dd_invalid_input_count 00:09:39.636 ************************************ 00:09:39.636 11:40:09 spdk_dd.spdk_dd_negative.dd_invalid_input_count -- common/autotest_common.sh@655 -- # es=228 00:09:39.636 11:40:09 spdk_dd.spdk_dd_negative.dd_invalid_input_count -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:09:39.636 11:40:09 spdk_dd.spdk_dd_negative.dd_invalid_input_count -- common/autotest_common.sh@664 -- # es=100 00:09:39.636 11:40:09 spdk_dd.spdk_dd_negative.dd_invalid_input_count -- common/autotest_common.sh@665 -- # case "$es" in 00:09:39.636 11:40:09 spdk_dd.spdk_dd_negative.dd_invalid_input_count -- common/autotest_common.sh@672 -- # es=1 00:09:39.636 11:40:09 spdk_dd.spdk_dd_negative.dd_invalid_input_count -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:09:39.636 00:09:39.636 real 0m0.591s 00:09:39.636 user 0m0.358s 00:09:39.636 sys 0m0.187s 00:09:39.636 11:40:09 spdk_dd.spdk_dd_negative.dd_invalid_input_count -- common/autotest_common.sh@1130 -- # xtrace_disable 00:09:39.636 11:40:09 spdk_dd.spdk_dd_negative.dd_invalid_input_count -- common/autotest_common.sh@10 -- # set +x 00:09:39.636 11:40:09 spdk_dd.spdk_dd_negative -- dd/negative_dd.sh@231 -- # run_test dd_invalid_output_count invalid_output_count 00:09:39.636 11:40:09 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:09:39.636 11:40:09 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1111 -- # xtrace_disable 00:09:39.636 11:40:09 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@10 -- # set +x 00:09:39.636 ************************************ 00:09:39.636 START TEST dd_invalid_output_count 00:09:39.636 ************************************ 00:09:39.636 11:40:09 spdk_dd.spdk_dd_negative.dd_invalid_output_count -- common/autotest_common.sh@1129 -- # invalid_output_count 00:09:39.636 11:40:09 spdk_dd.spdk_dd_negative.dd_invalid_output_count -- dd/negative_dd.sh@173 -- # local mbdev0=malloc0 mbdev0_b=512 mbdev0_bs=512 00:09:39.636 11:40:09 spdk_dd.spdk_dd_negative.dd_invalid_output_count -- dd/negative_dd.sh@174 -- # method_bdev_malloc_create_0=(['name']='malloc0' ['num_blocks']='512' ['block_size']='512') 00:09:39.636 11:40:09 spdk_dd.spdk_dd_negative.dd_invalid_output_count -- dd/negative_dd.sh@174 -- # local -A method_bdev_malloc_create_0 00:09:39.636 11:40:09 spdk_dd.spdk_dd_negative.dd_invalid_output_count -- dd/negative_dd.sh@180 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --ob=malloc0 --count=513 --json /dev/fd/62 --bs=512 00:09:39.636 11:40:09 spdk_dd.spdk_dd_negative.dd_invalid_output_count -- common/autotest_common.sh@652 -- # local es=0 00:09:39.636 11:40:09 spdk_dd.spdk_dd_negative.dd_invalid_output_count -- 
common/autotest_common.sh@654 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --ob=malloc0 --count=513 --json /dev/fd/62 --bs=512 00:09:39.636 11:40:09 spdk_dd.spdk_dd_negative.dd_invalid_output_count -- common/autotest_common.sh@640 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:09:39.636 11:40:09 spdk_dd.spdk_dd_negative.dd_invalid_output_count -- dd/negative_dd.sh@180 -- # gen_conf 00:09:39.636 11:40:09 spdk_dd.spdk_dd_negative.dd_invalid_output_count -- dd/common.sh@31 -- # xtrace_disable 00:09:39.636 11:40:09 spdk_dd.spdk_dd_negative.dd_invalid_output_count -- common/autotest_common.sh@10 -- # set +x 00:09:39.636 11:40:09 spdk_dd.spdk_dd_negative.dd_invalid_output_count -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:09:39.636 11:40:09 spdk_dd.spdk_dd_negative.dd_invalid_output_count -- common/autotest_common.sh@644 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:09:39.636 11:40:09 spdk_dd.spdk_dd_negative.dd_invalid_output_count -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:09:39.636 11:40:09 spdk_dd.spdk_dd_negative.dd_invalid_output_count -- common/autotest_common.sh@646 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:09:39.636 11:40:09 spdk_dd.spdk_dd_negative.dd_invalid_output_count -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:09:39.636 11:40:09 spdk_dd.spdk_dd_negative.dd_invalid_output_count -- common/autotest_common.sh@646 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:09:39.636 11:40:09 spdk_dd.spdk_dd_negative.dd_invalid_output_count -- common/autotest_common.sh@646 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:09:39.636 11:40:09 spdk_dd.spdk_dd_negative.dd_invalid_output_count -- common/autotest_common.sh@655 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --ob=malloc0 --count=513 --json /dev/fd/62 --bs=512 00:09:39.636 { 00:09:39.636 "subsystems": [ 00:09:39.636 { 00:09:39.636 "subsystem": "bdev", 00:09:39.636 "config": [ 00:09:39.636 { 00:09:39.636 "params": { 00:09:39.636 "block_size": 512, 00:09:39.636 "num_blocks": 512, 00:09:39.636 "name": "malloc0" 00:09:39.636 }, 00:09:39.636 "method": "bdev_malloc_create" 00:09:39.636 }, 00:09:39.636 { 00:09:39.636 "method": "bdev_wait_for_examine" 00:09:39.636 } 00:09:39.636 ] 00:09:39.636 } 00:09:39.636 ] 00:09:39.636 } 00:09:39.636 [2024-11-28 11:40:09.649348] Starting SPDK v25.01-pre git sha1 35cd3e84d / DPDK 24.11.0-rc4 initialization... 00:09:39.636 [2024-11-28 11:40:09.649443] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid76108 ] 00:09:39.896 [2024-11-28 11:40:09.774707] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.11.0-rc4 is used. There is no support for it in SPDK. Enabled only for validation. 
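This case creates only one 512-block malloc bdev (malloc0), reads from the dd.dump0 file, and asks for --count=513, more blocks than the output can hold, so the error below comes from the output-side count check. A by-hand sketch, assuming the single-bdev config shown above has been saved to a hypothetical /tmp/one_malloc.json:

/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd \
    --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 \
    --ob=malloc0 --count=513 --bs=512 --json /tmp/one_malloc.json
# expected: "--count value too big (513) - only 512 blocks available in output"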
00:09:39.896 [2024-11-28 11:40:09.803954] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:39.896 [2024-11-28 11:40:09.845777] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:09:39.896 [2024-11-28 11:40:09.901295] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:09:39.896 [2024-11-28 11:40:09.955731] spdk_dd.c:1152:dd_run: *ERROR*: --count value too big (513) - only 512 blocks available in output 00:09:39.896 [2024-11-28 11:40:09.955802] app.c:1064:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:09:40.156 [2024-11-28 11:40:10.081593] spdk_dd.c:1536:main: *ERROR*: Error occurred while performing copy 00:09:40.156 11:40:10 spdk_dd.spdk_dd_negative.dd_invalid_output_count -- common/autotest_common.sh@655 -- # es=228 00:09:40.156 11:40:10 spdk_dd.spdk_dd_negative.dd_invalid_output_count -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:09:40.156 11:40:10 spdk_dd.spdk_dd_negative.dd_invalid_output_count -- common/autotest_common.sh@664 -- # es=100 00:09:40.156 11:40:10 spdk_dd.spdk_dd_negative.dd_invalid_output_count -- common/autotest_common.sh@665 -- # case "$es" in 00:09:40.156 11:40:10 spdk_dd.spdk_dd_negative.dd_invalid_output_count -- common/autotest_common.sh@672 -- # es=1 00:09:40.156 11:40:10 spdk_dd.spdk_dd_negative.dd_invalid_output_count -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:09:40.156 00:09:40.156 real 0m0.554s 00:09:40.156 user 0m0.346s 00:09:40.156 sys 0m0.164s 00:09:40.156 11:40:10 spdk_dd.spdk_dd_negative.dd_invalid_output_count -- common/autotest_common.sh@1130 -- # xtrace_disable 00:09:40.156 ************************************ 00:09:40.156 END TEST dd_invalid_output_count 00:09:40.156 ************************************ 00:09:40.156 11:40:10 spdk_dd.spdk_dd_negative.dd_invalid_output_count -- common/autotest_common.sh@10 -- # set +x 00:09:40.156 11:40:10 spdk_dd.spdk_dd_negative -- dd/negative_dd.sh@232 -- # run_test dd_bs_not_multiple bs_not_multiple 00:09:40.156 11:40:10 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:09:40.156 11:40:10 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1111 -- # xtrace_disable 00:09:40.156 11:40:10 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@10 -- # set +x 00:09:40.156 ************************************ 00:09:40.156 START TEST dd_bs_not_multiple 00:09:40.156 ************************************ 00:09:40.156 11:40:10 spdk_dd.spdk_dd_negative.dd_bs_not_multiple -- common/autotest_common.sh@1129 -- # bs_not_multiple 00:09:40.156 11:40:10 spdk_dd.spdk_dd_negative.dd_bs_not_multiple -- dd/negative_dd.sh@190 -- # local mbdev0=malloc0 mbdev0_b=512 mbdev0_bs=512 00:09:40.156 11:40:10 spdk_dd.spdk_dd_negative.dd_bs_not_multiple -- dd/negative_dd.sh@191 -- # method_bdev_malloc_create_0=(['name']='malloc0' ['num_blocks']='512' ['block_size']='512') 00:09:40.156 11:40:10 spdk_dd.spdk_dd_negative.dd_bs_not_multiple -- dd/negative_dd.sh@191 -- # local -A method_bdev_malloc_create_0 00:09:40.156 11:40:10 spdk_dd.spdk_dd_negative.dd_bs_not_multiple -- dd/negative_dd.sh@196 -- # local mbdev1=malloc1 mbdev1_b=512 mbdev1_bs=512 00:09:40.156 11:40:10 spdk_dd.spdk_dd_negative.dd_bs_not_multiple -- dd/negative_dd.sh@197 -- # method_bdev_malloc_create_1=(['name']='malloc1' ['num_blocks']='512' ['block_size']='512') 00:09:40.156 11:40:10 spdk_dd.spdk_dd_negative.dd_bs_not_multiple -- dd/negative_dd.sh@197 -- # local -A method_bdev_malloc_create_1 00:09:40.156 11:40:10 
spdk_dd.spdk_dd_negative.dd_bs_not_multiple -- dd/negative_dd.sh@203 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=malloc0 --ob=malloc1 --bs=513 --json /dev/fd/62 00:09:40.156 11:40:10 spdk_dd.spdk_dd_negative.dd_bs_not_multiple -- common/autotest_common.sh@652 -- # local es=0 00:09:40.156 11:40:10 spdk_dd.spdk_dd_negative.dd_bs_not_multiple -- common/autotest_common.sh@654 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=malloc0 --ob=malloc1 --bs=513 --json /dev/fd/62 00:09:40.156 11:40:10 spdk_dd.spdk_dd_negative.dd_bs_not_multiple -- common/autotest_common.sh@640 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:09:40.156 11:40:10 spdk_dd.spdk_dd_negative.dd_bs_not_multiple -- dd/negative_dd.sh@203 -- # gen_conf 00:09:40.156 11:40:10 spdk_dd.spdk_dd_negative.dd_bs_not_multiple -- dd/common.sh@31 -- # xtrace_disable 00:09:40.156 11:40:10 spdk_dd.spdk_dd_negative.dd_bs_not_multiple -- common/autotest_common.sh@10 -- # set +x 00:09:40.156 11:40:10 spdk_dd.spdk_dd_negative.dd_bs_not_multiple -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:09:40.156 11:40:10 spdk_dd.spdk_dd_negative.dd_bs_not_multiple -- common/autotest_common.sh@644 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:09:40.156 11:40:10 spdk_dd.spdk_dd_negative.dd_bs_not_multiple -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:09:40.156 11:40:10 spdk_dd.spdk_dd_negative.dd_bs_not_multiple -- common/autotest_common.sh@646 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:09:40.156 11:40:10 spdk_dd.spdk_dd_negative.dd_bs_not_multiple -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:09:40.156 11:40:10 spdk_dd.spdk_dd_negative.dd_bs_not_multiple -- common/autotest_common.sh@646 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:09:40.156 11:40:10 spdk_dd.spdk_dd_negative.dd_bs_not_multiple -- common/autotest_common.sh@646 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:09:40.156 11:40:10 spdk_dd.spdk_dd_negative.dd_bs_not_multiple -- common/autotest_common.sh@655 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=malloc0 --ob=malloc1 --bs=513 --json /dev/fd/62 00:09:40.156 { 00:09:40.156 "subsystems": [ 00:09:40.156 { 00:09:40.156 "subsystem": "bdev", 00:09:40.156 "config": [ 00:09:40.156 { 00:09:40.156 "params": { 00:09:40.156 "block_size": 512, 00:09:40.156 "num_blocks": 512, 00:09:40.156 "name": "malloc0" 00:09:40.156 }, 00:09:40.156 "method": "bdev_malloc_create" 00:09:40.156 }, 00:09:40.156 { 00:09:40.156 "params": { 00:09:40.156 "block_size": 512, 00:09:40.156 "num_blocks": 512, 00:09:40.156 "name": "malloc1" 00:09:40.156 }, 00:09:40.156 "method": "bdev_malloc_create" 00:09:40.156 }, 00:09:40.156 { 00:09:40.156 "method": "bdev_wait_for_examine" 00:09:40.156 } 00:09:40.156 ] 00:09:40.156 } 00:09:40.156 ] 00:09:40.156 } 00:09:40.156 [2024-11-28 11:40:10.249568] Starting SPDK v25.01-pre git sha1 35cd3e84d / DPDK 24.11.0-rc4 initialization... 00:09:40.156 [2024-11-28 11:40:10.249666] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid76145 ] 00:09:40.416 [2024-11-28 11:40:10.375637] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.11.0-rc4 is used. There is no support for it in SPDK. Enabled only for validation. 
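The last negative case keeps both malloc bdevs but sets --bs=513, which is not a multiple of the 512-byte native block size of the input bdev, so spdk_dd refuses to start the copy at all (the error appears below). A by-hand sketch with the same hypothetical two-bdev config file:

/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd \
    --ib=malloc0 --ob=malloc1 --bs=513 --json /tmp/two_mallocs.json
# expected: "--bs value must be a multiple of input native block size (512)"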
00:09:40.416 [2024-11-28 11:40:10.398453] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:40.416 [2024-11-28 11:40:10.451817] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:09:40.416 [2024-11-28 11:40:10.510234] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:09:40.675 [2024-11-28 11:40:10.576539] spdk_dd.c:1168:dd_run: *ERROR*: --bs value must be a multiple of input native block size (512) 00:09:40.675 [2024-11-28 11:40:10.576623] app.c:1064:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:09:40.675 [2024-11-28 11:40:10.714925] spdk_dd.c:1536:main: *ERROR*: Error occurred while performing copy 00:09:40.675 11:40:10 spdk_dd.spdk_dd_negative.dd_bs_not_multiple -- common/autotest_common.sh@655 -- # es=234 00:09:40.675 11:40:10 spdk_dd.spdk_dd_negative.dd_bs_not_multiple -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:09:40.675 11:40:10 spdk_dd.spdk_dd_negative.dd_bs_not_multiple -- common/autotest_common.sh@664 -- # es=106 00:09:40.675 11:40:10 spdk_dd.spdk_dd_negative.dd_bs_not_multiple -- common/autotest_common.sh@665 -- # case "$es" in 00:09:40.675 11:40:10 spdk_dd.spdk_dd_negative.dd_bs_not_multiple -- common/autotest_common.sh@672 -- # es=1 00:09:40.675 ************************************ 00:09:40.675 END TEST dd_bs_not_multiple 00:09:40.675 ************************************ 00:09:40.675 11:40:10 spdk_dd.spdk_dd_negative.dd_bs_not_multiple -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:09:40.675 00:09:40.675 real 0m0.592s 00:09:40.675 user 0m0.371s 00:09:40.675 sys 0m0.177s 00:09:40.675 11:40:10 spdk_dd.spdk_dd_negative.dd_bs_not_multiple -- common/autotest_common.sh@1130 -- # xtrace_disable 00:09:40.675 11:40:10 spdk_dd.spdk_dd_negative.dd_bs_not_multiple -- common/autotest_common.sh@10 -- # set +x 00:09:40.934 ************************************ 00:09:40.934 END TEST spdk_dd_negative 00:09:40.934 ************************************ 00:09:40.934 00:09:40.934 real 0m6.142s 00:09:40.934 user 0m3.421s 00:09:40.934 sys 0m2.110s 00:09:40.934 11:40:10 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1130 -- # xtrace_disable 00:09:40.934 11:40:10 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@10 -- # set +x 00:09:40.934 ************************************ 00:09:40.934 END TEST spdk_dd 00:09:40.934 ************************************ 00:09:40.934 00:09:40.934 real 1m19.065s 00:09:40.934 user 0m49.781s 00:09:40.934 sys 0m36.635s 00:09:40.934 11:40:10 spdk_dd -- common/autotest_common.sh@1130 -- # xtrace_disable 00:09:40.934 11:40:10 spdk_dd -- common/autotest_common.sh@10 -- # set +x 00:09:40.934 11:40:10 -- spdk/autotest.sh@207 -- # '[' 0 -eq 1 ']' 00:09:40.934 11:40:10 -- spdk/autotest.sh@256 -- # '[' 0 -eq 1 ']' 00:09:40.934 11:40:10 -- spdk/autotest.sh@260 -- # timing_exit lib 00:09:40.934 11:40:10 -- common/autotest_common.sh@732 -- # xtrace_disable 00:09:40.934 11:40:10 -- common/autotest_common.sh@10 -- # set +x 00:09:40.934 11:40:10 -- spdk/autotest.sh@262 -- # '[' 0 -eq 1 ']' 00:09:40.935 11:40:10 -- spdk/autotest.sh@267 -- # '[' 0 -eq 1 ']' 00:09:40.935 11:40:10 -- spdk/autotest.sh@276 -- # '[' 1 -eq 1 ']' 00:09:40.935 11:40:10 -- spdk/autotest.sh@277 -- # export NET_TYPE 00:09:40.935 11:40:10 -- spdk/autotest.sh@280 -- # '[' tcp = rdma ']' 00:09:40.935 11:40:10 -- spdk/autotest.sh@283 -- # '[' tcp = tcp ']' 00:09:40.935 11:40:10 -- spdk/autotest.sh@284 -- # run_test nvmf_tcp /home/vagrant/spdk_repo/spdk/test/nvmf/nvmf.sh --transport=tcp 00:09:40.935 11:40:10 
-- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:09:40.935 11:40:10 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:09:40.935 11:40:10 -- common/autotest_common.sh@10 -- # set +x 00:09:40.935 ************************************ 00:09:40.935 START TEST nvmf_tcp 00:09:40.935 ************************************ 00:09:40.935 11:40:10 nvmf_tcp -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/nvmf.sh --transport=tcp 00:09:40.935 * Looking for test storage... 00:09:40.935 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf 00:09:40.935 11:40:11 nvmf_tcp -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:09:40.935 11:40:11 nvmf_tcp -- common/autotest_common.sh@1693 -- # lcov --version 00:09:40.935 11:40:11 nvmf_tcp -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:09:41.193 11:40:11 nvmf_tcp -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:09:41.193 11:40:11 nvmf_tcp -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:09:41.193 11:40:11 nvmf_tcp -- scripts/common.sh@333 -- # local ver1 ver1_l 00:09:41.193 11:40:11 nvmf_tcp -- scripts/common.sh@334 -- # local ver2 ver2_l 00:09:41.193 11:40:11 nvmf_tcp -- scripts/common.sh@336 -- # IFS=.-: 00:09:41.193 11:40:11 nvmf_tcp -- scripts/common.sh@336 -- # read -ra ver1 00:09:41.193 11:40:11 nvmf_tcp -- scripts/common.sh@337 -- # IFS=.-: 00:09:41.193 11:40:11 nvmf_tcp -- scripts/common.sh@337 -- # read -ra ver2 00:09:41.193 11:40:11 nvmf_tcp -- scripts/common.sh@338 -- # local 'op=<' 00:09:41.193 11:40:11 nvmf_tcp -- scripts/common.sh@340 -- # ver1_l=2 00:09:41.193 11:40:11 nvmf_tcp -- scripts/common.sh@341 -- # ver2_l=1 00:09:41.193 11:40:11 nvmf_tcp -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:09:41.193 11:40:11 nvmf_tcp -- scripts/common.sh@344 -- # case "$op" in 00:09:41.193 11:40:11 nvmf_tcp -- scripts/common.sh@345 -- # : 1 00:09:41.193 11:40:11 nvmf_tcp -- scripts/common.sh@364 -- # (( v = 0 )) 00:09:41.193 11:40:11 nvmf_tcp -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:09:41.193 11:40:11 nvmf_tcp -- scripts/common.sh@365 -- # decimal 1 00:09:41.193 11:40:11 nvmf_tcp -- scripts/common.sh@353 -- # local d=1 00:09:41.193 11:40:11 nvmf_tcp -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:09:41.193 11:40:11 nvmf_tcp -- scripts/common.sh@355 -- # echo 1 00:09:41.193 11:40:11 nvmf_tcp -- scripts/common.sh@365 -- # ver1[v]=1 00:09:41.193 11:40:11 nvmf_tcp -- scripts/common.sh@366 -- # decimal 2 00:09:41.193 11:40:11 nvmf_tcp -- scripts/common.sh@353 -- # local d=2 00:09:41.193 11:40:11 nvmf_tcp -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:09:41.193 11:40:11 nvmf_tcp -- scripts/common.sh@355 -- # echo 2 00:09:41.193 11:40:11 nvmf_tcp -- scripts/common.sh@366 -- # ver2[v]=2 00:09:41.193 11:40:11 nvmf_tcp -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:09:41.193 11:40:11 nvmf_tcp -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:09:41.193 11:40:11 nvmf_tcp -- scripts/common.sh@368 -- # return 0 00:09:41.193 11:40:11 nvmf_tcp -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:09:41.193 11:40:11 nvmf_tcp -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:09:41.193 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:41.193 --rc genhtml_branch_coverage=1 00:09:41.193 --rc genhtml_function_coverage=1 00:09:41.193 --rc genhtml_legend=1 00:09:41.193 --rc geninfo_all_blocks=1 00:09:41.193 --rc geninfo_unexecuted_blocks=1 00:09:41.193 00:09:41.193 ' 00:09:41.193 11:40:11 nvmf_tcp -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:09:41.193 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:41.193 --rc genhtml_branch_coverage=1 00:09:41.193 --rc genhtml_function_coverage=1 00:09:41.193 --rc genhtml_legend=1 00:09:41.193 --rc geninfo_all_blocks=1 00:09:41.193 --rc geninfo_unexecuted_blocks=1 00:09:41.193 00:09:41.193 ' 00:09:41.193 11:40:11 nvmf_tcp -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:09:41.193 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:41.193 --rc genhtml_branch_coverage=1 00:09:41.193 --rc genhtml_function_coverage=1 00:09:41.193 --rc genhtml_legend=1 00:09:41.193 --rc geninfo_all_blocks=1 00:09:41.193 --rc geninfo_unexecuted_blocks=1 00:09:41.193 00:09:41.193 ' 00:09:41.193 11:40:11 nvmf_tcp -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:09:41.193 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:41.193 --rc genhtml_branch_coverage=1 00:09:41.193 --rc genhtml_function_coverage=1 00:09:41.193 --rc genhtml_legend=1 00:09:41.193 --rc geninfo_all_blocks=1 00:09:41.193 --rc geninfo_unexecuted_blocks=1 00:09:41.193 00:09:41.193 ' 00:09:41.193 11:40:11 nvmf_tcp -- nvmf/nvmf.sh@10 -- # uname -s 00:09:41.193 11:40:11 nvmf_tcp -- nvmf/nvmf.sh@10 -- # '[' '!' 
Linux = Linux ']' 00:09:41.193 11:40:11 nvmf_tcp -- nvmf/nvmf.sh@14 -- # run_test nvmf_target_core /home/vagrant/spdk_repo/spdk/test/nvmf/nvmf_target_core.sh --transport=tcp 00:09:41.193 11:40:11 nvmf_tcp -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:09:41.193 11:40:11 nvmf_tcp -- common/autotest_common.sh@1111 -- # xtrace_disable 00:09:41.193 11:40:11 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:09:41.193 ************************************ 00:09:41.193 START TEST nvmf_target_core 00:09:41.193 ************************************ 00:09:41.193 11:40:11 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/nvmf_target_core.sh --transport=tcp 00:09:41.193 * Looking for test storage... 00:09:41.193 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf 00:09:41.193 11:40:11 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:09:41.193 11:40:11 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1693 -- # lcov --version 00:09:41.193 11:40:11 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:09:41.453 11:40:11 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:09:41.453 11:40:11 nvmf_tcp.nvmf_target_core -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:09:41.453 11:40:11 nvmf_tcp.nvmf_target_core -- scripts/common.sh@333 -- # local ver1 ver1_l 00:09:41.453 11:40:11 nvmf_tcp.nvmf_target_core -- scripts/common.sh@334 -- # local ver2 ver2_l 00:09:41.453 11:40:11 nvmf_tcp.nvmf_target_core -- scripts/common.sh@336 -- # IFS=.-: 00:09:41.453 11:40:11 nvmf_tcp.nvmf_target_core -- scripts/common.sh@336 -- # read -ra ver1 00:09:41.453 11:40:11 nvmf_tcp.nvmf_target_core -- scripts/common.sh@337 -- # IFS=.-: 00:09:41.453 11:40:11 nvmf_tcp.nvmf_target_core -- scripts/common.sh@337 -- # read -ra ver2 00:09:41.453 11:40:11 nvmf_tcp.nvmf_target_core -- scripts/common.sh@338 -- # local 'op=<' 00:09:41.453 11:40:11 nvmf_tcp.nvmf_target_core -- scripts/common.sh@340 -- # ver1_l=2 00:09:41.453 11:40:11 nvmf_tcp.nvmf_target_core -- scripts/common.sh@341 -- # ver2_l=1 00:09:41.453 11:40:11 nvmf_tcp.nvmf_target_core -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:09:41.453 11:40:11 nvmf_tcp.nvmf_target_core -- scripts/common.sh@344 -- # case "$op" in 00:09:41.453 11:40:11 nvmf_tcp.nvmf_target_core -- scripts/common.sh@345 -- # : 1 00:09:41.453 11:40:11 nvmf_tcp.nvmf_target_core -- scripts/common.sh@364 -- # (( v = 0 )) 00:09:41.453 11:40:11 nvmf_tcp.nvmf_target_core -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:09:41.453 11:40:11 nvmf_tcp.nvmf_target_core -- scripts/common.sh@365 -- # decimal 1 00:09:41.453 11:40:11 nvmf_tcp.nvmf_target_core -- scripts/common.sh@353 -- # local d=1 00:09:41.453 11:40:11 nvmf_tcp.nvmf_target_core -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:09:41.453 11:40:11 nvmf_tcp.nvmf_target_core -- scripts/common.sh@355 -- # echo 1 00:09:41.453 11:40:11 nvmf_tcp.nvmf_target_core -- scripts/common.sh@365 -- # ver1[v]=1 00:09:41.453 11:40:11 nvmf_tcp.nvmf_target_core -- scripts/common.sh@366 -- # decimal 2 00:09:41.453 11:40:11 nvmf_tcp.nvmf_target_core -- scripts/common.sh@353 -- # local d=2 00:09:41.453 11:40:11 nvmf_tcp.nvmf_target_core -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:09:41.453 11:40:11 nvmf_tcp.nvmf_target_core -- scripts/common.sh@355 -- # echo 2 00:09:41.453 11:40:11 nvmf_tcp.nvmf_target_core -- scripts/common.sh@366 -- # ver2[v]=2 00:09:41.453 11:40:11 nvmf_tcp.nvmf_target_core -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:09:41.453 11:40:11 nvmf_tcp.nvmf_target_core -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:09:41.453 11:40:11 nvmf_tcp.nvmf_target_core -- scripts/common.sh@368 -- # return 0 00:09:41.453 11:40:11 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:09:41.453 11:40:11 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:09:41.453 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:41.453 --rc genhtml_branch_coverage=1 00:09:41.453 --rc genhtml_function_coverage=1 00:09:41.453 --rc genhtml_legend=1 00:09:41.453 --rc geninfo_all_blocks=1 00:09:41.453 --rc geninfo_unexecuted_blocks=1 00:09:41.453 00:09:41.453 ' 00:09:41.453 11:40:11 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:09:41.453 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:41.453 --rc genhtml_branch_coverage=1 00:09:41.453 --rc genhtml_function_coverage=1 00:09:41.453 --rc genhtml_legend=1 00:09:41.453 --rc geninfo_all_blocks=1 00:09:41.453 --rc geninfo_unexecuted_blocks=1 00:09:41.453 00:09:41.453 ' 00:09:41.453 11:40:11 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:09:41.453 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:41.453 --rc genhtml_branch_coverage=1 00:09:41.453 --rc genhtml_function_coverage=1 00:09:41.453 --rc genhtml_legend=1 00:09:41.453 --rc geninfo_all_blocks=1 00:09:41.453 --rc geninfo_unexecuted_blocks=1 00:09:41.453 00:09:41.453 ' 00:09:41.453 11:40:11 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:09:41.453 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:41.453 --rc genhtml_branch_coverage=1 00:09:41.453 --rc genhtml_function_coverage=1 00:09:41.453 --rc genhtml_legend=1 00:09:41.453 --rc geninfo_all_blocks=1 00:09:41.453 --rc geninfo_unexecuted_blocks=1 00:09:41.453 00:09:41.453 ' 00:09:41.453 11:40:11 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@10 -- # uname -s 00:09:41.453 11:40:11 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@10 -- # '[' '!' 
Linux = Linux ']' 00:09:41.453 11:40:11 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@14 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:09:41.453 11:40:11 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@7 -- # uname -s 00:09:41.453 11:40:11 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:09:41.453 11:40:11 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:09:41.453 11:40:11 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:09:41.453 11:40:11 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:09:41.453 11:40:11 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:09:41.453 11:40:11 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:09:41.453 11:40:11 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:09:41.453 11:40:11 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:09:41.453 11:40:11 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:09:41.453 11:40:11 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:09:41.453 11:40:11 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:f820f793-c892-4aa4-a8a4-5ed3fda41d6c 00:09:41.453 11:40:11 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@18 -- # NVME_HOSTID=f820f793-c892-4aa4-a8a4-5ed3fda41d6c 00:09:41.453 11:40:11 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:09:41.453 11:40:11 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:09:41.453 11:40:11 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:09:41.453 11:40:11 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:09:41.453 11:40:11 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:09:41.453 11:40:11 nvmf_tcp.nvmf_target_core -- scripts/common.sh@15 -- # shopt -s extglob 00:09:41.453 11:40:11 nvmf_tcp.nvmf_target_core -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:09:41.453 11:40:11 nvmf_tcp.nvmf_target_core -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:09:41.453 11:40:11 nvmf_tcp.nvmf_target_core -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:09:41.453 11:40:11 nvmf_tcp.nvmf_target_core -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:41.453 11:40:11 nvmf_tcp.nvmf_target_core -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 
00:09:41.453 11:40:11 nvmf_tcp.nvmf_target_core -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:41.453 11:40:11 nvmf_tcp.nvmf_target_core -- paths/export.sh@5 -- # export PATH 00:09:41.454 11:40:11 nvmf_tcp.nvmf_target_core -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:41.454 11:40:11 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@51 -- # : 0 00:09:41.454 11:40:11 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:09:41.454 11:40:11 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:09:41.454 11:40:11 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:09:41.454 11:40:11 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:09:41.454 11:40:11 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:09:41.454 11:40:11 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:09:41.454 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:09:41.454 11:40:11 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:09:41.454 11:40:11 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:09:41.454 11:40:11 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@55 -- # have_pci_nics=0 00:09:41.454 11:40:11 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@16 -- # trap 'exit 1' SIGINT SIGTERM EXIT 00:09:41.454 11:40:11 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@18 -- # TEST_ARGS=("$@") 00:09:41.454 11:40:11 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@20 -- # [[ 1 -eq 0 ]] 00:09:41.454 11:40:11 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@26 -- # run_test nvmf_host_management /home/vagrant/spdk_repo/spdk/test/nvmf/target/host_management.sh --transport=tcp 00:09:41.454 11:40:11 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:09:41.454 11:40:11 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1111 -- # xtrace_disable 00:09:41.454 11:40:11 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:09:41.454 ************************************ 00:09:41.454 START TEST nvmf_host_management 00:09:41.454 ************************************ 00:09:41.454 11:40:11 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/host_management.sh --transport=tcp 00:09:41.454 * Looking for test storage... 
00:09:41.454 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:09:41.454 11:40:11 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:09:41.454 11:40:11 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:09:41.454 11:40:11 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1693 -- # lcov --version 00:09:41.454 11:40:11 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:09:41.454 11:40:11 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:09:41.454 11:40:11 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@333 -- # local ver1 ver1_l 00:09:41.454 11:40:11 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@334 -- # local ver2 ver2_l 00:09:41.454 11:40:11 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@336 -- # IFS=.-: 00:09:41.454 11:40:11 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@336 -- # read -ra ver1 00:09:41.454 11:40:11 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@337 -- # IFS=.-: 00:09:41.454 11:40:11 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@337 -- # read -ra ver2 00:09:41.454 11:40:11 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@338 -- # local 'op=<' 00:09:41.454 11:40:11 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@340 -- # ver1_l=2 00:09:41.454 11:40:11 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@341 -- # ver2_l=1 00:09:41.454 11:40:11 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:09:41.454 11:40:11 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@344 -- # case "$op" in 00:09:41.454 11:40:11 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@345 -- # : 1 00:09:41.454 11:40:11 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@364 -- # (( v = 0 )) 00:09:41.454 11:40:11 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:09:41.454 11:40:11 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@365 -- # decimal 1 00:09:41.454 11:40:11 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@353 -- # local d=1 00:09:41.454 11:40:11 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:09:41.454 11:40:11 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@355 -- # echo 1 00:09:41.454 11:40:11 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@365 -- # ver1[v]=1 00:09:41.454 11:40:11 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@366 -- # decimal 2 00:09:41.454 11:40:11 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@353 -- # local d=2 00:09:41.454 11:40:11 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:09:41.454 11:40:11 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@355 -- # echo 2 00:09:41.454 11:40:11 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@366 -- # ver2[v]=2 00:09:41.454 11:40:11 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:09:41.454 11:40:11 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:09:41.454 11:40:11 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@368 -- # return 0 00:09:41.454 11:40:11 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:09:41.454 11:40:11 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:09:41.454 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:41.454 --rc genhtml_branch_coverage=1 00:09:41.454 --rc genhtml_function_coverage=1 00:09:41.454 --rc genhtml_legend=1 00:09:41.454 --rc geninfo_all_blocks=1 00:09:41.454 --rc geninfo_unexecuted_blocks=1 00:09:41.454 00:09:41.454 ' 00:09:41.454 11:40:11 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:09:41.454 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:41.454 --rc genhtml_branch_coverage=1 00:09:41.454 --rc genhtml_function_coverage=1 00:09:41.454 --rc genhtml_legend=1 00:09:41.454 --rc geninfo_all_blocks=1 00:09:41.454 --rc geninfo_unexecuted_blocks=1 00:09:41.454 00:09:41.454 ' 00:09:41.454 11:40:11 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:09:41.454 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:41.454 --rc genhtml_branch_coverage=1 00:09:41.454 --rc genhtml_function_coverage=1 00:09:41.454 --rc genhtml_legend=1 00:09:41.454 --rc geninfo_all_blocks=1 00:09:41.454 --rc geninfo_unexecuted_blocks=1 00:09:41.454 00:09:41.454 ' 00:09:41.454 11:40:11 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:09:41.454 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:41.454 --rc genhtml_branch_coverage=1 00:09:41.454 --rc genhtml_function_coverage=1 00:09:41.454 --rc genhtml_legend=1 00:09:41.454 --rc geninfo_all_blocks=1 00:09:41.454 --rc geninfo_unexecuted_blocks=1 00:09:41.454 00:09:41.454 ' 00:09:41.454 11:40:11 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 
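The trace above is scripts/common.sh deciding whether the installed lcov (1.15 on this host) is older than 2 before exporting the lcov 1.x branch/function coverage flags: both version strings are split on ".-:" and compared element by element. A minimal bash sketch of that check, under the assumption that this is all the helper does; version_lt is an illustrative name, not the actual lt/cmp_versions functions:

# Sketch of the dotted-version comparison traced above: split on ".-:" and
# compare element by element; missing elements count as 0.
version_lt() {                     # returns 0 (true) when $1 < $2
    local IFS='.-:' v=0
    local -a a b
    read -ra a <<< "$1"
    read -ra b <<< "$2"
    while (( v < (${#a[@]} > ${#b[@]} ? ${#a[@]} : ${#b[@]}) )); do
        (( ${a[v]:-0} < ${b[v]:-0} )) && return 0
        (( ${a[v]:-0} > ${b[v]:-0} )) && return 1
        (( ++v ))
    done
    return 1                       # equal versions are not "less than"
}
# e.g. version_lt "$(lcov --version | awk '{print $NF}')" 2 && echo "use lcov 1.x flags"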
00:09:41.714 11:40:11 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@7 -- # uname -s 00:09:41.714 11:40:11 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:09:41.714 11:40:11 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:09:41.714 11:40:11 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:09:41.714 11:40:11 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:09:41.714 11:40:11 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:09:41.714 11:40:11 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:09:41.714 11:40:11 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:09:41.714 11:40:11 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:09:41.714 11:40:11 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:09:41.714 11:40:11 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:09:41.714 11:40:11 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:f820f793-c892-4aa4-a8a4-5ed3fda41d6c 00:09:41.714 11:40:11 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@18 -- # NVME_HOSTID=f820f793-c892-4aa4-a8a4-5ed3fda41d6c 00:09:41.714 11:40:11 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:09:41.714 11:40:11 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:09:41.714 11:40:11 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:09:41.714 11:40:11 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:09:41.714 11:40:11 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:09:41.714 11:40:11 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@15 -- # shopt -s extglob 00:09:41.714 11:40:11 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:09:41.714 11:40:11 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:09:41.714 11:40:11 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:09:41.715 11:40:11 nvmf_tcp.nvmf_target_core.nvmf_host_management -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:41.715 11:40:11 nvmf_tcp.nvmf_target_core.nvmf_host_management -- 
paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:41.715 11:40:11 nvmf_tcp.nvmf_target_core.nvmf_host_management -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:41.715 11:40:11 nvmf_tcp.nvmf_target_core.nvmf_host_management -- paths/export.sh@5 -- # export PATH 00:09:41.715 11:40:11 nvmf_tcp.nvmf_target_core.nvmf_host_management -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:41.715 11:40:11 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@51 -- # : 0 00:09:41.715 11:40:11 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:09:41.715 11:40:11 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:09:41.715 11:40:11 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:09:41.715 11:40:11 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:09:41.715 11:40:11 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:09:41.715 11:40:11 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:09:41.715 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:09:41.715 11:40:11 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:09:41.715 11:40:11 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:09:41.715 11:40:11 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@55 -- # have_pci_nics=0 00:09:41.715 11:40:11 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@11 -- # MALLOC_BDEV_SIZE=64 00:09:41.715 11:40:11 
nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:09:41.715 11:40:11 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@105 -- # nvmftestinit 00:09:41.715 11:40:11 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:09:41.715 11:40:11 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:09:41.715 11:40:11 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@476 -- # prepare_net_devs 00:09:41.715 11:40:11 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@438 -- # local -g is_hw=no 00:09:41.715 11:40:11 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@440 -- # remove_spdk_ns 00:09:41.715 11:40:11 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:09:41.715 11:40:11 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:09:41.715 11:40:11 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:09:41.715 11:40:11 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@442 -- # [[ virt != virt ]] 00:09:41.715 11:40:11 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@444 -- # [[ no == yes ]] 00:09:41.715 11:40:11 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@451 -- # [[ virt == phy ]] 00:09:41.715 11:40:11 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@454 -- # [[ virt == phy-fallback ]] 00:09:41.715 11:40:11 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@459 -- # [[ tcp == tcp ]] 00:09:41.715 11:40:11 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@460 -- # nvmf_veth_init 00:09:41.715 11:40:11 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@145 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:09:41.715 11:40:11 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@146 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.2 00:09:41.715 11:40:11 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@147 -- # NVMF_FIRST_TARGET_IP=10.0.0.3 00:09:41.715 11:40:11 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@148 -- # NVMF_SECOND_TARGET_IP=10.0.0.4 00:09:41.715 11:40:11 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@149 -- # NVMF_INITIATOR_IP=10.0.0.1 00:09:41.715 11:40:11 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@150 -- # NVMF_BRIDGE=nvmf_br 00:09:41.715 11:40:11 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@151 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:09:41.715 11:40:11 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@152 -- # NVMF_INITIATOR_INTERFACE2=nvmf_init_if2 00:09:41.715 11:40:11 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@153 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:09:41.715 11:40:11 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@154 -- # NVMF_INITIATOR_BRIDGE2=nvmf_init_br2 00:09:41.715 11:40:11 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@155 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:09:41.715 11:40:11 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@156 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:09:41.715 11:40:11 nvmf_tcp.nvmf_target_core.nvmf_host_management -- 
nvmf/common.sh@157 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:09:41.715 11:40:11 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@158 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:09:41.715 11:40:11 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@159 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:09:41.715 11:40:11 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@160 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:09:41.715 11:40:11 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@162 -- # ip link set nvmf_init_br nomaster 00:09:41.715 Cannot find device "nvmf_init_br" 00:09:41.715 11:40:11 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@162 -- # true 00:09:41.715 11:40:11 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@163 -- # ip link set nvmf_init_br2 nomaster 00:09:41.715 Cannot find device "nvmf_init_br2" 00:09:41.715 11:40:11 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@163 -- # true 00:09:41.715 11:40:11 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@164 -- # ip link set nvmf_tgt_br nomaster 00:09:41.715 Cannot find device "nvmf_tgt_br" 00:09:41.715 11:40:11 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@164 -- # true 00:09:41.715 11:40:11 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@165 -- # ip link set nvmf_tgt_br2 nomaster 00:09:41.715 Cannot find device "nvmf_tgt_br2" 00:09:41.715 11:40:11 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@165 -- # true 00:09:41.715 11:40:11 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@166 -- # ip link set nvmf_init_br down 00:09:41.715 Cannot find device "nvmf_init_br" 00:09:41.715 11:40:11 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@166 -- # true 00:09:41.715 11:40:11 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@167 -- # ip link set nvmf_init_br2 down 00:09:41.715 Cannot find device "nvmf_init_br2" 00:09:41.715 11:40:11 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@167 -- # true 00:09:41.715 11:40:11 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@168 -- # ip link set nvmf_tgt_br down 00:09:41.715 Cannot find device "nvmf_tgt_br" 00:09:41.715 11:40:11 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@168 -- # true 00:09:41.715 11:40:11 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@169 -- # ip link set nvmf_tgt_br2 down 00:09:41.715 Cannot find device "nvmf_tgt_br2" 00:09:41.715 11:40:11 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@169 -- # true 00:09:41.715 11:40:11 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@170 -- # ip link delete nvmf_br type bridge 00:09:41.715 Cannot find device "nvmf_br" 00:09:41.715 11:40:11 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@170 -- # true 00:09:41.715 11:40:11 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@171 -- # ip link delete nvmf_init_if 00:09:41.716 Cannot find device "nvmf_init_if" 00:09:41.716 11:40:11 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@171 -- # true 00:09:41.716 11:40:11 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@172 -- # ip link delete nvmf_init_if2 00:09:41.716 Cannot find device "nvmf_init_if2" 00:09:41.716 11:40:11 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@172 -- # true 00:09:41.716 11:40:11 nvmf_tcp.nvmf_target_core.nvmf_host_management -- 
nvmf/common.sh@173 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:09:41.716 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:09:41.716 11:40:11 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@173 -- # true 00:09:41.716 11:40:11 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@174 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:09:41.716 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:09:41.716 11:40:11 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@174 -- # true 00:09:41.716 11:40:11 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@177 -- # ip netns add nvmf_tgt_ns_spdk 00:09:41.716 11:40:11 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@180 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:09:41.716 11:40:11 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@181 -- # ip link add nvmf_init_if2 type veth peer name nvmf_init_br2 00:09:41.716 11:40:11 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@182 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:09:41.716 11:40:11 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@183 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:09:41.716 11:40:11 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:09:41.716 11:40:11 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@187 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:09:41.716 11:40:11 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@190 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:09:41.716 11:40:11 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@191 -- # ip addr add 10.0.0.2/24 dev nvmf_init_if2 00:09:41.716 11:40:11 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@192 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if 00:09:41.975 11:40:11 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@193 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2 00:09:41.975 11:40:11 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@196 -- # ip link set nvmf_init_if up 00:09:41.975 11:40:11 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@197 -- # ip link set nvmf_init_if2 up 00:09:41.975 11:40:11 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@198 -- # ip link set nvmf_init_br up 00:09:41.975 11:40:11 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@199 -- # ip link set nvmf_init_br2 up 00:09:41.975 11:40:11 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@200 -- # ip link set nvmf_tgt_br up 00:09:41.975 11:40:11 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@201 -- # ip link set nvmf_tgt_br2 up 00:09:41.976 11:40:11 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@202 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:09:41.976 11:40:11 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@203 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:09:41.976 11:40:11 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@204 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:09:41.976 11:40:11 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@207 -- # ip 
link add nvmf_br type bridge 00:09:41.976 11:40:11 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@208 -- # ip link set nvmf_br up 00:09:41.976 11:40:11 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@211 -- # ip link set nvmf_init_br master nvmf_br 00:09:41.976 11:40:11 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@212 -- # ip link set nvmf_init_br2 master nvmf_br 00:09:41.976 11:40:11 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@213 -- # ip link set nvmf_tgt_br master nvmf_br 00:09:41.976 11:40:12 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@214 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:09:41.976 11:40:12 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@217 -- # ipts -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:09:41.976 11:40:12 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT' 00:09:41.976 11:40:12 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@218 -- # ipts -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT 00:09:41.976 11:40:12 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT' 00:09:41.976 11:40:12 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@219 -- # ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:09:41.976 11:40:12 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@790 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT' 00:09:41.976 11:40:12 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@222 -- # ping -c 1 10.0.0.3 00:09:41.976 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:09:41.976 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.108 ms 00:09:41.976 00:09:41.976 --- 10.0.0.3 ping statistics --- 00:09:41.976 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:09:41.976 rtt min/avg/max/mdev = 0.108/0.108/0.108/0.000 ms 00:09:41.976 11:40:12 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@223 -- # ping -c 1 10.0.0.4 00:09:41.976 PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data. 00:09:41.976 64 bytes from 10.0.0.4: icmp_seq=1 ttl=64 time=0.091 ms 00:09:41.976 00:09:41.976 --- 10.0.0.4 ping statistics --- 00:09:41.976 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:09:41.976 rtt min/avg/max/mdev = 0.091/0.091/0.091/0.000 ms 00:09:41.976 11:40:12 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@224 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:09:41.976 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:09:41.976 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.025 ms 00:09:41.976 00:09:41.976 --- 10.0.0.1 ping statistics --- 00:09:41.976 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:09:41.976 rtt min/avg/max/mdev = 0.025/0.025/0.025/0.000 ms 00:09:41.976 11:40:12 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@225 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.2 00:09:42.235 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:09:42.235 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.059 ms 00:09:42.235 00:09:42.235 --- 10.0.0.2 ping statistics --- 00:09:42.235 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:09:42.235 rtt min/avg/max/mdev = 0.059/0.059/0.059/0.000 ms 00:09:42.235 11:40:12 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@227 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:09:42.235 11:40:12 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@461 -- # return 0 00:09:42.235 11:40:12 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:09:42.235 11:40:12 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:09:42.235 11:40:12 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:09:42.235 11:40:12 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:09:42.235 11:40:12 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:09:42.235 11:40:12 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:09:42.235 11:40:12 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:09:42.235 11:40:12 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@107 -- # nvmf_host_management 00:09:42.235 11:40:12 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@69 -- # starttarget 00:09:42.235 11:40:12 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@16 -- # nvmfappstart -m 0x1E 00:09:42.235 11:40:12 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:09:42.235 11:40:12 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@726 -- # xtrace_disable 00:09:42.235 11:40:12 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:09:42.235 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:09:42.235 11:40:12 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@509 -- # nvmfpid=76486 00:09:42.235 11:40:12 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@510 -- # waitforlisten 76486 00:09:42.235 11:40:12 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@508 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1E 00:09:42.235 11:40:12 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@835 -- # '[' -z 76486 ']' 00:09:42.235 11:40:12 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:09:42.236 11:40:12 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@840 -- # local max_retries=100 00:09:42.236 11:40:12 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
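The "Cannot find device" and "Cannot open network namespace" messages above are the expected cleanup of a host with no leftover test interfaces; nvmf_veth_init then builds the topology the trace shows: a target network namespace, veth pairs for initiator and target, a bridge joining the host-side ends, iptables ACCEPT rules for port 4420, and ping checks across 10.0.0.1-10.0.0.4 before the target is started inside the namespace. Condensed to a single initiator/target pair (the second pair, 10.0.0.2/10.0.0.4, is set up the same way), the commands are:

# One initiator/target veth pair bridged together, as in the trace above.
ip netns add nvmf_tgt_ns_spdk
ip link add nvmf_init_if type veth peer name nvmf_init_br     # initiator side
ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br       # target side
ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk
ip addr add 10.0.0.1/24 dev nvmf_init_if
ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if
ip link set nvmf_init_if up
ip link set nvmf_init_br up
ip link set nvmf_tgt_br up
ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
ip netns exec nvmf_tgt_ns_spdk ip link set lo up
ip link add nvmf_br type bridge
ip link set nvmf_br up
ip link set nvmf_init_br master nvmf_br                       # bridge both halves
ip link set nvmf_tgt_br master nvmf_br
iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT
ping -c 1 10.0.0.3                                            # initiator -> target
ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1             # target -> initiator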
00:09:42.236 11:40:12 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@844 -- # xtrace_disable 00:09:42.236 11:40:12 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:09:42.236 [2024-11-28 11:40:12.196530] Starting SPDK v25.01-pre git sha1 35cd3e84d / DPDK 24.11.0-rc4 initialization... 00:09:42.236 [2024-11-28 11:40:12.196845] [ DPDK EAL parameters: nvmf -c 0x1E --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:09:42.236 [2024-11-28 11:40:12.327263] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.11.0-rc4 is used. There is no support for it in SPDK. Enabled only for validation. 00:09:42.495 [2024-11-28 11:40:12.360461] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:09:42.495 [2024-11-28 11:40:12.418906] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:09:42.495 [2024-11-28 11:40:12.419274] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:09:42.495 [2024-11-28 11:40:12.419651] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:09:42.495 [2024-11-28 11:40:12.419849] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:09:42.495 [2024-11-28 11:40:12.419974] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:09:42.495 [2024-11-28 11:40:12.421332] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:09:42.495 [2024-11-28 11:40:12.421562] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:09:42.495 [2024-11-28 11:40:12.421715] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 4 00:09:42.495 [2024-11-28 11:40:12.421721] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:09:42.495 [2024-11-28 11:40:12.485092] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:09:43.431 11:40:13 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:09:43.431 11:40:13 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@868 -- # return 0 00:09:43.431 11:40:13 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:09:43.431 11:40:13 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@732 -- # xtrace_disable 00:09:43.431 11:40:13 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:09:43.431 11:40:13 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:09:43.431 11:40:13 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@18 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:09:43.431 11:40:13 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:43.431 11:40:13 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:09:43.431 [2024-11-28 11:40:13.274024] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:09:43.431 11:40:13 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@591 -- # [[ 0 == 0 
]] 00:09:43.431 11:40:13 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@20 -- # timing_enter create_subsystem 00:09:43.431 11:40:13 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@726 -- # xtrace_disable 00:09:43.431 11:40:13 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:09:43.431 11:40:13 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@22 -- # rm -rf /home/vagrant/spdk_repo/spdk/test/nvmf/target/rpcs.txt 00:09:43.431 11:40:13 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@23 -- # cat 00:09:43.431 11:40:13 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@30 -- # rpc_cmd 00:09:43.431 11:40:13 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:43.431 11:40:13 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:09:43.431 Malloc0 00:09:43.431 [2024-11-28 11:40:13.347770] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:09:43.431 11:40:13 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:43.431 11:40:13 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@31 -- # timing_exit create_subsystems 00:09:43.431 11:40:13 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@732 -- # xtrace_disable 00:09:43.431 11:40:13 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:09:43.431 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:09:43.431 11:40:13 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@73 -- # perfpid=76546 00:09:43.431 11:40:13 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@74 -- # waitforlisten 76546 /var/tmp/bdevperf.sock 00:09:43.431 11:40:13 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@835 -- # '[' -z 76546 ']' 00:09:43.431 11:40:13 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:09:43.431 11:40:13 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@840 -- # local max_retries=100 00:09:43.431 11:40:13 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 
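Above, host_management.sh writes a batch of RPCs to rpcs.txt and replays it through rpc_cmd against the freshly started target; the file contents are not echoed in the trace. A plausible reconstruction, assuming the standard SPDK RPC names and using only values visible elsewhere in this run (MALLOC_BDEV_SIZE=64, MALLOC_BLOCK_SIZE=512, the Malloc0 bdev, the cnode0/host0 NQNs later removed with nvmf_subsystem_remove_host, the 10.0.0.3:4420 listener):

# Sketch only -- rpcs.txt is not printed in the trace; the RPC names are the
# standard SPDK ones and the parameters come from elsewhere in this run.
rpc_cmd <<'EOF'
bdev_malloc_create 64 512 -b Malloc0
nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -s SPDKISFASTANDAWESOME
nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 Malloc0
nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.3 -s 4420
nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host0
EOF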
00:09:43.431 11:40:13 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@844 -- # xtrace_disable 00:09:43.431 11:40:13 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:09:43.431 11:40:13 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@72 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock --json /dev/fd/63 -q 64 -o 65536 -w verify -t 10 00:09:43.431 11:40:13 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@72 -- # gen_nvmf_target_json 0 00:09:43.431 11:40:13 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@560 -- # config=() 00:09:43.431 11:40:13 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@560 -- # local subsystem config 00:09:43.431 11:40:13 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:09:43.431 11:40:13 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:09:43.431 { 00:09:43.431 "params": { 00:09:43.431 "name": "Nvme$subsystem", 00:09:43.431 "trtype": "$TEST_TRANSPORT", 00:09:43.431 "traddr": "$NVMF_FIRST_TARGET_IP", 00:09:43.431 "adrfam": "ipv4", 00:09:43.431 "trsvcid": "$NVMF_PORT", 00:09:43.431 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:09:43.431 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:09:43.431 "hdgst": ${hdgst:-false}, 00:09:43.431 "ddgst": ${ddgst:-false} 00:09:43.431 }, 00:09:43.431 "method": "bdev_nvme_attach_controller" 00:09:43.431 } 00:09:43.431 EOF 00:09:43.431 )") 00:09:43.431 11:40:13 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@582 -- # cat 00:09:43.431 11:40:13 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@584 -- # jq . 00:09:43.431 11:40:13 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@585 -- # IFS=, 00:09:43.431 11:40:13 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:09:43.431 "params": { 00:09:43.431 "name": "Nvme0", 00:09:43.431 "trtype": "tcp", 00:09:43.431 "traddr": "10.0.0.3", 00:09:43.431 "adrfam": "ipv4", 00:09:43.431 "trsvcid": "4420", 00:09:43.431 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:09:43.431 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:09:43.431 "hdgst": false, 00:09:43.431 "ddgst": false 00:09:43.431 }, 00:09:43.431 "method": "bdev_nvme_attach_controller" 00:09:43.431 }' 00:09:43.431 [2024-11-28 11:40:13.457545] Starting SPDK v25.01-pre git sha1 35cd3e84d / DPDK 24.11.0-rc4 initialization... 00:09:43.431 [2024-11-28 11:40:13.458274] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid76546 ] 00:09:43.690 [2024-11-28 11:40:13.586151] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.11.0-rc4 is used. There is no support for it in SPDK. Enabled only for validation. 00:09:43.690 [2024-11-28 11:40:13.618905] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:43.690 [2024-11-28 11:40:13.665967] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:09:43.690 [2024-11-28 11:40:13.734115] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:09:43.950 Running I/O for 10 seconds... 
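gen_nvmf_target_json above emits a single bdev_nvme_attach_controller entry that reaches bdevperf through --json /dev/fd/63; the flags request 64 outstanding I/Os (-q) of 64 KiB each (-o 65536) in a 10-second (-t) verify workload (-w). A standalone equivalent, assuming the usual "subsystems"/"bdev" wrapper around the params object printed above (the file path is illustrative):

# Sketch: same workload with the JSON written to a file instead of /dev/fd/63.
# The outer "subsystems" wrapper is assumed; the inner object is the one
# printed by gen_nvmf_target_json in the trace above.
cat > /tmp/bdevperf_nvme0.json <<'EOF'
{
  "subsystems": [
    {
      "subsystem": "bdev",
      "config": [
        {
          "method": "bdev_nvme_attach_controller",
          "params": {
            "name": "Nvme0",
            "trtype": "tcp",
            "traddr": "10.0.0.3",
            "adrfam": "ipv4",
            "trsvcid": "4420",
            "subnqn": "nqn.2016-06.io.spdk:cnode0",
            "hostnqn": "nqn.2016-06.io.spdk:host0",
            "hdgst": false,
            "ddgst": false
          }
        }
      ]
    }
  ]
}
EOF
build/examples/bdevperf -r /var/tmp/bdevperf.sock --json /tmp/bdevperf_nvme0.json \
    -q 64 -o 65536 -w verify -t 10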
00:09:43.950 11:40:13 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:09:43.950 11:40:13 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@868 -- # return 0 00:09:43.950 11:40:13 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@75 -- # rpc_cmd -s /var/tmp/bdevperf.sock framework_wait_init 00:09:43.950 11:40:13 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:43.950 11:40:13 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:09:43.950 11:40:13 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:43.950 11:40:13 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@78 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; kill -9 $perfpid || true; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:09:43.950 11:40:13 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@80 -- # waitforio /var/tmp/bdevperf.sock Nvme0n1 00:09:43.950 11:40:13 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@45 -- # '[' -z /var/tmp/bdevperf.sock ']' 00:09:43.950 11:40:13 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@49 -- # '[' -z Nvme0n1 ']' 00:09:43.950 11:40:13 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@52 -- # local ret=1 00:09:43.950 11:40:13 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@53 -- # local i 00:09:43.950 11:40:13 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@54 -- # (( i = 10 )) 00:09:43.950 11:40:13 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@54 -- # (( i != 0 )) 00:09:43.950 11:40:13 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@55 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme0n1 00:09:43.950 11:40:13 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@55 -- # jq -r '.bdevs[0].num_read_ops' 00:09:43.950 11:40:13 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:43.950 11:40:13 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:09:43.950 11:40:13 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:43.950 11:40:13 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@55 -- # read_io_count=67 00:09:43.950 11:40:13 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@58 -- # '[' 67 -ge 100 ']' 00:09:43.951 11:40:13 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@62 -- # sleep 0.25 00:09:44.211 11:40:14 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@54 -- # (( i-- )) 00:09:44.211 11:40:14 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@54 -- # (( i != 0 )) 00:09:44.211 11:40:14 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@55 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme0n1 00:09:44.211 11:40:14 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:44.211 11:40:14 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:09:44.211 11:40:14 
nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@55 -- # jq -r '.bdevs[0].num_read_ops' 00:09:44.211 11:40:14 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:44.211 11:40:14 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@55 -- # read_io_count=579 00:09:44.211 11:40:14 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@58 -- # '[' 579 -ge 100 ']' 00:09:44.211 11:40:14 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@59 -- # ret=0 00:09:44.211 11:40:14 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@60 -- # break 00:09:44.211 11:40:14 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@64 -- # return 0 00:09:44.211 11:40:14 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@84 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host0 00:09:44.211 11:40:14 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:44.211 11:40:14 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:09:44.211 [2024-11-28 11:40:14.295907] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:81920 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:44.212 [2024-11-28 11:40:14.296120] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:09:44.212 [2024-11-28 11:40:14.296288] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:82048 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:44.212 [2024-11-28 11:40:14.296455] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:09:44.212 [2024-11-28 11:40:14.296601] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:82176 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:44.212 [2024-11-28 11:40:14.296617] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:09:44.212 [2024-11-28 11:40:14.296629] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:82304 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:44.212 [2024-11-28 11:40:14.296638] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:09:44.212 [2024-11-28 11:40:14.296649] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:82432 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:44.212 [2024-11-28 11:40:14.296659] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:09:44.212 [2024-11-28 11:40:14.296671] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:82560 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:44.212 [2024-11-28 11:40:14.296680] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:09:44.212 [2024-11-28 11:40:14.296692] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:82688 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:09:44.212 [2024-11-28 11:40:14.296709] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:09:44.212 [2024-11-28 11:40:14.296722] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:82816 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:44.212 [2024-11-28 11:40:14.296732] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:09:44.212 [2024-11-28 11:40:14.296743] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:82944 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:44.212 [2024-11-28 11:40:14.296753] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:09:44.212 [2024-11-28 11:40:14.296764] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:83072 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:44.212 [2024-11-28 11:40:14.296774] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:09:44.212 [2024-11-28 11:40:14.296786] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:83200 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:44.212 [2024-11-28 11:40:14.296795] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:09:44.212 [2024-11-28 11:40:14.296807] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:83328 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:44.212 [2024-11-28 11:40:14.296816] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:09:44.212 [2024-11-28 11:40:14.296828] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:83456 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:44.212 [2024-11-28 11:40:14.296838] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:09:44.212 [2024-11-28 11:40:14.296849] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:83584 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:44.212 [2024-11-28 11:40:14.296858] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:09:44.212 [2024-11-28 11:40:14.296870] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:83712 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:44.212 [2024-11-28 11:40:14.296880] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:09:44.212 [2024-11-28 11:40:14.296892] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:83840 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:44.212 [2024-11-28 11:40:14.296901] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:09:44.212 [2024-11-28 11:40:14.296912] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:83968 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:44.212 
[2024-11-28 11:40:14.296922] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:09:44.212 [2024-11-28 11:40:14.296933] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:84096 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:44.212 [2024-11-28 11:40:14.296942] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:09:44.212 [2024-11-28 11:40:14.296954] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:84224 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:44.212 [2024-11-28 11:40:14.296963] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:09:44.212 [2024-11-28 11:40:14.296974] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:84352 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:44.212 [2024-11-28 11:40:14.296984] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:09:44.212 [2024-11-28 11:40:14.296995] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:84480 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:44.212 [2024-11-28 11:40:14.297005] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:09:44.212 [2024-11-28 11:40:14.297016] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:84608 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:44.212 [2024-11-28 11:40:14.297025] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:09:44.212 [2024-11-28 11:40:14.297037] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:84736 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:44.212 [2024-11-28 11:40:14.297051] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:09:44.212 [2024-11-28 11:40:14.297063] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:84864 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:44.212 [2024-11-28 11:40:14.297072] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:09:44.212 [2024-11-28 11:40:14.297083] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:84992 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:44.212 [2024-11-28 11:40:14.297093] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:09:44.212 [2024-11-28 11:40:14.297104] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:85120 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:44.212 [2024-11-28 11:40:14.297124] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:09:44.212 [2024-11-28 11:40:14.297135] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:85248 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:44.212 [2024-11-28 
11:40:14.297145] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:09:44.212 [2024-11-28 11:40:14.297162] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:85376 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:44.212 [2024-11-28 11:40:14.297172] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:09:44.212 [2024-11-28 11:40:14.297183] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:85504 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:44.212 [2024-11-28 11:40:14.297193] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:09:44.212 [2024-11-28 11:40:14.297205] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:85632 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:44.212 [2024-11-28 11:40:14.297214] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:09:44.212 [2024-11-28 11:40:14.297225] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:85760 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:44.212 [2024-11-28 11:40:14.297235] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:09:44.212 [2024-11-28 11:40:14.297246] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:85888 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:44.212 [2024-11-28 11:40:14.297256] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:09:44.212 [2024-11-28 11:40:14.297267] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:86016 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:44.212 [2024-11-28 11:40:14.297276] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:09:44.212 [2024-11-28 11:40:14.297288] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:86144 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:44.212 [2024-11-28 11:40:14.297311] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:09:44.212 [2024-11-28 11:40:14.297325] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:86272 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:44.212 [2024-11-28 11:40:14.297335] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:09:44.212 [2024-11-28 11:40:14.297346] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:86400 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:44.212 [2024-11-28 11:40:14.297355] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:09:44.212 [2024-11-28 11:40:14.297367] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:86528 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:44.212 [2024-11-28 
11:40:14.297376] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:09:44.212 [2024-11-28 11:40:14.297387] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:86656 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:44.212 [2024-11-28 11:40:14.297397] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:09:44.212 [2024-11-28 11:40:14.297409] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:86784 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:44.212 [2024-11-28 11:40:14.297424] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:09:44.212 [2024-11-28 11:40:14.297436] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:86912 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:44.212 [2024-11-28 11:40:14.297449] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:09:44.213 [2024-11-28 11:40:14.297461] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:87040 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:44.213 [2024-11-28 11:40:14.297470] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:09:44.213 [2024-11-28 11:40:14.297481] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:87168 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:44.213 [2024-11-28 11:40:14.297491] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:09:44.213 [2024-11-28 11:40:14.297502] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:87296 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:44.213 [2024-11-28 11:40:14.297512] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:09:44.213 [2024-11-28 11:40:14.297523] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:87424 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:44.213 [2024-11-28 11:40:14.297533] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:09:44.213 [2024-11-28 11:40:14.297544] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:87552 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:44.213 [2024-11-28 11:40:14.297554] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:09:44.213 [2024-11-28 11:40:14.297565] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:87680 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:44.213 [2024-11-28 11:40:14.297575] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:09:44.213 [2024-11-28 11:40:14.297586] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:87808 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:44.213 [2024-11-28 
11:40:14.297596] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:09:44.213 [2024-11-28 11:40:14.297607] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:87936 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:44.213 [2024-11-28 11:40:14.297616] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:09:44.213 [2024-11-28 11:40:14.297628] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:88064 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:44.213 [2024-11-28 11:40:14.297637] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:09:44.213 [2024-11-28 11:40:14.297648] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:88192 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:44.213 [2024-11-28 11:40:14.297658] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:09:44.213 [2024-11-28 11:40:14.297669] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:88320 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:44.213 [2024-11-28 11:40:14.297678] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:09:44.213 [2024-11-28 11:40:14.297689] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:88448 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:44.213 [2024-11-28 11:40:14.297700] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:09:44.213 [2024-11-28 11:40:14.297711] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:88576 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:44.213 [2024-11-28 11:40:14.297721] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:09:44.213 [2024-11-28 11:40:14.297732] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:88704 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:44.213 [2024-11-28 11:40:14.297741] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:09:44.213 [2024-11-28 11:40:14.297754] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:88832 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:44.213 [2024-11-28 11:40:14.297768] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:09:44.213 [2024-11-28 11:40:14.297780] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:88960 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:44.213 [2024-11-28 11:40:14.297789] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:09:44.213 [2024-11-28 11:40:14.297801] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:89088 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:44.213 [2024-11-28 
11:40:14.297810] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:09:44.213 [2024-11-28 11:40:14.297821] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:89216 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:44.213 [2024-11-28 11:40:14.297831] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:09:44.213 [2024-11-28 11:40:14.297842] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:89344 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:44.213 [2024-11-28 11:40:14.297852] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:09:44.213 [2024-11-28 11:40:14.297863] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:89472 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:44.213 [2024-11-28 11:40:14.297873] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:09:44.213 [2024-11-28 11:40:14.297884] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:89600 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:44.213 [2024-11-28 11:40:14.297893] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:09:44.213 [2024-11-28 11:40:14.297905] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:89728 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:44.213 [2024-11-28 11:40:14.297914] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:09:44.213 [2024-11-28 11:40:14.297926] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:89856 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:44.213 [2024-11-28 11:40:14.297935] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:09:44.213 [2024-11-28 11:40:14.297946] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:89984 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:44.213 [2024-11-28 11:40:14.297956] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:09:44.213 [2024-11-28 11:40:14.297967] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1282e30 is same with the state(6) to be set 00:09:44.213 [2024-11-28 11:40:14.298179] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:09:44.213 [2024-11-28 11:40:14.298199] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:09:44.213 [2024-11-28 11:40:14.298211] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:09:44.213 [2024-11-28 11:40:14.298220] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:09:44.213 [2024-11-28 11:40:14.298230] nvme_qpair.c: 
223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:09:44.213 [2024-11-28 11:40:14.298239] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:09:44.213 [2024-11-28 11:40:14.298249] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:09:44.213 [2024-11-28 11:40:14.298259] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:09:44.213 [2024-11-28 11:40:14.298268] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x106bbb0 is same with the state(6) to be set 00:09:44.213 [2024-11-28 11:40:14.299385] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 1] resetting controller 00:09:44.213 11:40:14 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:44.213 11:40:14 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@85 -- # rpc_cmd nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host0 00:09:44.213 11:40:14 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:44.213 11:40:14 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:09:44.213 task offset: 81920 on job bdev=Nvme0n1 fails 00:09:44.213 00:09:44.213 Latency(us) 00:09:44.213 [2024-11-28T11:40:14.339Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:09:44.213 Job: Nvme0n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:09:44.213 Job: Nvme0n1 ended in about 0.45 seconds with error 00:09:44.213 Verification LBA range: start 0x0 length 0x400 00:09:44.213 Nvme0n1 : 0.45 1436.22 89.76 143.62 0.00 39007.37 2844.86 40036.54 00:09:44.213 [2024-11-28T11:40:14.339Z] =================================================================================================================== 00:09:44.213 [2024-11-28T11:40:14.339Z] Total : 1436.22 89.76 143.62 0.00 39007.37 2844.86 40036.54 00:09:44.213 [2024-11-28 11:40:14.301390] app.c:1064:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:09:44.213 [2024-11-28 11:40:14.301414] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x106bbb0 (9): Bad file descriptor 00:09:44.213 [2024-11-28 11:40:14.308229] ctrlr.c: 825:nvmf_qpair_access_allowed: *ERROR*: Subsystem 'nqn.2016-06.io.spdk:cnode0' does not allow host 'nqn.2016-06.io.spdk:host0' 00:09:44.213 [2024-11-28 11:40:14.308360] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:0 cid:3 SGL DATA BLOCK OFFSET 0x0 len:0x400 00:09:44.213 [2024-11-28 11:40:14.308386] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND SPECIFIC (01/84) qid:0 cid:3 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:09:44.213 [2024-11-28 11:40:14.308405] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.3 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode0 00:09:44.213 [2024-11-28 11:40:14.308416] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 132 00:09:44.213 [2024-11-28 11:40:14.308425] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 
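The CONNECT failures in this stretch of the log follow from the test step itself: host_management.sh@84 removed nqn.2016-06.io.spdk:host0 from the allow list of nqn.2016-06.io.spdk:cnode0 while bdevperf still had I/O in flight, so the target tore down that host's submission queues (the long run of ABORTED - SQ DELETION completions above) and rejects any reconnect attempt that lands before the re-add at @85 takes effect ("does not allow host"). A minimal sketch of the same allow-list toggle, assuming the repo's scripts/rpc.py and the default RPC socket:

  # Drop the host from the subsystem's allow list; its queues are deleted and
  # in-flight I/O from that host completes as ABORTED - SQ DELETION.
  scripts/rpc.py nvmf_subsystem_remove_host nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host0

  # Re-admit the host so its next reconnect attempt is accepted.
  scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host0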
00:09:44.213 [2024-11-28 11:40:14.308439] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x106bbb0 00:09:44.213 [2024-11-28 11:40:14.308477] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x106bbb0 (9): Bad file descriptor 00:09:44.213 [2024-11-28 11:40:14.308496] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Ctrlr is in error state 00:09:44.214 [2024-11-28 11:40:14.308505] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] controller reinitialization failed 00:09:44.214 [2024-11-28 11:40:14.308516] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] in failed state. 00:09:44.214 [2024-11-28 11:40:14.308534] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Resetting controller failed. 00:09:44.214 11:40:14 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:44.214 11:40:14 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@87 -- # sleep 1 00:09:45.597 11:40:15 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@91 -- # kill -9 76546 00:09:45.597 /home/vagrant/spdk_repo/spdk/test/nvmf/target/host_management.sh: line 91: kill: (76546) - No such process 00:09:45.597 11:40:15 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@91 -- # true 00:09:45.597 11:40:15 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@97 -- # rm -f /var/tmp/spdk_cpu_lock_001 /var/tmp/spdk_cpu_lock_002 /var/tmp/spdk_cpu_lock_003 /var/tmp/spdk_cpu_lock_004 00:09:45.597 11:40:15 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@100 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /dev/fd/62 -q 64 -o 65536 -w verify -t 1 00:09:45.597 11:40:15 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@100 -- # gen_nvmf_target_json 0 00:09:45.597 11:40:15 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@560 -- # config=() 00:09:45.597 11:40:15 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@560 -- # local subsystem config 00:09:45.597 11:40:15 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:09:45.597 11:40:15 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:09:45.597 { 00:09:45.597 "params": { 00:09:45.597 "name": "Nvme$subsystem", 00:09:45.597 "trtype": "$TEST_TRANSPORT", 00:09:45.597 "traddr": "$NVMF_FIRST_TARGET_IP", 00:09:45.597 "adrfam": "ipv4", 00:09:45.597 "trsvcid": "$NVMF_PORT", 00:09:45.597 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:09:45.597 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:09:45.597 "hdgst": ${hdgst:-false}, 00:09:45.597 "ddgst": ${ddgst:-false} 00:09:45.597 }, 00:09:45.597 "method": "bdev_nvme_attach_controller" 00:09:45.597 } 00:09:45.597 EOF 00:09:45.597 )") 00:09:45.597 11:40:15 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@582 -- # cat 00:09:45.597 11:40:15 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@584 -- # jq . 
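The gen_nvmf_target_json helper traced here (and continuing just below) assembles the bdevperf attach configuration as a JSON string and hands it to bdevperf on /dev/fd/62, so no config file is written to disk. A rough stand-alone sketch of that pattern, reusing the subsystem-0 values printed below; the helper's full output wraps this fragment into a larger document, which is elided here:

  # The substituted attach fragment for subsystem 0 (as printed in the trace below).
  config='{
    "params": {
      "name": "Nvme0",
      "trtype": "tcp",
      "traddr": "10.0.0.3",
      "adrfam": "ipv4",
      "trsvcid": "4420",
      "subnqn": "nqn.2016-06.io.spdk:cnode0",
      "hostnqn": "nqn.2016-06.io.spdk:host0",
      "hdgst": false,
      "ddgst": false
    },
    "method": "bdev_nvme_attach_controller"
  }'
  # jq validates and pretty-prints the JSON; process substitution delivers it
  # to bdevperf as an anonymous fd (seen inside bdevperf as /dev/fd/62).
  # Payload shown is only the attach fragment and is illustrative.
  ./build/examples/bdevperf --json <(echo "$config" | jq .) -q 64 -o 65536 -w verify -t 1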
00:09:45.597 11:40:15 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@585 -- # IFS=, 00:09:45.597 11:40:15 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:09:45.597 "params": { 00:09:45.597 "name": "Nvme0", 00:09:45.597 "trtype": "tcp", 00:09:45.597 "traddr": "10.0.0.3", 00:09:45.597 "adrfam": "ipv4", 00:09:45.597 "trsvcid": "4420", 00:09:45.597 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:09:45.597 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:09:45.597 "hdgst": false, 00:09:45.597 "ddgst": false 00:09:45.597 }, 00:09:45.597 "method": "bdev_nvme_attach_controller" 00:09:45.597 }' 00:09:45.597 [2024-11-28 11:40:15.386087] Starting SPDK v25.01-pre git sha1 35cd3e84d / DPDK 24.11.0-rc4 initialization... 00:09:45.597 [2024-11-28 11:40:15.386212] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid76586 ] 00:09:45.597 [2024-11-28 11:40:15.517485] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.11.0-rc4 is used. There is no support for it in SPDK. Enabled only for validation. 00:09:45.597 [2024-11-28 11:40:15.543067] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:45.597 [2024-11-28 11:40:15.590860] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:09:45.597 [2024-11-28 11:40:15.654109] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:09:45.856 Running I/O for 1 seconds... 00:09:46.791 1344.00 IOPS, 84.00 MiB/s 00:09:46.791 Latency(us) 00:09:46.791 [2024-11-28T11:40:16.917Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:09:46.791 Job: Nvme0n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:09:46.791 Verification LBA range: start 0x0 length 0x400 00:09:46.791 Nvme0n1 : 1.01 1389.11 86.82 0.00 0.00 45166.81 4736.47 38844.97 00:09:46.791 [2024-11-28T11:40:16.917Z] =================================================================================================================== 00:09:46.791 [2024-11-28T11:40:16.917Z] Total : 1389.11 86.82 0.00 0.00 45166.81 4736.47 38844.97 00:09:47.049 11:40:16 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@102 -- # stoptarget 00:09:47.049 11:40:16 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@36 -- # rm -f ./local-job0-0-verify.state 00:09:47.049 11:40:16 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@37 -- # rm -rf /home/vagrant/spdk_repo/spdk/test/nvmf/target/bdevperf.conf 00:09:47.050 11:40:16 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@38 -- # rm -rf /home/vagrant/spdk_repo/spdk/test/nvmf/target/rpcs.txt 00:09:47.050 11:40:16 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@40 -- # nvmftestfini 00:09:47.050 11:40:16 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@516 -- # nvmfcleanup 00:09:47.050 11:40:16 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@121 -- # sync 00:09:47.050 11:40:17 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:09:47.050 11:40:17 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@124 -- # set +e 00:09:47.050 11:40:17 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@125 -- # for i in 
{1..20} 00:09:47.050 11:40:17 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:09:47.050 rmmod nvme_tcp 00:09:47.050 rmmod nvme_fabrics 00:09:47.050 rmmod nvme_keyring 00:09:47.050 11:40:17 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:09:47.050 11:40:17 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@128 -- # set -e 00:09:47.050 11:40:17 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@129 -- # return 0 00:09:47.050 11:40:17 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@517 -- # '[' -n 76486 ']' 00:09:47.050 11:40:17 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@518 -- # killprocess 76486 00:09:47.050 11:40:17 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@954 -- # '[' -z 76486 ']' 00:09:47.050 11:40:17 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@958 -- # kill -0 76486 00:09:47.050 11:40:17 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@959 -- # uname 00:09:47.050 11:40:17 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:09:47.050 11:40:17 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 76486 00:09:47.050 killing process with pid 76486 00:09:47.050 11:40:17 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:09:47.050 11:40:17 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:09:47.050 11:40:17 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@972 -- # echo 'killing process with pid 76486' 00:09:47.050 11:40:17 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@973 -- # kill 76486 00:09:47.050 11:40:17 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@978 -- # wait 76486 00:09:47.307 [2024-11-28 11:40:17.376653] app.c: 721:unclaim_cpu_cores: *ERROR*: Failed to unlink lock fd for core 1, errno: 2 00:09:47.307 11:40:17 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:09:47.307 11:40:17 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:09:47.307 11:40:17 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:09:47.307 11:40:17 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@297 -- # iptr 00:09:47.307 11:40:17 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@791 -- # iptables-save 00:09:47.307 11:40:17 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:09:47.307 11:40:17 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@791 -- # iptables-restore 00:09:47.307 11:40:17 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@298 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:09:47.307 11:40:17 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@299 -- # nvmf_veth_fini 00:09:47.307 11:40:17 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@233 -- # ip link set nvmf_init_br nomaster 00:09:47.565 11:40:17 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@234 -- # ip link set nvmf_init_br2 nomaster 00:09:47.565 11:40:17 
nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@235 -- # ip link set nvmf_tgt_br nomaster 00:09:47.565 11:40:17 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@236 -- # ip link set nvmf_tgt_br2 nomaster 00:09:47.565 11:40:17 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@237 -- # ip link set nvmf_init_br down 00:09:47.565 11:40:17 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@238 -- # ip link set nvmf_init_br2 down 00:09:47.565 11:40:17 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@239 -- # ip link set nvmf_tgt_br down 00:09:47.565 11:40:17 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@240 -- # ip link set nvmf_tgt_br2 down 00:09:47.565 11:40:17 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@241 -- # ip link delete nvmf_br type bridge 00:09:47.565 11:40:17 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@242 -- # ip link delete nvmf_init_if 00:09:47.565 11:40:17 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@243 -- # ip link delete nvmf_init_if2 00:09:47.565 11:40:17 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@244 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:09:47.565 11:40:17 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@245 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:09:47.565 11:40:17 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@246 -- # remove_spdk_ns 00:09:47.565 11:40:17 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:09:47.565 11:40:17 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:09:47.565 11:40:17 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:09:47.565 11:40:17 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@300 -- # return 0 00:09:47.565 11:40:17 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@109 -- # trap - SIGINT SIGTERM EXIT 00:09:47.565 00:09:47.565 real 0m6.270s 00:09:47.565 user 0m22.490s 00:09:47.565 sys 0m1.608s 00:09:47.565 11:40:17 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1130 -- # xtrace_disable 00:09:47.565 11:40:17 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:09:47.565 ************************************ 00:09:47.565 END TEST nvmf_host_management 00:09:47.565 ************************************ 00:09:47.822 11:40:17 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@27 -- # run_test nvmf_lvol /home/vagrant/spdk_repo/spdk/test/nvmf/target/nvmf_lvol.sh --transport=tcp 00:09:47.822 11:40:17 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:09:47.822 11:40:17 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1111 -- # xtrace_disable 00:09:47.822 11:40:17 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:09:47.822 ************************************ 00:09:47.822 START TEST nvmf_lvol 00:09:47.822 ************************************ 00:09:47.822 11:40:17 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/nvmf_lvol.sh --transport=tcp 00:09:47.822 * Looking for test storage... 
00:09:47.822 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:09:47.822 11:40:17 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:09:47.822 11:40:17 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1693 -- # lcov --version 00:09:47.822 11:40:17 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:09:47.822 11:40:17 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:09:47.822 11:40:17 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:09:47.822 11:40:17 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@333 -- # local ver1 ver1_l 00:09:47.822 11:40:17 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@334 -- # local ver2 ver2_l 00:09:47.822 11:40:17 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@336 -- # IFS=.-: 00:09:47.822 11:40:17 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@336 -- # read -ra ver1 00:09:47.822 11:40:17 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@337 -- # IFS=.-: 00:09:47.822 11:40:17 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@337 -- # read -ra ver2 00:09:47.822 11:40:17 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@338 -- # local 'op=<' 00:09:47.822 11:40:17 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@340 -- # ver1_l=2 00:09:47.822 11:40:17 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@341 -- # ver2_l=1 00:09:47.822 11:40:17 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:09:47.822 11:40:17 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@344 -- # case "$op" in 00:09:47.822 11:40:17 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@345 -- # : 1 00:09:47.822 11:40:17 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@364 -- # (( v = 0 )) 00:09:47.822 11:40:17 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:09:47.822 11:40:17 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@365 -- # decimal 1 00:09:47.822 11:40:17 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@353 -- # local d=1 00:09:47.822 11:40:17 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:09:47.822 11:40:17 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@355 -- # echo 1 00:09:47.822 11:40:17 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@365 -- # ver1[v]=1 00:09:47.822 11:40:17 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@366 -- # decimal 2 00:09:47.822 11:40:17 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@353 -- # local d=2 00:09:47.822 11:40:17 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:09:47.822 11:40:17 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@355 -- # echo 2 00:09:47.822 11:40:17 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@366 -- # ver2[v]=2 00:09:47.822 11:40:17 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:09:47.822 11:40:17 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:09:47.822 11:40:17 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@368 -- # return 0 00:09:47.822 11:40:17 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:09:47.822 11:40:17 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:09:47.822 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:47.822 --rc genhtml_branch_coverage=1 00:09:47.822 --rc genhtml_function_coverage=1 00:09:47.822 --rc genhtml_legend=1 00:09:47.822 --rc geninfo_all_blocks=1 00:09:47.822 --rc geninfo_unexecuted_blocks=1 00:09:47.822 00:09:47.822 ' 00:09:47.822 11:40:17 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:09:47.822 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:47.822 --rc genhtml_branch_coverage=1 00:09:47.822 --rc genhtml_function_coverage=1 00:09:47.822 --rc genhtml_legend=1 00:09:47.822 --rc geninfo_all_blocks=1 00:09:47.822 --rc geninfo_unexecuted_blocks=1 00:09:47.822 00:09:47.822 ' 00:09:47.822 11:40:17 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:09:47.822 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:47.822 --rc genhtml_branch_coverage=1 00:09:47.822 --rc genhtml_function_coverage=1 00:09:47.822 --rc genhtml_legend=1 00:09:47.822 --rc geninfo_all_blocks=1 00:09:47.822 --rc geninfo_unexecuted_blocks=1 00:09:47.822 00:09:47.822 ' 00:09:47.822 11:40:17 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:09:47.822 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:47.822 --rc genhtml_branch_coverage=1 00:09:47.822 --rc genhtml_function_coverage=1 00:09:47.822 --rc genhtml_legend=1 00:09:47.822 --rc geninfo_all_blocks=1 00:09:47.822 --rc geninfo_unexecuted_blocks=1 00:09:47.822 00:09:47.822 ' 00:09:47.822 11:40:17 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:09:47.822 11:40:17 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@7 -- # uname -s 00:09:47.822 11:40:17 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:09:47.822 11:40:17 
nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:09:47.822 11:40:17 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:09:47.822 11:40:17 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:09:47.822 11:40:17 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:09:47.822 11:40:17 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:09:47.822 11:40:17 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:09:47.822 11:40:17 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:09:47.823 11:40:17 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:09:47.823 11:40:17 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:09:47.823 11:40:17 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:f820f793-c892-4aa4-a8a4-5ed3fda41d6c 00:09:47.823 11:40:17 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@18 -- # NVME_HOSTID=f820f793-c892-4aa4-a8a4-5ed3fda41d6c 00:09:47.823 11:40:17 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:09:47.823 11:40:17 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:09:47.823 11:40:17 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:09:47.823 11:40:17 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:09:47.823 11:40:17 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:09:47.823 11:40:17 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@15 -- # shopt -s extglob 00:09:47.823 11:40:17 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:09:47.823 11:40:17 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:09:47.823 11:40:17 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:09:47.823 11:40:17 nvmf_tcp.nvmf_target_core.nvmf_lvol -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:47.823 11:40:17 nvmf_tcp.nvmf_target_core.nvmf_lvol -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:47.823 11:40:17 nvmf_tcp.nvmf_target_core.nvmf_lvol -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:47.823 11:40:17 nvmf_tcp.nvmf_target_core.nvmf_lvol -- paths/export.sh@5 -- # export PATH 00:09:47.823 11:40:17 nvmf_tcp.nvmf_target_core.nvmf_lvol -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:47.823 11:40:17 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@51 -- # : 0 00:09:47.823 11:40:17 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:09:47.823 11:40:17 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:09:47.823 11:40:17 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:09:47.823 11:40:17 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:09:47.823 11:40:17 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:09:47.823 11:40:17 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:09:47.823 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:09:47.823 11:40:17 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:09:47.823 11:40:17 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:09:47.823 11:40:17 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@55 -- # have_pci_nics=0 00:09:47.823 11:40:17 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@11 -- # MALLOC_BDEV_SIZE=64 00:09:47.823 11:40:17 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:09:47.823 11:40:17 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@13 -- # LVOL_BDEV_INIT_SIZE=20 00:09:47.823 
11:40:17 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@14 -- # LVOL_BDEV_FINAL_SIZE=30 00:09:47.823 11:40:17 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@16 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:09:47.823 11:40:17 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@18 -- # nvmftestinit 00:09:47.823 11:40:17 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:09:47.823 11:40:17 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:09:47.823 11:40:17 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@476 -- # prepare_net_devs 00:09:47.823 11:40:17 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@438 -- # local -g is_hw=no 00:09:47.823 11:40:17 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@440 -- # remove_spdk_ns 00:09:47.823 11:40:17 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:09:47.823 11:40:17 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:09:47.823 11:40:17 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:09:47.823 11:40:17 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@442 -- # [[ virt != virt ]] 00:09:47.823 11:40:17 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@444 -- # [[ no == yes ]] 00:09:47.823 11:40:17 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@451 -- # [[ virt == phy ]] 00:09:47.823 11:40:17 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@454 -- # [[ virt == phy-fallback ]] 00:09:47.823 11:40:17 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@459 -- # [[ tcp == tcp ]] 00:09:47.823 11:40:17 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@460 -- # nvmf_veth_init 00:09:47.823 11:40:17 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@145 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:09:47.823 11:40:17 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@146 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.2 00:09:47.823 11:40:17 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@147 -- # NVMF_FIRST_TARGET_IP=10.0.0.3 00:09:47.823 11:40:17 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@148 -- # NVMF_SECOND_TARGET_IP=10.0.0.4 00:09:47.823 11:40:17 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@149 -- # NVMF_INITIATOR_IP=10.0.0.1 00:09:47.823 11:40:17 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@150 -- # NVMF_BRIDGE=nvmf_br 00:09:47.823 11:40:17 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@151 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:09:47.823 11:40:17 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@152 -- # NVMF_INITIATOR_INTERFACE2=nvmf_init_if2 00:09:47.823 11:40:17 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@153 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:09:47.823 11:40:17 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@154 -- # NVMF_INITIATOR_BRIDGE2=nvmf_init_br2 00:09:47.823 11:40:17 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@155 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:09:47.823 11:40:17 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@156 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:09:47.823 11:40:17 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@157 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:09:47.823 11:40:17 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@158 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 
00:09:47.823 11:40:17 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@159 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:09:47.823 11:40:17 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@160 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:09:47.823 11:40:17 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@162 -- # ip link set nvmf_init_br nomaster 00:09:47.823 Cannot find device "nvmf_init_br" 00:09:47.823 11:40:17 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@162 -- # true 00:09:47.823 11:40:17 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@163 -- # ip link set nvmf_init_br2 nomaster 00:09:48.081 Cannot find device "nvmf_init_br2" 00:09:48.081 11:40:17 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@163 -- # true 00:09:48.081 11:40:17 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@164 -- # ip link set nvmf_tgt_br nomaster 00:09:48.081 Cannot find device "nvmf_tgt_br" 00:09:48.081 11:40:17 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@164 -- # true 00:09:48.081 11:40:17 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@165 -- # ip link set nvmf_tgt_br2 nomaster 00:09:48.081 Cannot find device "nvmf_tgt_br2" 00:09:48.081 11:40:17 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@165 -- # true 00:09:48.081 11:40:17 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@166 -- # ip link set nvmf_init_br down 00:09:48.081 Cannot find device "nvmf_init_br" 00:09:48.081 11:40:17 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@166 -- # true 00:09:48.081 11:40:17 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@167 -- # ip link set nvmf_init_br2 down 00:09:48.081 Cannot find device "nvmf_init_br2" 00:09:48.081 11:40:17 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@167 -- # true 00:09:48.081 11:40:17 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@168 -- # ip link set nvmf_tgt_br down 00:09:48.081 Cannot find device "nvmf_tgt_br" 00:09:48.081 11:40:18 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@168 -- # true 00:09:48.081 11:40:18 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@169 -- # ip link set nvmf_tgt_br2 down 00:09:48.081 Cannot find device "nvmf_tgt_br2" 00:09:48.081 11:40:18 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@169 -- # true 00:09:48.081 11:40:18 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@170 -- # ip link delete nvmf_br type bridge 00:09:48.081 Cannot find device "nvmf_br" 00:09:48.081 11:40:18 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@170 -- # true 00:09:48.081 11:40:18 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@171 -- # ip link delete nvmf_init_if 00:09:48.081 Cannot find device "nvmf_init_if" 00:09:48.082 11:40:18 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@171 -- # true 00:09:48.082 11:40:18 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@172 -- # ip link delete nvmf_init_if2 00:09:48.082 Cannot find device "nvmf_init_if2" 00:09:48.082 11:40:18 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@172 -- # true 00:09:48.082 11:40:18 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@173 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:09:48.082 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:09:48.082 11:40:18 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@173 -- # true 00:09:48.082 11:40:18 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@174 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:09:48.082 Cannot open network namespace "nvmf_tgt_ns_spdk": No 
such file or directory 00:09:48.082 11:40:18 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@174 -- # true 00:09:48.082 11:40:18 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@177 -- # ip netns add nvmf_tgt_ns_spdk 00:09:48.082 11:40:18 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@180 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:09:48.082 11:40:18 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@181 -- # ip link add nvmf_init_if2 type veth peer name nvmf_init_br2 00:09:48.082 11:40:18 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@182 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:09:48.082 11:40:18 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@183 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:09:48.082 11:40:18 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:09:48.082 11:40:18 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@187 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:09:48.082 11:40:18 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@190 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:09:48.082 11:40:18 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@191 -- # ip addr add 10.0.0.2/24 dev nvmf_init_if2 00:09:48.082 11:40:18 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@192 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if 00:09:48.082 11:40:18 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@193 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2 00:09:48.082 11:40:18 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@196 -- # ip link set nvmf_init_if up 00:09:48.082 11:40:18 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@197 -- # ip link set nvmf_init_if2 up 00:09:48.082 11:40:18 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@198 -- # ip link set nvmf_init_br up 00:09:48.082 11:40:18 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@199 -- # ip link set nvmf_init_br2 up 00:09:48.082 11:40:18 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@200 -- # ip link set nvmf_tgt_br up 00:09:48.082 11:40:18 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@201 -- # ip link set nvmf_tgt_br2 up 00:09:48.082 11:40:18 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@202 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:09:48.082 11:40:18 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@203 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:09:48.082 11:40:18 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@204 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:09:48.082 11:40:18 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@207 -- # ip link add nvmf_br type bridge 00:09:48.082 11:40:18 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@208 -- # ip link set nvmf_br up 00:09:48.082 11:40:18 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@211 -- # ip link set nvmf_init_br master nvmf_br 00:09:48.082 11:40:18 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@212 -- # ip link set nvmf_init_br2 master nvmf_br 00:09:48.082 11:40:18 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@213 -- # ip link set nvmf_tgt_br master nvmf_br 00:09:48.341 11:40:18 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@214 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:09:48.341 11:40:18 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@217 -- # ipts -I INPUT 
1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:09:48.341 11:40:18 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT' 00:09:48.341 11:40:18 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@218 -- # ipts -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT 00:09:48.341 11:40:18 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT' 00:09:48.341 11:40:18 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@219 -- # ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:09:48.341 11:40:18 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@790 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT' 00:09:48.341 11:40:18 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@222 -- # ping -c 1 10.0.0.3 00:09:48.341 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:09:48.341 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.061 ms 00:09:48.341 00:09:48.341 --- 10.0.0.3 ping statistics --- 00:09:48.341 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:09:48.341 rtt min/avg/max/mdev = 0.061/0.061/0.061/0.000 ms 00:09:48.341 11:40:18 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@223 -- # ping -c 1 10.0.0.4 00:09:48.341 PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data. 00:09:48.341 64 bytes from 10.0.0.4: icmp_seq=1 ttl=64 time=0.056 ms 00:09:48.341 00:09:48.341 --- 10.0.0.4 ping statistics --- 00:09:48.341 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:09:48.341 rtt min/avg/max/mdev = 0.056/0.056/0.056/0.000 ms 00:09:48.341 11:40:18 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@224 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:09:48.341 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:09:48.341 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.021 ms 00:09:48.341 00:09:48.341 --- 10.0.0.1 ping statistics --- 00:09:48.341 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:09:48.341 rtt min/avg/max/mdev = 0.021/0.021/0.021/0.000 ms 00:09:48.341 11:40:18 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@225 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.2 00:09:48.341 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:09:48.341 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.055 ms 00:09:48.341 00:09:48.341 --- 10.0.0.2 ping statistics --- 00:09:48.341 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:09:48.341 rtt min/avg/max/mdev = 0.055/0.055/0.055/0.000 ms 00:09:48.341 11:40:18 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@227 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:09:48.341 11:40:18 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@461 -- # return 0 00:09:48.341 11:40:18 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:09:48.341 11:40:18 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:09:48.341 11:40:18 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:09:48.341 11:40:18 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:09:48.341 11:40:18 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:09:48.341 11:40:18 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:09:48.341 11:40:18 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:09:48.341 11:40:18 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@19 -- # nvmfappstart -m 0x7 00:09:48.342 11:40:18 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:09:48.342 11:40:18 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@726 -- # xtrace_disable 00:09:48.342 11:40:18 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:09:48.342 11:40:18 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@509 -- # nvmfpid=76849 00:09:48.342 11:40:18 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@508 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x7 00:09:48.342 11:40:18 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@510 -- # waitforlisten 76849 00:09:48.342 11:40:18 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@835 -- # '[' -z 76849 ']' 00:09:48.342 11:40:18 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:09:48.342 11:40:18 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@840 -- # local max_retries=100 00:09:48.342 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:09:48.342 11:40:18 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:09:48.342 11:40:18 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@844 -- # xtrace_disable 00:09:48.342 11:40:18 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:09:48.342 [2024-11-28 11:40:18.361847] Starting SPDK v25.01-pre git sha1 35cd3e84d / DPDK 24.11.0-rc4 initialization... 00:09:48.342 [2024-11-28 11:40:18.361962] [ DPDK EAL parameters: nvmf -c 0x7 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:09:48.601 [2024-11-28 11:40:18.497696] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.11.0-rc4 is used. There is no support for it in SPDK. Enabled only for validation. 
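Editor's note: the prologue traced above is nvmf/common.sh building its veth/namespace/bridge topology (nvmf_veth_init) and then launching nvmf_tgt inside that namespace. A condensed sketch of the equivalent commands, using the interface and address names from the trace (the helper runs the same steps one by one, with error handling and the firewall/ping checks shown above):

    # Target side lives in its own network namespace
    ip netns add nvmf_tgt_ns_spdk

    # Two veth pairs for the initiator side, two for the target side
    ip link add nvmf_init_if  type veth peer name nvmf_init_br
    ip link add nvmf_init_if2 type veth peer name nvmf_init_br2
    ip link add nvmf_tgt_if   type veth peer name nvmf_tgt_br
    ip link add nvmf_tgt_if2  type veth peer name nvmf_tgt_br2

    # Move the target ends into the namespace and assign the 10.0.0.x/24 addresses
    ip link set nvmf_tgt_if  netns nvmf_tgt_ns_spdk
    ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk
    ip addr add 10.0.0.1/24 dev nvmf_init_if
    ip addr add 10.0.0.2/24 dev nvmf_init_if2
    ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if
    ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2

    # Bring everything up and enslave the host-side peers to one bridge
    for l in nvmf_init_if nvmf_init_if2 nvmf_init_br nvmf_init_br2 nvmf_tgt_br nvmf_tgt_br2; do
        ip link set "$l" up
    done
    ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
    ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up
    ip netns exec nvmf_tgt_ns_spdk ip link set lo up
    ip link add nvmf_br type bridge
    ip link set nvmf_br up
    for l in nvmf_init_br nvmf_init_br2 nvmf_tgt_br nvmf_tgt_br2; do
        ip link set "$l" master nvmf_br
    done

With the bridge in place, 10.0.0.1/2 (initiator side) and 10.0.0.3/4 (inside the namespace) share one L2 segment, which is exactly what the four ping checks in the trace verify.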
00:09:48.601 [2024-11-28 11:40:18.531078] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:09:48.601 [2024-11-28 11:40:18.591449] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:09:48.601 [2024-11-28 11:40:18.591835] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:09:48.601 [2024-11-28 11:40:18.592147] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:09:48.601 [2024-11-28 11:40:18.592493] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:09:48.601 [2024-11-28 11:40:18.592670] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:09:48.601 [2024-11-28 11:40:18.594320] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:09:48.601 [2024-11-28 11:40:18.594448] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:09:48.601 [2024-11-28 11:40:18.594457] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:09:48.601 [2024-11-28 11:40:18.659289] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:09:49.537 11:40:19 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:09:49.537 11:40:19 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@868 -- # return 0 00:09:49.537 11:40:19 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:09:49.537 11:40:19 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@732 -- # xtrace_disable 00:09:49.537 11:40:19 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:09:49.537 11:40:19 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:09:49.537 11:40:19 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@21 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:09:49.795 [2024-11-28 11:40:19.662501] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:09:49.795 11:40:19 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@24 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:09:50.054 11:40:20 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@24 -- # base_bdevs='Malloc0 ' 00:09:50.054 11:40:20 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@25 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:09:50.386 11:40:20 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@25 -- # base_bdevs+=Malloc1 00:09:50.386 11:40:20 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@26 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_raid_create -n raid0 -z 64 -r 0 -b 'Malloc0 Malloc1' 00:09:50.651 11:40:20 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@29 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create_lvstore raid0 lvs 00:09:50.910 11:40:20 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@29 -- # lvs=07259b85-3750-45ee-acfe-d3a9267c19a0 00:09:50.910 11:40:20 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@32 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create -u 07259b85-3750-45ee-acfe-d3a9267c19a0 lvol 20 00:09:51.478 11:40:21 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@32 -- # 
lvol=2580b2f6-c0eb-4e3d-82e2-dc02df5b24ef 00:09:51.478 11:40:21 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@35 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:09:51.737 11:40:21 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@36 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 2580b2f6-c0eb-4e3d-82e2-dc02df5b24ef 00:09:51.996 11:40:21 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@37 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.3 -s 4420 00:09:52.256 [2024-11-28 11:40:22.128438] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:09:52.256 11:40:22 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@38 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.3 -s 4420 00:09:52.516 11:40:22 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@42 -- # perf_pid=76931 00:09:52.516 11:40:22 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@44 -- # sleep 1 00:09:52.516 11:40:22 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@41 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.3 trsvcid:4420' -o 4096 -q 128 -s 512 -w randwrite -t 10 -c 0x18 00:09:53.452 11:40:23 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@47 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_snapshot 2580b2f6-c0eb-4e3d-82e2-dc02df5b24ef MY_SNAPSHOT 00:09:53.711 11:40:23 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@47 -- # snapshot=4cc460cb-c369-41fe-a4ba-5cc075b7bfcc 00:09:53.711 11:40:23 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@48 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_resize 2580b2f6-c0eb-4e3d-82e2-dc02df5b24ef 30 00:09:54.278 11:40:24 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@49 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_clone 4cc460cb-c369-41fe-a4ba-5cc075b7bfcc MY_CLONE 00:09:54.537 11:40:24 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@49 -- # clone=d1411872-6c60-4051-b115-eec17e01ecf6 00:09:54.537 11:40:24 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_inflate d1411872-6c60-4051-b115-eec17e01ecf6 00:09:54.796 11:40:24 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@53 -- # wait 76931 00:10:02.920 Initializing NVMe Controllers 00:10:02.920 Attached to NVMe over Fabrics controller at 10.0.0.3:4420: nqn.2016-06.io.spdk:cnode0 00:10:02.920 Controller IO queue size 128, less than required. 00:10:02.920 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:10:02.920 Associating TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 with lcore 3 00:10:02.920 Associating TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 with lcore 4 00:10:02.920 Initialization complete. Launching workers. 
00:10:02.920 ======================================================== 00:10:02.920 Latency(us) 00:10:02.920 Device Information : IOPS MiB/s Average min max 00:10:02.920 TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 from core 3: 9385.00 36.66 13647.46 2657.90 80082.59 00:10:02.920 TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 from core 4: 9169.00 35.82 13971.30 3352.86 57319.83 00:10:02.920 ======================================================== 00:10:02.920 Total : 18554.00 72.48 13807.49 2657.90 80082.59 00:10:02.920 00:10:02.920 11:40:32 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@56 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:10:03.179 11:40:33 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete 2580b2f6-c0eb-4e3d-82e2-dc02df5b24ef 00:10:03.439 11:40:33 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@58 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u 07259b85-3750-45ee-acfe-d3a9267c19a0 00:10:03.706 11:40:33 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@60 -- # rm -f 00:10:03.706 11:40:33 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@62 -- # trap - SIGINT SIGTERM EXIT 00:10:03.706 11:40:33 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@64 -- # nvmftestfini 00:10:03.706 11:40:33 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@516 -- # nvmfcleanup 00:10:03.706 11:40:33 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@121 -- # sync 00:10:03.706 11:40:33 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:10:03.706 11:40:33 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@124 -- # set +e 00:10:03.706 11:40:33 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@125 -- # for i in {1..20} 00:10:03.706 11:40:33 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:10:03.706 rmmod nvme_tcp 00:10:03.706 rmmod nvme_fabrics 00:10:03.706 rmmod nvme_keyring 00:10:03.992 11:40:33 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:10:03.992 11:40:33 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@128 -- # set -e 00:10:03.992 11:40:33 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@129 -- # return 0 00:10:03.992 11:40:33 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@517 -- # '[' -n 76849 ']' 00:10:03.992 11:40:33 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@518 -- # killprocess 76849 00:10:03.992 11:40:33 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@954 -- # '[' -z 76849 ']' 00:10:03.992 11:40:33 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@958 -- # kill -0 76849 00:10:03.992 11:40:33 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@959 -- # uname 00:10:03.992 11:40:33 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:10:03.992 11:40:33 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 76849 00:10:03.992 killing process with pid 76849 00:10:03.992 11:40:33 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:10:03.992 11:40:33 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:10:03.992 11:40:33 nvmf_tcp.nvmf_target_core.nvmf_lvol -- 
common/autotest_common.sh@972 -- # echo 'killing process with pid 76849' 00:10:03.992 11:40:33 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@973 -- # kill 76849 00:10:03.992 11:40:33 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@978 -- # wait 76849 00:10:04.251 11:40:34 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:10:04.251 11:40:34 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:10:04.251 11:40:34 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:10:04.251 11:40:34 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@297 -- # iptr 00:10:04.251 11:40:34 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:10:04.251 11:40:34 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@791 -- # iptables-save 00:10:04.251 11:40:34 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@791 -- # iptables-restore 00:10:04.251 11:40:34 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@298 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:10:04.251 11:40:34 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@299 -- # nvmf_veth_fini 00:10:04.251 11:40:34 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@233 -- # ip link set nvmf_init_br nomaster 00:10:04.251 11:40:34 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@234 -- # ip link set nvmf_init_br2 nomaster 00:10:04.252 11:40:34 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@235 -- # ip link set nvmf_tgt_br nomaster 00:10:04.252 11:40:34 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@236 -- # ip link set nvmf_tgt_br2 nomaster 00:10:04.252 11:40:34 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@237 -- # ip link set nvmf_init_br down 00:10:04.252 11:40:34 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@238 -- # ip link set nvmf_init_br2 down 00:10:04.252 11:40:34 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@239 -- # ip link set nvmf_tgt_br down 00:10:04.252 11:40:34 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@240 -- # ip link set nvmf_tgt_br2 down 00:10:04.252 11:40:34 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@241 -- # ip link delete nvmf_br type bridge 00:10:04.252 11:40:34 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@242 -- # ip link delete nvmf_init_if 00:10:04.252 11:40:34 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@243 -- # ip link delete nvmf_init_if2 00:10:04.252 11:40:34 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@244 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:10:04.252 11:40:34 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@245 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:10:04.252 11:40:34 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@246 -- # remove_spdk_ns 00:10:04.252 11:40:34 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:10:04.252 11:40:34 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:10:04.252 11:40:34 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:10:04.510 11:40:34 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@300 -- # return 0 00:10:04.510 ************************************ 00:10:04.510 END TEST nvmf_lvol 00:10:04.510 ************************************ 00:10:04.510 00:10:04.510 real 0m16.689s 00:10:04.510 user 
1m8.092s 00:10:04.510 sys 0m4.371s 00:10:04.510 11:40:34 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1130 -- # xtrace_disable 00:10:04.510 11:40:34 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:10:04.510 11:40:34 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@28 -- # run_test nvmf_lvs_grow /home/vagrant/spdk_repo/spdk/test/nvmf/target/nvmf_lvs_grow.sh --transport=tcp 00:10:04.510 11:40:34 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:10:04.510 11:40:34 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1111 -- # xtrace_disable 00:10:04.510 11:40:34 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:10:04.511 ************************************ 00:10:04.511 START TEST nvmf_lvs_grow 00:10:04.511 ************************************ 00:10:04.511 11:40:34 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/nvmf_lvs_grow.sh --transport=tcp 00:10:04.511 * Looking for test storage... 00:10:04.511 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:10:04.511 11:40:34 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:10:04.511 11:40:34 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1693 -- # lcov --version 00:10:04.511 11:40:34 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:10:04.511 11:40:34 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:10:04.511 11:40:34 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:10:04.511 11:40:34 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@333 -- # local ver1 ver1_l 00:10:04.511 11:40:34 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@334 -- # local ver2 ver2_l 00:10:04.511 11:40:34 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@336 -- # IFS=.-: 00:10:04.511 11:40:34 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@336 -- # read -ra ver1 00:10:04.511 11:40:34 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@337 -- # IFS=.-: 00:10:04.511 11:40:34 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@337 -- # read -ra ver2 00:10:04.511 11:40:34 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@338 -- # local 'op=<' 00:10:04.511 11:40:34 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@340 -- # ver1_l=2 00:10:04.511 11:40:34 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@341 -- # ver2_l=1 00:10:04.511 11:40:34 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:10:04.511 11:40:34 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@344 -- # case "$op" in 00:10:04.511 11:40:34 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@345 -- # : 1 00:10:04.511 11:40:34 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@364 -- # (( v = 0 )) 00:10:04.511 11:40:34 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:10:04.511 11:40:34 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@365 -- # decimal 1 00:10:04.511 11:40:34 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@353 -- # local d=1 00:10:04.511 11:40:34 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:10:04.511 11:40:34 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@355 -- # echo 1 00:10:04.511 11:40:34 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@365 -- # ver1[v]=1 00:10:04.511 11:40:34 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@366 -- # decimal 2 00:10:04.511 11:40:34 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@353 -- # local d=2 00:10:04.511 11:40:34 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:10:04.511 11:40:34 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@355 -- # echo 2 00:10:04.511 11:40:34 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@366 -- # ver2[v]=2 00:10:04.511 11:40:34 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:10:04.511 11:40:34 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:10:04.511 11:40:34 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@368 -- # return 0 00:10:04.770 11:40:34 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:10:04.770 11:40:34 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:10:04.770 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:04.770 --rc genhtml_branch_coverage=1 00:10:04.770 --rc genhtml_function_coverage=1 00:10:04.770 --rc genhtml_legend=1 00:10:04.770 --rc geninfo_all_blocks=1 00:10:04.770 --rc geninfo_unexecuted_blocks=1 00:10:04.770 00:10:04.770 ' 00:10:04.770 11:40:34 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:10:04.770 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:04.770 --rc genhtml_branch_coverage=1 00:10:04.770 --rc genhtml_function_coverage=1 00:10:04.770 --rc genhtml_legend=1 00:10:04.770 --rc geninfo_all_blocks=1 00:10:04.770 --rc geninfo_unexecuted_blocks=1 00:10:04.770 00:10:04.770 ' 00:10:04.770 11:40:34 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:10:04.770 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:04.770 --rc genhtml_branch_coverage=1 00:10:04.770 --rc genhtml_function_coverage=1 00:10:04.770 --rc genhtml_legend=1 00:10:04.770 --rc geninfo_all_blocks=1 00:10:04.770 --rc geninfo_unexecuted_blocks=1 00:10:04.770 00:10:04.770 ' 00:10:04.770 11:40:34 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:10:04.770 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:04.770 --rc genhtml_branch_coverage=1 00:10:04.770 --rc genhtml_function_coverage=1 00:10:04.770 --rc genhtml_legend=1 00:10:04.770 --rc geninfo_all_blocks=1 00:10:04.770 --rc geninfo_unexecuted_blocks=1 00:10:04.770 00:10:04.770 ' 00:10:04.770 11:40:34 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:10:04.770 11:40:34 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@7 -- # uname -s 00:10:04.770 11:40:34 
nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:10:04.770 11:40:34 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:10:04.770 11:40:34 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:10:04.770 11:40:34 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:10:04.770 11:40:34 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:10:04.770 11:40:34 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:10:04.770 11:40:34 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:10:04.770 11:40:34 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:10:04.770 11:40:34 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:10:04.770 11:40:34 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:10:04.770 11:40:34 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:f820f793-c892-4aa4-a8a4-5ed3fda41d6c 00:10:04.770 11:40:34 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@18 -- # NVME_HOSTID=f820f793-c892-4aa4-a8a4-5ed3fda41d6c 00:10:04.771 11:40:34 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:10:04.771 11:40:34 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:10:04.771 11:40:34 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:10:04.771 11:40:34 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:10:04.771 11:40:34 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:10:04.771 11:40:34 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@15 -- # shopt -s extglob 00:10:04.771 11:40:34 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:10:04.771 11:40:34 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:10:04.771 11:40:34 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:10:04.771 11:40:34 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:04.771 11:40:34 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:04.771 11:40:34 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:04.771 11:40:34 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- paths/export.sh@5 -- # export PATH 00:10:04.771 11:40:34 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:04.771 11:40:34 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@51 -- # : 0 00:10:04.771 11:40:34 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:10:04.771 11:40:34 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:10:04.771 11:40:34 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:10:04.771 11:40:34 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:10:04.771 11:40:34 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:10:04.771 11:40:34 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:10:04.771 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:10:04.771 11:40:34 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:10:04.771 11:40:34 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:10:04.771 11:40:34 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@55 -- # have_pci_nics=0 00:10:04.771 11:40:34 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@11 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:10:04.771 11:40:34 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@12 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 
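Editor's note: before moving on to nvmf_lvs_grow, it is worth summarizing the nvmf_lvol run that just finished. Stripped of the xtrace noise, it is a short rpc.py sequence: two malloc bdevs combined into a RAID0, an lvolstore and lvol on top, exported over NVMe/TCP, then snapshot/clone/inflate exercised while spdk_nvme_perf writes to the namespace. A condensed sketch with the arguments taken from the trace (the UUID captures stand in for the generated values such as 07259b85-... and 2580b2f6-...):

    rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py

    # Transport plus a RAID0 built from two 64 MiB malloc bdevs
    $rpc nvmf_create_transport -t tcp -o -u 8192
    $rpc bdev_malloc_create 64 512                    # -> Malloc0
    $rpc bdev_malloc_create 64 512                    # -> Malloc1
    $rpc bdev_raid_create -n raid0 -z 64 -r 0 -b 'Malloc0 Malloc1'

    # Lvolstore and a 20 MiB lvol on the RAID0
    lvs=$($rpc bdev_lvol_create_lvstore raid0 lvs)
    lvol=$($rpc bdev_lvol_create -u "$lvs" lvol 20)

    # Export the lvol over NVMe/TCP on the in-namespace address
    $rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0
    $rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 "$lvol"
    $rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.3 -s 4420

    # While spdk_nvme_perf runs against the subsystem in the background,
    # exercise snapshot, resize, clone and inflate on the lvol
    snap=$($rpc bdev_lvol_snapshot "$lvol" MY_SNAPSHOT)
    $rpc bdev_lvol_resize "$lvol" 30
    clone=$($rpc bdev_lvol_clone "$snap" MY_CLONE)
    $rpc bdev_lvol_inflate "$clone"

The perf results table earlier in the trace (two lcores, ~18.5k IOPS total at queue depth 128) is the output of that background spdk_nvme_perf run.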
00:10:04.771 11:40:34 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@98 -- # nvmftestinit 00:10:04.771 11:40:34 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:10:04.771 11:40:34 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:10:04.771 11:40:34 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@476 -- # prepare_net_devs 00:10:04.771 11:40:34 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@438 -- # local -g is_hw=no 00:10:04.771 11:40:34 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@440 -- # remove_spdk_ns 00:10:04.771 11:40:34 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:10:04.771 11:40:34 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:10:04.771 11:40:34 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:10:04.771 11:40:34 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@442 -- # [[ virt != virt ]] 00:10:04.771 11:40:34 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@444 -- # [[ no == yes ]] 00:10:04.771 11:40:34 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@451 -- # [[ virt == phy ]] 00:10:04.771 11:40:34 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@454 -- # [[ virt == phy-fallback ]] 00:10:04.771 11:40:34 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@459 -- # [[ tcp == tcp ]] 00:10:04.771 11:40:34 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@460 -- # nvmf_veth_init 00:10:04.771 11:40:34 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@145 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:10:04.771 11:40:34 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@146 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.2 00:10:04.771 11:40:34 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@147 -- # NVMF_FIRST_TARGET_IP=10.0.0.3 00:10:04.771 11:40:34 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@148 -- # NVMF_SECOND_TARGET_IP=10.0.0.4 00:10:04.771 11:40:34 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@149 -- # NVMF_INITIATOR_IP=10.0.0.1 00:10:04.771 11:40:34 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@150 -- # NVMF_BRIDGE=nvmf_br 00:10:04.771 11:40:34 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@151 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:10:04.771 11:40:34 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@152 -- # NVMF_INITIATOR_INTERFACE2=nvmf_init_if2 00:10:04.771 11:40:34 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@153 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:10:04.771 11:40:34 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@154 -- # NVMF_INITIATOR_BRIDGE2=nvmf_init_br2 00:10:04.771 11:40:34 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@155 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:10:04.771 11:40:34 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@156 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:10:04.771 11:40:34 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@157 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:10:04.771 11:40:34 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@158 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:10:04.771 11:40:34 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@159 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 
00:10:04.771 11:40:34 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@160 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:10:04.771 11:40:34 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@162 -- # ip link set nvmf_init_br nomaster 00:10:04.771 Cannot find device "nvmf_init_br" 00:10:04.771 11:40:34 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@162 -- # true 00:10:04.771 11:40:34 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@163 -- # ip link set nvmf_init_br2 nomaster 00:10:04.771 Cannot find device "nvmf_init_br2" 00:10:04.771 11:40:34 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@163 -- # true 00:10:04.771 11:40:34 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@164 -- # ip link set nvmf_tgt_br nomaster 00:10:04.771 Cannot find device "nvmf_tgt_br" 00:10:04.771 11:40:34 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@164 -- # true 00:10:04.771 11:40:34 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@165 -- # ip link set nvmf_tgt_br2 nomaster 00:10:04.771 Cannot find device "nvmf_tgt_br2" 00:10:04.771 11:40:34 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@165 -- # true 00:10:04.771 11:40:34 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@166 -- # ip link set nvmf_init_br down 00:10:04.771 Cannot find device "nvmf_init_br" 00:10:04.771 11:40:34 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@166 -- # true 00:10:04.771 11:40:34 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@167 -- # ip link set nvmf_init_br2 down 00:10:04.771 Cannot find device "nvmf_init_br2" 00:10:04.771 11:40:34 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@167 -- # true 00:10:04.771 11:40:34 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@168 -- # ip link set nvmf_tgt_br down 00:10:04.771 Cannot find device "nvmf_tgt_br" 00:10:04.771 11:40:34 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@168 -- # true 00:10:04.771 11:40:34 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@169 -- # ip link set nvmf_tgt_br2 down 00:10:04.771 Cannot find device "nvmf_tgt_br2" 00:10:04.771 11:40:34 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@169 -- # true 00:10:04.771 11:40:34 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@170 -- # ip link delete nvmf_br type bridge 00:10:04.771 Cannot find device "nvmf_br" 00:10:04.771 11:40:34 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@170 -- # true 00:10:04.771 11:40:34 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@171 -- # ip link delete nvmf_init_if 00:10:04.771 Cannot find device "nvmf_init_if" 00:10:04.771 11:40:34 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@171 -- # true 00:10:04.771 11:40:34 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@172 -- # ip link delete nvmf_init_if2 00:10:04.771 Cannot find device "nvmf_init_if2" 00:10:04.771 11:40:34 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@172 -- # true 00:10:04.771 11:40:34 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@173 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:10:04.771 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:10:04.771 11:40:34 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@173 -- # true 00:10:04.771 11:40:34 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@174 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:10:04.771 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or 
directory 00:10:04.771 11:40:34 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@174 -- # true 00:10:04.771 11:40:34 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@177 -- # ip netns add nvmf_tgt_ns_spdk 00:10:04.771 11:40:34 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@180 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:10:04.772 11:40:34 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@181 -- # ip link add nvmf_init_if2 type veth peer name nvmf_init_br2 00:10:04.772 11:40:34 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@182 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:10:04.772 11:40:34 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@183 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:10:04.772 11:40:34 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:10:04.772 11:40:34 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@187 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:10:04.772 11:40:34 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@190 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:10:04.772 11:40:34 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@191 -- # ip addr add 10.0.0.2/24 dev nvmf_init_if2 00:10:04.772 11:40:34 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@192 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if 00:10:04.772 11:40:34 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@193 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2 00:10:05.032 11:40:34 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@196 -- # ip link set nvmf_init_if up 00:10:05.032 11:40:34 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@197 -- # ip link set nvmf_init_if2 up 00:10:05.032 11:40:34 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@198 -- # ip link set nvmf_init_br up 00:10:05.032 11:40:34 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@199 -- # ip link set nvmf_init_br2 up 00:10:05.032 11:40:34 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@200 -- # ip link set nvmf_tgt_br up 00:10:05.032 11:40:34 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@201 -- # ip link set nvmf_tgt_br2 up 00:10:05.032 11:40:34 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@202 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:10:05.032 11:40:34 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@203 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:10:05.032 11:40:34 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@204 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:10:05.032 11:40:34 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@207 -- # ip link add nvmf_br type bridge 00:10:05.032 11:40:34 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@208 -- # ip link set nvmf_br up 00:10:05.032 11:40:34 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@211 -- # ip link set nvmf_init_br master nvmf_br 00:10:05.032 11:40:34 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@212 -- # ip link set nvmf_init_br2 master nvmf_br 00:10:05.032 11:40:34 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@213 -- # ip link set nvmf_tgt_br master nvmf_br 00:10:05.032 11:40:35 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@214 -- # ip link set nvmf_tgt_br2 master nvmf_br 
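Editor's note: the ipts calls that follow (and the iptr teardown seen at the end of the previous test) are thin wrappers from nvmf/common.sh. Reconstructed from the expanded commands in the trace, they amount to roughly the following; the exact function bodies in the script may differ, but the effect is the same: every rule is tagged with an SPDK_NVMF comment so that teardown can drop exactly those rules by round-tripping the ruleset without them.

    # Insert a rule and tag it so it can be identified later
    ipts() {
        iptables "$@" -m comment --comment "SPDK_NVMF:$*"
    }

    # Remove every tagged rule by restoring a ruleset dump with them filtered out
    iptr() {
        iptables-save | grep -v SPDK_NVMF | iptables-restore
    }

    # As used in the trace: allow NVMe/TCP traffic to port 4420 and bridge forwarding
    ipts -I INPUT 1 -i nvmf_init_if  -p tcp --dport 4420 -j ACCEPT
    ipts -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT
    ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT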
00:10:05.032 11:40:35 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@217 -- # ipts -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:10:05.032 11:40:35 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT' 00:10:05.032 11:40:35 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@218 -- # ipts -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT 00:10:05.032 11:40:35 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT' 00:10:05.032 11:40:35 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@219 -- # ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:10:05.032 11:40:35 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@790 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT' 00:10:05.032 11:40:35 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@222 -- # ping -c 1 10.0.0.3 00:10:05.032 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:10:05.032 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.082 ms 00:10:05.032 00:10:05.032 --- 10.0.0.3 ping statistics --- 00:10:05.032 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:10:05.032 rtt min/avg/max/mdev = 0.082/0.082/0.082/0.000 ms 00:10:05.032 11:40:35 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@223 -- # ping -c 1 10.0.0.4 00:10:05.032 PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data. 00:10:05.032 64 bytes from 10.0.0.4: icmp_seq=1 ttl=64 time=0.065 ms 00:10:05.032 00:10:05.032 --- 10.0.0.4 ping statistics --- 00:10:05.032 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:10:05.032 rtt min/avg/max/mdev = 0.065/0.065/0.065/0.000 ms 00:10:05.032 11:40:35 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@224 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:10:05.032 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:10:05.032 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.034 ms 00:10:05.032 00:10:05.033 --- 10.0.0.1 ping statistics --- 00:10:05.033 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:10:05.033 rtt min/avg/max/mdev = 0.034/0.034/0.034/0.000 ms 00:10:05.033 11:40:35 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@225 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.2 00:10:05.033 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:10:05.033 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.054 ms 00:10:05.033 00:10:05.033 --- 10.0.0.2 ping statistics --- 00:10:05.033 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:10:05.033 rtt min/avg/max/mdev = 0.054/0.054/0.054/0.000 ms 00:10:05.033 11:40:35 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@227 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:10:05.033 11:40:35 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@461 -- # return 0 00:10:05.033 11:40:35 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:10:05.033 11:40:35 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:10:05.033 11:40:35 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:10:05.033 11:40:35 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:10:05.033 11:40:35 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:10:05.033 11:40:35 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:10:05.033 11:40:35 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:10:05.033 11:40:35 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@99 -- # nvmfappstart -m 0x1 00:10:05.033 11:40:35 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:10:05.033 11:40:35 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@726 -- # xtrace_disable 00:10:05.033 11:40:35 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:10:05.033 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:10:05.033 11:40:35 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@509 -- # nvmfpid=77313 00:10:05.033 11:40:35 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@510 -- # waitforlisten 77313 00:10:05.033 11:40:35 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@508 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1 00:10:05.033 11:40:35 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@835 -- # '[' -z 77313 ']' 00:10:05.033 11:40:35 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:10:05.033 11:40:35 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@840 -- # local max_retries=100 00:10:05.033 11:40:35 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:10:05.033 11:40:35 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@844 -- # xtrace_disable 00:10:05.033 11:40:35 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:10:05.033 [2024-11-28 11:40:35.130603] Starting SPDK v25.01-pre git sha1 35cd3e84d / DPDK 24.11.0-rc4 initialization... 
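Editor's note: nvmfappstart, traced here with -m 0x1, launches the target inside the namespace and blocks until its RPC socket answers. A minimal sketch of that step; the polling loop is a stand-in for the waitforlisten helper and uses spdk_get_version as the liveness probe, which is an assumption rather than the helper's exact implementation:

    spdk=/home/vagrant/spdk_repo/spdk

    # Start nvmf_tgt on core 0 inside the target namespace
    ip netns exec nvmf_tgt_ns_spdk "$spdk/build/bin/nvmf_tgt" -i 0 -e 0xFFFF -m 0x1 &
    nvmfpid=$!

    # Wait until the app listens on /var/tmp/spdk.sock (stand-in for waitforlisten)
    until "$spdk/scripts/rpc.py" -s /var/tmp/spdk.sock spdk_get_version >/dev/null 2>&1; do
        kill -0 "$nvmfpid" || { echo "nvmf_tgt exited early" >&2; exit 1; }
        sleep 0.5
    done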
00:10:05.033 [2024-11-28 11:40:35.130713] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:10:05.292 [2024-11-28 11:40:35.257284] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.11.0-rc4 is used. There is no support for it in SPDK. Enabled only for validation. 00:10:05.292 [2024-11-28 11:40:35.283418] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:05.292 [2024-11-28 11:40:35.331440] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:10:05.292 [2024-11-28 11:40:35.331496] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:10:05.292 [2024-11-28 11:40:35.331525] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:10:05.292 [2024-11-28 11:40:35.331533] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:10:05.292 [2024-11-28 11:40:35.331540] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:10:05.292 [2024-11-28 11:40:35.331963] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:10:05.292 [2024-11-28 11:40:35.390979] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:10:05.550 11:40:35 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:10:05.550 11:40:35 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@868 -- # return 0 00:10:05.550 11:40:35 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:10:05.550 11:40:35 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@732 -- # xtrace_disable 00:10:05.550 11:40:35 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:10:05.550 11:40:35 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:10:05.550 11:40:35 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@100 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:10:05.810 [2024-11-28 11:40:35.784912] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:10:05.810 11:40:35 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@102 -- # run_test lvs_grow_clean lvs_grow 00:10:05.810 11:40:35 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:10:05.810 11:40:35 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1111 -- # xtrace_disable 00:10:05.810 11:40:35 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:10:05.810 ************************************ 00:10:05.810 START TEST lvs_grow_clean 00:10:05.810 ************************************ 00:10:05.810 11:40:35 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@1129 -- # lvs_grow 00:10:05.810 11:40:35 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@15 -- # local aio_bdev lvs lvol 00:10:05.810 11:40:35 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@16 -- # local data_clusters free_clusters 00:10:05.810 11:40:35 
nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@17 -- # local bdevperf_pid run_test_pid 00:10:05.810 11:40:35 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@18 -- # local aio_init_size_mb=200 00:10:05.810 11:40:35 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@19 -- # local aio_final_size_mb=400 00:10:05.810 11:40:35 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@20 -- # local lvol_bdev_size_mb=150 00:10:05.810 11:40:35 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@23 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev 00:10:05.810 11:40:35 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@24 -- # truncate -s 200M /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev 00:10:05.810 11:40:35 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@25 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_aio_create /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:10:06.070 11:40:36 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@25 -- # aio_bdev=aio_bdev 00:10:06.070 11:40:36 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create_lvstore --cluster-sz 4194304 --md-pages-per-cluster-ratio 300 aio_bdev lvs 00:10:06.639 11:40:36 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@28 -- # lvs=0e332a34-ba77-4d85-81f4-d3d75b5377cd 00:10:06.639 11:40:36 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@29 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 0e332a34-ba77-4d85-81f4-d3d75b5377cd 00:10:06.639 11:40:36 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@29 -- # jq -r '.[0].total_data_clusters' 00:10:06.898 11:40:36 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@29 -- # data_clusters=49 00:10:06.898 11:40:36 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@30 -- # (( data_clusters == 49 )) 00:10:06.898 11:40:36 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@33 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create -u 0e332a34-ba77-4d85-81f4-d3d75b5377cd lvol 150 00:10:07.156 11:40:37 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@33 -- # lvol=eb3bdcae-6254-4cac-abed-29d61c369333 00:10:07.156 11:40:37 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@36 -- # truncate -s 400M /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev 00:10:07.156 11:40:37 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@37 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_aio_rescan aio_bdev 00:10:07.415 [2024-11-28 11:40:37.428285] bdev_aio.c:1053:bdev_aio_rescan: *NOTICE*: AIO device is resized: bdev name /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev, old block count 51200, new block count 102400 00:10:07.415 [2024-11-28 11:40:37.428420] vbdev_lvol.c: 165:vbdev_lvs_base_bdev_event_cb: *NOTICE*: Unsupported bdev event: type 1 00:10:07.415 true 00:10:07.415 11:40:37 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@38 -- # jq -r 
'.[0].total_data_clusters' 00:10:07.415 11:40:37 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@38 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 0e332a34-ba77-4d85-81f4-d3d75b5377cd 00:10:07.674 11:40:37 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@38 -- # (( data_clusters == 49 )) 00:10:07.674 11:40:37 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@41 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:10:07.932 11:40:38 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@42 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 eb3bdcae-6254-4cac-abed-29d61c369333 00:10:08.192 11:40:38 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@43 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.3 -s 4420 00:10:08.770 [2024-11-28 11:40:38.589123] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:10:08.770 11:40:38 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@44 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.3 -s 4420 00:10:08.770 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:10:08.770 11:40:38 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@48 -- # bdevperf_pid=77399 00:10:08.770 11:40:38 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@47 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock -m 0x2 -o 4096 -q 128 -w randwrite -t 10 -S 1 -z 00:10:08.770 11:40:38 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@49 -- # trap 'killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:10:08.770 11:40:38 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@50 -- # waitforlisten 77399 /var/tmp/bdevperf.sock 00:10:08.770 11:40:38 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@835 -- # '[' -z 77399 ']' 00:10:08.770 11:40:38 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:10:08.770 11:40:38 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@840 -- # local max_retries=100 00:10:08.770 11:40:38 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:10:08.770 11:40:38 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@844 -- # xtrace_disable 00:10:08.770 11:40:38 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@10 -- # set +x 00:10:09.031 [2024-11-28 11:40:38.922887] Starting SPDK v25.01-pre git sha1 35cd3e84d / DPDK 24.11.0-rc4 initialization... 
00:10:09.031 [2024-11-28 11:40:38.922985] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid77399 ] 00:10:09.031 [2024-11-28 11:40:39.044822] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.11.0-rc4 is used. There is no support for it in SPDK. Enabled only for validation. 00:10:09.031 [2024-11-28 11:40:39.077664] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:09.031 [2024-11-28 11:40:39.139393] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:10:09.290 [2024-11-28 11:40:39.205340] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:10:09.914 11:40:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:10:09.914 11:40:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@868 -- # return 0 00:10:09.914 11:40:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@52 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 00:10:10.172 Nvme0n1 00:10:10.172 11:40:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@53 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_get_bdevs -b Nvme0n1 -t 3000 00:10:10.431 [ 00:10:10.431 { 00:10:10.431 "name": "Nvme0n1", 00:10:10.431 "aliases": [ 00:10:10.431 "eb3bdcae-6254-4cac-abed-29d61c369333" 00:10:10.431 ], 00:10:10.431 "product_name": "NVMe disk", 00:10:10.431 "block_size": 4096, 00:10:10.431 "num_blocks": 38912, 00:10:10.431 "uuid": "eb3bdcae-6254-4cac-abed-29d61c369333", 00:10:10.431 "numa_id": -1, 00:10:10.431 "assigned_rate_limits": { 00:10:10.431 "rw_ios_per_sec": 0, 00:10:10.431 "rw_mbytes_per_sec": 0, 00:10:10.431 "r_mbytes_per_sec": 0, 00:10:10.431 "w_mbytes_per_sec": 0 00:10:10.431 }, 00:10:10.431 "claimed": false, 00:10:10.431 "zoned": false, 00:10:10.431 "supported_io_types": { 00:10:10.431 "read": true, 00:10:10.431 "write": true, 00:10:10.431 "unmap": true, 00:10:10.431 "flush": true, 00:10:10.431 "reset": true, 00:10:10.431 "nvme_admin": true, 00:10:10.431 "nvme_io": true, 00:10:10.431 "nvme_io_md": false, 00:10:10.431 "write_zeroes": true, 00:10:10.431 "zcopy": false, 00:10:10.431 "get_zone_info": false, 00:10:10.431 "zone_management": false, 00:10:10.431 "zone_append": false, 00:10:10.431 "compare": true, 00:10:10.431 "compare_and_write": true, 00:10:10.431 "abort": true, 00:10:10.431 "seek_hole": false, 00:10:10.431 "seek_data": false, 00:10:10.431 "copy": true, 00:10:10.431 "nvme_iov_md": false 00:10:10.431 }, 00:10:10.431 "memory_domains": [ 00:10:10.431 { 00:10:10.431 "dma_device_id": "system", 00:10:10.431 "dma_device_type": 1 00:10:10.431 } 00:10:10.431 ], 00:10:10.431 "driver_specific": { 00:10:10.431 "nvme": [ 00:10:10.431 { 00:10:10.431 "trid": { 00:10:10.431 "trtype": "TCP", 00:10:10.431 "adrfam": "IPv4", 00:10:10.431 "traddr": "10.0.0.3", 00:10:10.431 "trsvcid": "4420", 00:10:10.431 "subnqn": "nqn.2016-06.io.spdk:cnode0" 00:10:10.431 }, 00:10:10.431 "ctrlr_data": { 00:10:10.431 "cntlid": 1, 00:10:10.431 "vendor_id": "0x8086", 00:10:10.431 "model_number": "SPDK bdev Controller", 00:10:10.431 "serial_number": "SPDK0", 00:10:10.431 
"firmware_revision": "25.01", 00:10:10.431 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:10:10.431 "oacs": { 00:10:10.431 "security": 0, 00:10:10.431 "format": 0, 00:10:10.431 "firmware": 0, 00:10:10.431 "ns_manage": 0 00:10:10.431 }, 00:10:10.431 "multi_ctrlr": true, 00:10:10.431 "ana_reporting": false 00:10:10.431 }, 00:10:10.431 "vs": { 00:10:10.431 "nvme_version": "1.3" 00:10:10.431 }, 00:10:10.431 "ns_data": { 00:10:10.431 "id": 1, 00:10:10.431 "can_share": true 00:10:10.431 } 00:10:10.431 } 00:10:10.431 ], 00:10:10.431 "mp_policy": "active_passive" 00:10:10.431 } 00:10:10.431 } 00:10:10.431 ] 00:10:10.431 11:40:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@56 -- # run_test_pid=77417 00:10:10.431 11:40:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@55 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:10:10.431 11:40:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@57 -- # sleep 2 00:10:10.691 Running I/O for 10 seconds... 00:10:11.627 Latency(us) 00:10:11.627 [2024-11-28T11:40:41.753Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:10:11.627 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:10:11.627 Nvme0n1 : 1.00 6858.00 26.79 0.00 0.00 0.00 0.00 0.00 00:10:11.627 [2024-11-28T11:40:41.753Z] =================================================================================================================== 00:10:11.627 [2024-11-28T11:40:41.753Z] Total : 6858.00 26.79 0.00 0.00 0.00 0.00 0.00 00:10:11.627 00:10:12.565 11:40:42 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_grow_lvstore -u 0e332a34-ba77-4d85-81f4-d3d75b5377cd 00:10:12.565 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:10:12.565 Nvme0n1 : 2.00 6921.50 27.04 0.00 0.00 0.00 0.00 0.00 00:10:12.565 [2024-11-28T11:40:42.691Z] =================================================================================================================== 00:10:12.565 [2024-11-28T11:40:42.691Z] Total : 6921.50 27.04 0.00 0.00 0.00 0.00 0.00 00:10:12.565 00:10:12.824 true 00:10:12.824 11:40:42 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@61 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 0e332a34-ba77-4d85-81f4-d3d75b5377cd 00:10:12.824 11:40:42 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@61 -- # jq -r '.[0].total_data_clusters' 00:10:13.083 11:40:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@61 -- # data_clusters=99 00:10:13.083 11:40:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@62 -- # (( data_clusters == 99 )) 00:10:13.083 11:40:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@65 -- # wait 77417 00:10:13.651 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:10:13.651 Nvme0n1 : 3.00 6900.33 26.95 0.00 0.00 0.00 0.00 0.00 00:10:13.651 [2024-11-28T11:40:43.777Z] =================================================================================================================== 00:10:13.651 [2024-11-28T11:40:43.777Z] Total : 6900.33 26.95 0.00 0.00 0.00 0.00 0.00 00:10:13.651 00:10:14.587 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, 
IO size: 4096) 00:10:14.587 Nvme0n1 : 4.00 6762.75 26.42 0.00 0.00 0.00 0.00 0.00 00:10:14.587 [2024-11-28T11:40:44.713Z] =================================================================================================================== 00:10:14.587 [2024-11-28T11:40:44.713Z] Total : 6762.75 26.42 0.00 0.00 0.00 0.00 0.00 00:10:14.587 00:10:15.523 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:10:15.523 Nvme0n1 : 5.00 6731.00 26.29 0.00 0.00 0.00 0.00 0.00 00:10:15.523 [2024-11-28T11:40:45.649Z] =================================================================================================================== 00:10:15.523 [2024-11-28T11:40:45.649Z] Total : 6731.00 26.29 0.00 0.00 0.00 0.00 0.00 00:10:15.523 00:10:16.934 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:10:16.934 Nvme0n1 : 6.00 6752.17 26.38 0.00 0.00 0.00 0.00 0.00 00:10:16.934 [2024-11-28T11:40:47.060Z] =================================================================================================================== 00:10:16.934 [2024-11-28T11:40:47.060Z] Total : 6752.17 26.38 0.00 0.00 0.00 0.00 0.00 00:10:16.934 00:10:17.870 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:10:17.870 Nvme0n1 : 7.00 6731.00 26.29 0.00 0.00 0.00 0.00 0.00 00:10:17.870 [2024-11-28T11:40:47.996Z] =================================================================================================================== 00:10:17.870 [2024-11-28T11:40:47.996Z] Total : 6731.00 26.29 0.00 0.00 0.00 0.00 0.00 00:10:17.870 00:10:18.805 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:10:18.805 Nvme0n1 : 8.00 6699.25 26.17 0.00 0.00 0.00 0.00 0.00 00:10:18.805 [2024-11-28T11:40:48.931Z] =================================================================================================================== 00:10:18.805 [2024-11-28T11:40:48.931Z] Total : 6699.25 26.17 0.00 0.00 0.00 0.00 0.00 00:10:18.805 00:10:19.744 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:10:19.744 Nvme0n1 : 9.00 6674.56 26.07 0.00 0.00 0.00 0.00 0.00 00:10:19.744 [2024-11-28T11:40:49.870Z] =================================================================================================================== 00:10:19.744 [2024-11-28T11:40:49.870Z] Total : 6674.56 26.07 0.00 0.00 0.00 0.00 0.00 00:10:19.744 00:10:20.681 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:10:20.681 Nvme0n1 : 10.00 6546.30 25.57 0.00 0.00 0.00 0.00 0.00 00:10:20.681 [2024-11-28T11:40:50.807Z] =================================================================================================================== 00:10:20.681 [2024-11-28T11:40:50.807Z] Total : 6546.30 25.57 0.00 0.00 0.00 0.00 0.00 00:10:20.681 00:10:20.681 00:10:20.681 Latency(us) 00:10:20.681 [2024-11-28T11:40:50.807Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:10:20.681 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:10:20.681 Nvme0n1 : 10.01 6554.59 25.60 0.00 0.00 19522.49 8400.52 182070.92 00:10:20.681 [2024-11-28T11:40:50.807Z] =================================================================================================================== 00:10:20.681 [2024-11-28T11:40:50.807Z] Total : 6554.59 25.60 0.00 0.00 19522.49 8400.52 182070.92 00:10:20.681 { 00:10:20.681 "results": [ 00:10:20.681 { 00:10:20.681 "job": "Nvme0n1", 00:10:20.681 "core_mask": "0x2", 00:10:20.681 "workload": "randwrite", 
00:10:20.681 "status": "finished", 00:10:20.681 "queue_depth": 128, 00:10:20.681 "io_size": 4096, 00:10:20.681 "runtime": 10.006881, 00:10:20.681 "iops": 6554.589786767725, 00:10:20.681 "mibps": 25.603866354561426, 00:10:20.681 "io_failed": 0, 00:10:20.681 "io_timeout": 0, 00:10:20.681 "avg_latency_us": 19522.486077663092, 00:10:20.681 "min_latency_us": 8400.523636363636, 00:10:20.681 "max_latency_us": 182070.92363636364 00:10:20.681 } 00:10:20.681 ], 00:10:20.681 "core_count": 1 00:10:20.681 } 00:10:20.681 11:40:50 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@66 -- # killprocess 77399 00:10:20.681 11:40:50 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@954 -- # '[' -z 77399 ']' 00:10:20.681 11:40:50 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@958 -- # kill -0 77399 00:10:20.681 11:40:50 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@959 -- # uname 00:10:20.681 11:40:50 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:10:20.681 11:40:50 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 77399 00:10:20.681 killing process with pid 77399 00:10:20.681 Received shutdown signal, test time was about 10.000000 seconds 00:10:20.681 00:10:20.681 Latency(us) 00:10:20.681 [2024-11-28T11:40:50.807Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:10:20.681 [2024-11-28T11:40:50.807Z] =================================================================================================================== 00:10:20.681 [2024-11-28T11:40:50.807Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:10:20.681 11:40:50 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:10:20.681 11:40:50 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:10:20.681 11:40:50 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@972 -- # echo 'killing process with pid 77399' 00:10:20.681 11:40:50 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@973 -- # kill 77399 00:10:20.681 11:40:50 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@978 -- # wait 77399 00:10:20.940 11:40:50 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@68 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_listener discovery -t tcp -a 10.0.0.3 -s 4420 00:10:21.198 11:40:51 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@69 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:10:21.456 11:40:51 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@70 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 0e332a34-ba77-4d85-81f4-d3d75b5377cd 00:10:21.456 11:40:51 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@70 -- # jq -r '.[0].free_clusters' 00:10:21.715 11:40:51 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@70 -- # free_clusters=61 00:10:21.715 11:40:51 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@72 -- # [[ '' == \d\i\r\t\y ]] 00:10:21.715 
11:40:51 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@84 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:10:21.975 [2024-11-28 11:40:52.018927] vbdev_lvol.c: 150:vbdev_lvs_hotremove_cb: *NOTICE*: bdev aio_bdev being removed: closing lvstore lvs 00:10:21.975 11:40:52 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@85 -- # NOT /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 0e332a34-ba77-4d85-81f4-d3d75b5377cd 00:10:21.975 11:40:52 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@652 -- # local es=0 00:10:21.975 11:40:52 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@654 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 0e332a34-ba77-4d85-81f4-d3d75b5377cd 00:10:21.975 11:40:52 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@640 -- # local arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:10:21.975 11:40:52 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:10:21.975 11:40:52 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@644 -- # type -t /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:10:21.975 11:40:52 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:10:21.975 11:40:52 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@646 -- # type -P /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:10:21.975 11:40:52 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:10:21.975 11:40:52 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@646 -- # arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:10:21.975 11:40:52 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@646 -- # [[ -x /home/vagrant/spdk_repo/spdk/scripts/rpc.py ]] 00:10:21.975 11:40:52 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@655 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 0e332a34-ba77-4d85-81f4-d3d75b5377cd 00:10:22.234 request: 00:10:22.234 { 00:10:22.234 "uuid": "0e332a34-ba77-4d85-81f4-d3d75b5377cd", 00:10:22.234 "method": "bdev_lvol_get_lvstores", 00:10:22.234 "req_id": 1 00:10:22.234 } 00:10:22.234 Got JSON-RPC error response 00:10:22.234 response: 00:10:22.234 { 00:10:22.234 "code": -19, 00:10:22.234 "message": "No such device" 00:10:22.234 } 00:10:22.234 11:40:52 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@655 -- # es=1 00:10:22.234 11:40:52 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:10:22.234 11:40:52 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:10:22.234 11:40:52 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:10:22.234 11:40:52 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@86 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_aio_create /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:10:22.541 aio_bdev 00:10:22.541 11:40:52 
nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@87 -- # waitforbdev eb3bdcae-6254-4cac-abed-29d61c369333 00:10:22.541 11:40:52 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@903 -- # local bdev_name=eb3bdcae-6254-4cac-abed-29d61c369333 00:10:22.541 11:40:52 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:10:22.541 11:40:52 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@905 -- # local i 00:10:22.541 11:40:52 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:10:22.541 11:40:52 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:10:22.541 11:40:52 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@908 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_wait_for_examine 00:10:23.124 11:40:52 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@910 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b eb3bdcae-6254-4cac-abed-29d61c369333 -t 2000 00:10:23.124 [ 00:10:23.124 { 00:10:23.124 "name": "eb3bdcae-6254-4cac-abed-29d61c369333", 00:10:23.124 "aliases": [ 00:10:23.124 "lvs/lvol" 00:10:23.124 ], 00:10:23.124 "product_name": "Logical Volume", 00:10:23.124 "block_size": 4096, 00:10:23.124 "num_blocks": 38912, 00:10:23.124 "uuid": "eb3bdcae-6254-4cac-abed-29d61c369333", 00:10:23.124 "assigned_rate_limits": { 00:10:23.124 "rw_ios_per_sec": 0, 00:10:23.124 "rw_mbytes_per_sec": 0, 00:10:23.124 "r_mbytes_per_sec": 0, 00:10:23.124 "w_mbytes_per_sec": 0 00:10:23.124 }, 00:10:23.124 "claimed": false, 00:10:23.124 "zoned": false, 00:10:23.124 "supported_io_types": { 00:10:23.124 "read": true, 00:10:23.124 "write": true, 00:10:23.124 "unmap": true, 00:10:23.124 "flush": false, 00:10:23.124 "reset": true, 00:10:23.124 "nvme_admin": false, 00:10:23.124 "nvme_io": false, 00:10:23.124 "nvme_io_md": false, 00:10:23.125 "write_zeroes": true, 00:10:23.125 "zcopy": false, 00:10:23.125 "get_zone_info": false, 00:10:23.125 "zone_management": false, 00:10:23.125 "zone_append": false, 00:10:23.125 "compare": false, 00:10:23.125 "compare_and_write": false, 00:10:23.125 "abort": false, 00:10:23.125 "seek_hole": true, 00:10:23.125 "seek_data": true, 00:10:23.125 "copy": false, 00:10:23.125 "nvme_iov_md": false 00:10:23.125 }, 00:10:23.125 "driver_specific": { 00:10:23.125 "lvol": { 00:10:23.125 "lvol_store_uuid": "0e332a34-ba77-4d85-81f4-d3d75b5377cd", 00:10:23.125 "base_bdev": "aio_bdev", 00:10:23.125 "thin_provision": false, 00:10:23.125 "num_allocated_clusters": 38, 00:10:23.125 "snapshot": false, 00:10:23.125 "clone": false, 00:10:23.125 "esnap_clone": false 00:10:23.125 } 00:10:23.125 } 00:10:23.125 } 00:10:23.125 ] 00:10:23.125 11:40:53 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@911 -- # return 0 00:10:23.125 11:40:53 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@88 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 0e332a34-ba77-4d85-81f4-d3d75b5377cd 00:10:23.125 11:40:53 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@88 -- # jq -r '.[0].free_clusters' 00:10:23.383 11:40:53 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@88 -- # (( free_clusters == 61 )) 00:10:23.383 
11:40:53 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@89 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 0e332a34-ba77-4d85-81f4-d3d75b5377cd 00:10:23.383 11:40:53 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@89 -- # jq -r '.[0].total_data_clusters' 00:10:23.642 11:40:53 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@89 -- # (( data_clusters == 99 )) 00:10:23.642 11:40:53 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@92 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete eb3bdcae-6254-4cac-abed-29d61c369333 00:10:23.901 11:40:53 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@93 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u 0e332a34-ba77-4d85-81f4-d3d75b5377cd 00:10:24.159 11:40:54 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@94 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:10:24.418 11:40:54 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@95 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev 00:10:24.986 ************************************ 00:10:24.986 END TEST lvs_grow_clean 00:10:24.986 ************************************ 00:10:24.986 00:10:24.986 real 0m19.141s 00:10:24.986 user 0m18.084s 00:10:24.986 sys 0m2.644s 00:10:24.986 11:40:54 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@1130 -- # xtrace_disable 00:10:24.986 11:40:54 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@10 -- # set +x 00:10:24.986 11:40:55 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@103 -- # run_test lvs_grow_dirty lvs_grow dirty 00:10:24.986 11:40:55 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:10:24.986 11:40:55 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1111 -- # xtrace_disable 00:10:24.986 11:40:55 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:10:24.986 ************************************ 00:10:24.986 START TEST lvs_grow_dirty 00:10:24.986 ************************************ 00:10:24.986 11:40:55 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@1129 -- # lvs_grow dirty 00:10:24.986 11:40:55 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@15 -- # local aio_bdev lvs lvol 00:10:24.986 11:40:55 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@16 -- # local data_clusters free_clusters 00:10:24.986 11:40:55 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@17 -- # local bdevperf_pid run_test_pid 00:10:24.986 11:40:55 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@18 -- # local aio_init_size_mb=200 00:10:24.986 11:40:55 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@19 -- # local aio_final_size_mb=400 00:10:24.986 11:40:55 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@20 -- # local lvol_bdev_size_mb=150 00:10:24.986 11:40:55 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@23 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev 00:10:24.986 11:40:55 
nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@24 -- # truncate -s 200M /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev 00:10:24.986 11:40:55 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@25 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_aio_create /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:10:25.244 11:40:55 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@25 -- # aio_bdev=aio_bdev 00:10:25.244 11:40:55 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create_lvstore --cluster-sz 4194304 --md-pages-per-cluster-ratio 300 aio_bdev lvs 00:10:25.811 11:40:55 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@28 -- # lvs=ef3f5a7b-4ee7-411a-b0a0-1ace2d1b9076 00:10:25.811 11:40:55 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@29 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u ef3f5a7b-4ee7-411a-b0a0-1ace2d1b9076 00:10:25.811 11:40:55 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@29 -- # jq -r '.[0].total_data_clusters' 00:10:25.811 11:40:55 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@29 -- # data_clusters=49 00:10:25.811 11:40:55 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@30 -- # (( data_clusters == 49 )) 00:10:25.811 11:40:55 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@33 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create -u ef3f5a7b-4ee7-411a-b0a0-1ace2d1b9076 lvol 150 00:10:26.380 11:40:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@33 -- # lvol=1f2f91a5-4e97-4df4-b88a-469df1ee7789 00:10:26.380 11:40:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@36 -- # truncate -s 400M /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev 00:10:26.380 11:40:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@37 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_aio_rescan aio_bdev 00:10:26.380 [2024-11-28 11:40:56.465327] bdev_aio.c:1053:bdev_aio_rescan: *NOTICE*: AIO device is resized: bdev name /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev, old block count 51200, new block count 102400 00:10:26.380 [2024-11-28 11:40:56.465464] vbdev_lvol.c: 165:vbdev_lvs_base_bdev_event_cb: *NOTICE*: Unsupported bdev event: type 1 00:10:26.380 true 00:10:26.380 11:40:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@38 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u ef3f5a7b-4ee7-411a-b0a0-1ace2d1b9076 00:10:26.380 11:40:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@38 -- # jq -r '.[0].total_data_clusters' 00:10:26.640 11:40:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@38 -- # (( data_clusters == 49 )) 00:10:26.640 11:40:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@41 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:10:27.209 11:40:57 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@42 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py 
nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 1f2f91a5-4e97-4df4-b88a-469df1ee7789 00:10:27.209 11:40:57 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@43 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.3 -s 4420 00:10:27.469 [2024-11-28 11:40:57.541984] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:10:27.469 11:40:57 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@44 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.3 -s 4420 00:10:28.038 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:10:28.038 11:40:57 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@48 -- # bdevperf_pid=77677 00:10:28.038 11:40:57 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@47 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock -m 0x2 -o 4096 -q 128 -w randwrite -t 10 -S 1 -z 00:10:28.038 11:40:57 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@49 -- # trap 'killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:10:28.038 11:40:57 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@50 -- # waitforlisten 77677 /var/tmp/bdevperf.sock 00:10:28.038 11:40:57 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@835 -- # '[' -z 77677 ']' 00:10:28.038 11:40:57 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:10:28.038 11:40:57 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@840 -- # local max_retries=100 00:10:28.038 11:40:57 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:10:28.038 11:40:57 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@844 -- # xtrace_disable 00:10:28.038 11:40:57 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:10:28.038 [2024-11-28 11:40:57.968265] Starting SPDK v25.01-pre git sha1 35cd3e84d / DPDK 24.11.0-rc4 initialization... 00:10:28.038 [2024-11-28 11:40:57.968557] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid77677 ] 00:10:28.038 [2024-11-28 11:40:58.090113] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.11.0-rc4 is used. There is no support for it in SPDK. Enabled only for validation. 
00:10:28.038 [2024-11-28 11:40:58.122370] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:28.297 [2024-11-28 11:40:58.170490] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:10:28.297 [2024-11-28 11:40:58.229551] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:10:28.865 11:40:58 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:10:28.865 11:40:58 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@868 -- # return 0 00:10:28.865 11:40:58 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@52 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 00:10:29.432 Nvme0n1 00:10:29.432 11:40:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@53 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_get_bdevs -b Nvme0n1 -t 3000 00:10:29.691 [ 00:10:29.691 { 00:10:29.691 "name": "Nvme0n1", 00:10:29.691 "aliases": [ 00:10:29.691 "1f2f91a5-4e97-4df4-b88a-469df1ee7789" 00:10:29.691 ], 00:10:29.691 "product_name": "NVMe disk", 00:10:29.691 "block_size": 4096, 00:10:29.691 "num_blocks": 38912, 00:10:29.691 "uuid": "1f2f91a5-4e97-4df4-b88a-469df1ee7789", 00:10:29.691 "numa_id": -1, 00:10:29.691 "assigned_rate_limits": { 00:10:29.691 "rw_ios_per_sec": 0, 00:10:29.691 "rw_mbytes_per_sec": 0, 00:10:29.691 "r_mbytes_per_sec": 0, 00:10:29.691 "w_mbytes_per_sec": 0 00:10:29.691 }, 00:10:29.691 "claimed": false, 00:10:29.691 "zoned": false, 00:10:29.691 "supported_io_types": { 00:10:29.691 "read": true, 00:10:29.692 "write": true, 00:10:29.692 "unmap": true, 00:10:29.692 "flush": true, 00:10:29.692 "reset": true, 00:10:29.692 "nvme_admin": true, 00:10:29.692 "nvme_io": true, 00:10:29.692 "nvme_io_md": false, 00:10:29.692 "write_zeroes": true, 00:10:29.692 "zcopy": false, 00:10:29.692 "get_zone_info": false, 00:10:29.692 "zone_management": false, 00:10:29.692 "zone_append": false, 00:10:29.692 "compare": true, 00:10:29.692 "compare_and_write": true, 00:10:29.692 "abort": true, 00:10:29.692 "seek_hole": false, 00:10:29.692 "seek_data": false, 00:10:29.692 "copy": true, 00:10:29.692 "nvme_iov_md": false 00:10:29.692 }, 00:10:29.692 "memory_domains": [ 00:10:29.692 { 00:10:29.692 "dma_device_id": "system", 00:10:29.692 "dma_device_type": 1 00:10:29.692 } 00:10:29.692 ], 00:10:29.692 "driver_specific": { 00:10:29.692 "nvme": [ 00:10:29.692 { 00:10:29.692 "trid": { 00:10:29.692 "trtype": "TCP", 00:10:29.692 "adrfam": "IPv4", 00:10:29.692 "traddr": "10.0.0.3", 00:10:29.692 "trsvcid": "4420", 00:10:29.692 "subnqn": "nqn.2016-06.io.spdk:cnode0" 00:10:29.692 }, 00:10:29.692 "ctrlr_data": { 00:10:29.692 "cntlid": 1, 00:10:29.692 "vendor_id": "0x8086", 00:10:29.692 "model_number": "SPDK bdev Controller", 00:10:29.692 "serial_number": "SPDK0", 00:10:29.692 "firmware_revision": "25.01", 00:10:29.692 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:10:29.692 "oacs": { 00:10:29.692 "security": 0, 00:10:29.692 "format": 0, 00:10:29.692 "firmware": 0, 00:10:29.692 "ns_manage": 0 00:10:29.692 }, 00:10:29.692 "multi_ctrlr": true, 00:10:29.692 "ana_reporting": false 00:10:29.692 }, 00:10:29.692 "vs": { 00:10:29.692 "nvme_version": "1.3" 00:10:29.692 }, 00:10:29.692 "ns_data": { 00:10:29.692 "id": 1, 00:10:29.692 "can_share": true 00:10:29.692 } 00:10:29.692 } 
00:10:29.692 ], 00:10:29.692 "mp_policy": "active_passive" 00:10:29.692 } 00:10:29.692 } 00:10:29.692 ] 00:10:29.692 11:40:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@55 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:10:29.692 11:40:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@56 -- # run_test_pid=77701 00:10:29.692 11:40:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@57 -- # sleep 2 00:10:29.692 Running I/O for 10 seconds... 00:10:30.628 Latency(us) 00:10:30.628 [2024-11-28T11:41:00.754Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:10:30.628 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:10:30.628 Nvme0n1 : 1.00 6475.00 25.29 0.00 0.00 0.00 0.00 0.00 00:10:30.628 [2024-11-28T11:41:00.754Z] =================================================================================================================== 00:10:30.628 [2024-11-28T11:41:00.754Z] Total : 6475.00 25.29 0.00 0.00 0.00 0.00 0.00 00:10:30.628 00:10:31.562 11:41:01 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_grow_lvstore -u ef3f5a7b-4ee7-411a-b0a0-1ace2d1b9076 00:10:31.822 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:10:31.822 Nvme0n1 : 2.00 6730.00 26.29 0.00 0.00 0.00 0.00 0.00 00:10:31.822 [2024-11-28T11:41:01.948Z] =================================================================================================================== 00:10:31.822 [2024-11-28T11:41:01.948Z] Total : 6730.00 26.29 0.00 0.00 0.00 0.00 0.00 00:10:31.822 00:10:31.822 true 00:10:31.822 11:41:01 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@61 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u ef3f5a7b-4ee7-411a-b0a0-1ace2d1b9076 00:10:31.822 11:41:01 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@61 -- # jq -r '.[0].total_data_clusters' 00:10:32.390 11:41:02 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@61 -- # data_clusters=99 00:10:32.390 11:41:02 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@62 -- # (( data_clusters == 99 )) 00:10:32.390 11:41:02 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@65 -- # wait 77701 00:10:32.650 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:10:32.650 Nvme0n1 : 3.00 6730.33 26.29 0.00 0.00 0.00 0.00 0.00 00:10:32.650 [2024-11-28T11:41:02.776Z] =================================================================================================================== 00:10:32.650 [2024-11-28T11:41:02.776Z] Total : 6730.33 26.29 0.00 0.00 0.00 0.00 0.00 00:10:32.650 00:10:34.023 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:10:34.023 Nvme0n1 : 4.00 6825.75 26.66 0.00 0.00 0.00 0.00 0.00 00:10:34.023 [2024-11-28T11:41:04.149Z] =================================================================================================================== 00:10:34.023 [2024-11-28T11:41:04.149Z] Total : 6825.75 26.66 0.00 0.00 0.00 0.00 0.00 00:10:34.023 00:10:34.957 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:10:34.957 Nvme0n1 : 5.00 6806.80 26.59 0.00 0.00 0.00 0.00 0.00 
00:10:34.957 [2024-11-28T11:41:05.083Z] =================================================================================================================== 00:10:34.957 [2024-11-28T11:41:05.083Z] Total : 6806.80 26.59 0.00 0.00 0.00 0.00 0.00 00:10:34.957 00:10:35.899 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:10:35.899 Nvme0n1 : 6.00 6712.83 26.22 0.00 0.00 0.00 0.00 0.00 00:10:35.899 [2024-11-28T11:41:06.025Z] =================================================================================================================== 00:10:35.899 [2024-11-28T11:41:06.025Z] Total : 6712.83 26.22 0.00 0.00 0.00 0.00 0.00 00:10:35.899 00:10:36.831 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:10:36.831 Nvme0n1 : 7.00 6661.00 26.02 0.00 0.00 0.00 0.00 0.00 00:10:36.831 [2024-11-28T11:41:06.957Z] =================================================================================================================== 00:10:36.831 [2024-11-28T11:41:06.957Z] Total : 6661.00 26.02 0.00 0.00 0.00 0.00 0.00 00:10:36.831 00:10:37.779 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:10:37.779 Nvme0n1 : 8.00 6653.88 25.99 0.00 0.00 0.00 0.00 0.00 00:10:37.779 [2024-11-28T11:41:07.905Z] =================================================================================================================== 00:10:37.779 [2024-11-28T11:41:07.905Z] Total : 6653.88 25.99 0.00 0.00 0.00 0.00 0.00 00:10:37.779 00:10:38.716 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:10:38.716 Nvme0n1 : 9.00 6648.33 25.97 0.00 0.00 0.00 0.00 0.00 00:10:38.716 [2024-11-28T11:41:08.842Z] =================================================================================================================== 00:10:38.716 [2024-11-28T11:41:08.842Z] Total : 6648.33 25.97 0.00 0.00 0.00 0.00 0.00 00:10:38.716 00:10:39.654 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:10:39.654 Nvme0n1 : 10.00 6556.90 25.61 0.00 0.00 0.00 0.00 0.00 00:10:39.654 [2024-11-28T11:41:09.780Z] =================================================================================================================== 00:10:39.654 [2024-11-28T11:41:09.780Z] Total : 6556.90 25.61 0.00 0.00 0.00 0.00 0.00 00:10:39.654 00:10:39.654 00:10:39.654 Latency(us) 00:10:39.654 [2024-11-28T11:41:09.780Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:10:39.654 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:10:39.654 Nvme0n1 : 10.01 6565.68 25.65 0.00 0.00 19489.29 13881.72 96754.97 00:10:39.654 [2024-11-28T11:41:09.780Z] =================================================================================================================== 00:10:39.654 [2024-11-28T11:41:09.780Z] Total : 6565.68 25.65 0.00 0.00 19489.29 13881.72 96754.97 00:10:39.654 { 00:10:39.654 "results": [ 00:10:39.654 { 00:10:39.654 "job": "Nvme0n1", 00:10:39.654 "core_mask": "0x2", 00:10:39.654 "workload": "randwrite", 00:10:39.654 "status": "finished", 00:10:39.654 "queue_depth": 128, 00:10:39.654 "io_size": 4096, 00:10:39.654 "runtime": 10.006119, 00:10:39.654 "iops": 6565.682458903397, 00:10:39.654 "mibps": 25.647197105091394, 00:10:39.654 "io_failed": 0, 00:10:39.654 "io_timeout": 0, 00:10:39.654 "avg_latency_us": 19489.290885899038, 00:10:39.654 "min_latency_us": 13881.716363636364, 00:10:39.654 "max_latency_us": 96754.96727272727 00:10:39.654 } 00:10:39.654 ], 00:10:39.654 "core_count": 1 00:10:39.654 } 
00:10:39.654 11:41:09 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@66 -- # killprocess 77677 00:10:39.654 11:41:09 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@954 -- # '[' -z 77677 ']' 00:10:39.654 11:41:09 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@958 -- # kill -0 77677 00:10:39.654 11:41:09 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@959 -- # uname 00:10:39.654 11:41:09 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:10:39.655 11:41:09 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 77677 00:10:39.915 killing process with pid 77677 00:10:39.915 Received shutdown signal, test time was about 10.000000 seconds 00:10:39.915 00:10:39.915 Latency(us) 00:10:39.915 [2024-11-28T11:41:10.041Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:10:39.915 [2024-11-28T11:41:10.041Z] =================================================================================================================== 00:10:39.915 [2024-11-28T11:41:10.041Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:10:39.915 11:41:09 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:10:39.915 11:41:09 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:10:39.915 11:41:09 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@972 -- # echo 'killing process with pid 77677' 00:10:39.915 11:41:09 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@973 -- # kill 77677 00:10:39.915 11:41:09 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@978 -- # wait 77677 00:10:39.915 11:41:09 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@68 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_listener discovery -t tcp -a 10.0.0.3 -s 4420 00:10:40.483 11:41:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@69 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:10:40.483 11:41:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@70 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u ef3f5a7b-4ee7-411a-b0a0-1ace2d1b9076 00:10:40.483 11:41:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@70 -- # jq -r '.[0].free_clusters' 00:10:40.742 11:41:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@70 -- # free_clusters=61 00:10:40.742 11:41:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@72 -- # [[ dirty == \d\i\r\t\y ]] 00:10:40.742 11:41:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@74 -- # kill -9 77313 00:10:40.742 11:41:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@75 -- # wait 77313 00:10:40.742 /home/vagrant/spdk_repo/spdk/test/nvmf/target/nvmf_lvs_grow.sh: line 75: 77313 Killed "${NVMF_APP[@]}" "$@" 00:10:40.742 11:41:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@75 -- # true 00:10:40.742 11:41:10 
nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@76 -- # nvmfappstart -m 0x1 00:10:40.742 11:41:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:10:40.742 11:41:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@726 -- # xtrace_disable 00:10:40.742 11:41:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:10:40.742 11:41:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@509 -- # nvmfpid=77839 00:10:40.742 11:41:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@508 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1 00:10:40.742 11:41:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@510 -- # waitforlisten 77839 00:10:40.742 11:41:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@835 -- # '[' -z 77839 ']' 00:10:40.742 11:41:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:10:40.742 11:41:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@840 -- # local max_retries=100 00:10:40.742 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:10:40.742 11:41:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:10:40.742 11:41:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@844 -- # xtrace_disable 00:10:40.742 11:41:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:10:41.002 [2024-11-28 11:41:10.916469] Starting SPDK v25.01-pre git sha1 35cd3e84d / DPDK 24.11.0-rc4 initialization... 00:10:41.002 [2024-11-28 11:41:10.916587] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:10:41.002 [2024-11-28 11:41:11.048181] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.11.0-rc4 is used. There is no support for it in SPDK. Enabled only for validation. 00:10:41.002 [2024-11-28 11:41:11.068417] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:41.002 [2024-11-28 11:41:11.109946] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:10:41.002 [2024-11-28 11:41:11.110000] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:10:41.002 [2024-11-28 11:41:11.110026] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:10:41.002 [2024-11-28 11:41:11.110034] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:10:41.002 [2024-11-28 11:41:11.110041] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:10:41.002 [2024-11-28 11:41:11.110477] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:10:41.262 [2024-11-28 11:41:11.170798] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:10:41.829 11:41:11 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:10:41.829 11:41:11 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@868 -- # return 0 00:10:41.829 11:41:11 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:10:41.829 11:41:11 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@732 -- # xtrace_disable 00:10:41.829 11:41:11 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:10:41.829 11:41:11 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:10:41.829 11:41:11 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@77 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_aio_create /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:10:42.088 [2024-11-28 11:41:12.176729] blobstore.c:4896:bs_recover: *NOTICE*: Performing recovery on blobstore 00:10:42.088 [2024-11-28 11:41:12.177284] blobstore.c:4843:bs_load_replay_md_cpl: *NOTICE*: Recover: blob 0x0 00:10:42.088 [2024-11-28 11:41:12.177466] blobstore.c:4843:bs_load_replay_md_cpl: *NOTICE*: Recover: blob 0x1 00:10:42.347 11:41:12 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@77 -- # aio_bdev=aio_bdev 00:10:42.347 11:41:12 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@78 -- # waitforbdev 1f2f91a5-4e97-4df4-b88a-469df1ee7789 00:10:42.347 11:41:12 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@903 -- # local bdev_name=1f2f91a5-4e97-4df4-b88a-469df1ee7789 00:10:42.347 11:41:12 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:10:42.347 11:41:12 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@905 -- # local i 00:10:42.347 11:41:12 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:10:42.347 11:41:12 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:10:42.347 11:41:12 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@908 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_wait_for_examine 00:10:42.654 11:41:12 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@910 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b 1f2f91a5-4e97-4df4-b88a-469df1ee7789 -t 2000 00:10:42.913 [ 00:10:42.913 { 00:10:42.913 "name": "1f2f91a5-4e97-4df4-b88a-469df1ee7789", 00:10:42.913 "aliases": [ 00:10:42.913 "lvs/lvol" 00:10:42.913 ], 00:10:42.913 "product_name": "Logical Volume", 00:10:42.913 "block_size": 4096, 00:10:42.913 "num_blocks": 38912, 00:10:42.913 "uuid": "1f2f91a5-4e97-4df4-b88a-469df1ee7789", 00:10:42.913 "assigned_rate_limits": { 00:10:42.913 "rw_ios_per_sec": 0, 00:10:42.913 "rw_mbytes_per_sec": 0, 00:10:42.913 "r_mbytes_per_sec": 0, 00:10:42.913 "w_mbytes_per_sec": 0 00:10:42.913 }, 00:10:42.913 
"claimed": false, 00:10:42.913 "zoned": false, 00:10:42.913 "supported_io_types": { 00:10:42.913 "read": true, 00:10:42.913 "write": true, 00:10:42.913 "unmap": true, 00:10:42.913 "flush": false, 00:10:42.913 "reset": true, 00:10:42.913 "nvme_admin": false, 00:10:42.913 "nvme_io": false, 00:10:42.913 "nvme_io_md": false, 00:10:42.913 "write_zeroes": true, 00:10:42.913 "zcopy": false, 00:10:42.913 "get_zone_info": false, 00:10:42.913 "zone_management": false, 00:10:42.913 "zone_append": false, 00:10:42.913 "compare": false, 00:10:42.913 "compare_and_write": false, 00:10:42.913 "abort": false, 00:10:42.913 "seek_hole": true, 00:10:42.913 "seek_data": true, 00:10:42.913 "copy": false, 00:10:42.913 "nvme_iov_md": false 00:10:42.913 }, 00:10:42.913 "driver_specific": { 00:10:42.913 "lvol": { 00:10:42.913 "lvol_store_uuid": "ef3f5a7b-4ee7-411a-b0a0-1ace2d1b9076", 00:10:42.913 "base_bdev": "aio_bdev", 00:10:42.913 "thin_provision": false, 00:10:42.913 "num_allocated_clusters": 38, 00:10:42.913 "snapshot": false, 00:10:42.913 "clone": false, 00:10:42.913 "esnap_clone": false 00:10:42.913 } 00:10:42.913 } 00:10:42.913 } 00:10:42.913 ] 00:10:42.913 11:41:12 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@911 -- # return 0 00:10:42.913 11:41:12 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@79 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u ef3f5a7b-4ee7-411a-b0a0-1ace2d1b9076 00:10:42.913 11:41:12 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@79 -- # jq -r '.[0].free_clusters' 00:10:43.172 11:41:13 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@79 -- # (( free_clusters == 61 )) 00:10:43.172 11:41:13 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@80 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u ef3f5a7b-4ee7-411a-b0a0-1ace2d1b9076 00:10:43.172 11:41:13 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@80 -- # jq -r '.[0].total_data_clusters' 00:10:43.432 11:41:13 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@80 -- # (( data_clusters == 99 )) 00:10:43.432 11:41:13 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@84 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:10:43.692 [2024-11-28 11:41:13.678680] vbdev_lvol.c: 150:vbdev_lvs_hotremove_cb: *NOTICE*: bdev aio_bdev being removed: closing lvstore lvs 00:10:43.692 11:41:13 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@85 -- # NOT /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u ef3f5a7b-4ee7-411a-b0a0-1ace2d1b9076 00:10:43.692 11:41:13 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@652 -- # local es=0 00:10:43.692 11:41:13 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@654 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u ef3f5a7b-4ee7-411a-b0a0-1ace2d1b9076 00:10:43.692 11:41:13 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@640 -- # local arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:10:43.692 11:41:13 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:10:43.692 11:41:13 
nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@644 -- # type -t /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:10:43.692 11:41:13 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:10:43.692 11:41:13 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@646 -- # type -P /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:10:43.692 11:41:13 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:10:43.692 11:41:13 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@646 -- # arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:10:43.692 11:41:13 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@646 -- # [[ -x /home/vagrant/spdk_repo/spdk/scripts/rpc.py ]] 00:10:43.692 11:41:13 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@655 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u ef3f5a7b-4ee7-411a-b0a0-1ace2d1b9076 00:10:43.951 request: 00:10:43.951 { 00:10:43.951 "uuid": "ef3f5a7b-4ee7-411a-b0a0-1ace2d1b9076", 00:10:43.951 "method": "bdev_lvol_get_lvstores", 00:10:43.951 "req_id": 1 00:10:43.951 } 00:10:43.951 Got JSON-RPC error response 00:10:43.951 response: 00:10:43.951 { 00:10:43.951 "code": -19, 00:10:43.951 "message": "No such device" 00:10:43.951 } 00:10:43.951 11:41:14 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@655 -- # es=1 00:10:43.951 11:41:14 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:10:43.951 11:41:14 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:10:43.951 11:41:14 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:10:43.951 11:41:14 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@86 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_aio_create /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:10:44.519 aio_bdev 00:10:44.519 11:41:14 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@87 -- # waitforbdev 1f2f91a5-4e97-4df4-b88a-469df1ee7789 00:10:44.519 11:41:14 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@903 -- # local bdev_name=1f2f91a5-4e97-4df4-b88a-469df1ee7789 00:10:44.519 11:41:14 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:10:44.519 11:41:14 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@905 -- # local i 00:10:44.519 11:41:14 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:10:44.519 11:41:14 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:10:44.519 11:41:14 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@908 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_wait_for_examine 00:10:44.519 11:41:14 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@910 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b 1f2f91a5-4e97-4df4-b88a-469df1ee7789 -t 2000 00:10:44.807 [ 00:10:44.807 { 
00:10:44.807 "name": "1f2f91a5-4e97-4df4-b88a-469df1ee7789", 00:10:44.807 "aliases": [ 00:10:44.807 "lvs/lvol" 00:10:44.807 ], 00:10:44.807 "product_name": "Logical Volume", 00:10:44.807 "block_size": 4096, 00:10:44.807 "num_blocks": 38912, 00:10:44.807 "uuid": "1f2f91a5-4e97-4df4-b88a-469df1ee7789", 00:10:44.807 "assigned_rate_limits": { 00:10:44.807 "rw_ios_per_sec": 0, 00:10:44.807 "rw_mbytes_per_sec": 0, 00:10:44.807 "r_mbytes_per_sec": 0, 00:10:44.807 "w_mbytes_per_sec": 0 00:10:44.807 }, 00:10:44.807 "claimed": false, 00:10:44.807 "zoned": false, 00:10:44.807 "supported_io_types": { 00:10:44.807 "read": true, 00:10:44.807 "write": true, 00:10:44.807 "unmap": true, 00:10:44.807 "flush": false, 00:10:44.807 "reset": true, 00:10:44.807 "nvme_admin": false, 00:10:44.807 "nvme_io": false, 00:10:44.807 "nvme_io_md": false, 00:10:44.807 "write_zeroes": true, 00:10:44.807 "zcopy": false, 00:10:44.807 "get_zone_info": false, 00:10:44.807 "zone_management": false, 00:10:44.807 "zone_append": false, 00:10:44.807 "compare": false, 00:10:44.807 "compare_and_write": false, 00:10:44.807 "abort": false, 00:10:44.807 "seek_hole": true, 00:10:44.807 "seek_data": true, 00:10:44.807 "copy": false, 00:10:44.807 "nvme_iov_md": false 00:10:44.807 }, 00:10:44.807 "driver_specific": { 00:10:44.807 "lvol": { 00:10:44.807 "lvol_store_uuid": "ef3f5a7b-4ee7-411a-b0a0-1ace2d1b9076", 00:10:44.807 "base_bdev": "aio_bdev", 00:10:44.807 "thin_provision": false, 00:10:44.807 "num_allocated_clusters": 38, 00:10:44.807 "snapshot": false, 00:10:44.807 "clone": false, 00:10:44.807 "esnap_clone": false 00:10:44.807 } 00:10:44.807 } 00:10:44.807 } 00:10:44.807 ] 00:10:44.807 11:41:14 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@911 -- # return 0 00:10:44.807 11:41:14 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@88 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u ef3f5a7b-4ee7-411a-b0a0-1ace2d1b9076 00:10:44.807 11:41:14 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@88 -- # jq -r '.[0].free_clusters' 00:10:45.066 11:41:15 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@88 -- # (( free_clusters == 61 )) 00:10:45.066 11:41:15 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@89 -- # jq -r '.[0].total_data_clusters' 00:10:45.066 11:41:15 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@89 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u ef3f5a7b-4ee7-411a-b0a0-1ace2d1b9076 00:10:45.631 11:41:15 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@89 -- # (( data_clusters == 99 )) 00:10:45.631 11:41:15 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@92 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete 1f2f91a5-4e97-4df4-b88a-469df1ee7789 00:10:45.890 11:41:15 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@93 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u ef3f5a7b-4ee7-411a-b0a0-1ace2d1b9076 00:10:46.150 11:41:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@94 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:10:46.408 11:41:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@95 -- # rm -f 
/home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev 00:10:46.977 ************************************ 00:10:46.977 END TEST lvs_grow_dirty 00:10:46.977 ************************************ 00:10:46.977 00:10:46.977 real 0m21.783s 00:10:46.977 user 0m44.593s 00:10:46.977 sys 0m8.582s 00:10:46.977 11:41:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@1130 -- # xtrace_disable 00:10:46.977 11:41:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:10:46.977 11:41:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@1 -- # process_shm --id 0 00:10:46.977 11:41:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@812 -- # type=--id 00:10:46.977 11:41:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@813 -- # id=0 00:10:46.977 11:41:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@814 -- # '[' --id = --pid ']' 00:10:46.977 11:41:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@818 -- # find /dev/shm -name '*.0' -printf '%f\n' 00:10:46.977 11:41:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@818 -- # shm_files=nvmf_trace.0 00:10:46.977 11:41:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@820 -- # [[ -z nvmf_trace.0 ]] 00:10:46.977 11:41:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@824 -- # for n in $shm_files 00:10:46.977 11:41:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@825 -- # tar -C /dev/shm/ -cvzf /home/vagrant/spdk_repo/spdk/../output/nvmf_trace.0_shm.tar.gz nvmf_trace.0 00:10:46.977 nvmf_trace.0 00:10:46.977 11:41:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@827 -- # return 0 00:10:46.977 11:41:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@1 -- # nvmftestfini 00:10:46.977 11:41:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@516 -- # nvmfcleanup 00:10:46.977 11:41:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@121 -- # sync 00:10:46.977 11:41:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:10:46.977 11:41:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@124 -- # set +e 00:10:46.977 11:41:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@125 -- # for i in {1..20} 00:10:46.977 11:41:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:10:46.977 rmmod nvme_tcp 00:10:46.977 rmmod nvme_fabrics 00:10:46.977 rmmod nvme_keyring 00:10:46.977 11:41:17 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:10:46.977 11:41:17 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@128 -- # set -e 00:10:46.977 11:41:17 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@129 -- # return 0 00:10:46.977 11:41:17 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@517 -- # '[' -n 77839 ']' 00:10:46.977 11:41:17 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@518 -- # killprocess 77839 00:10:46.977 11:41:17 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@954 -- # '[' -z 77839 ']' 00:10:46.977 11:41:17 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@958 -- # kill -0 77839 00:10:46.977 11:41:17 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@959 -- # uname 00:10:46.977 11:41:17 
nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:10:46.977 11:41:17 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 77839 00:10:46.977 killing process with pid 77839 00:10:46.977 11:41:17 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:10:46.977 11:41:17 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:10:46.977 11:41:17 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@972 -- # echo 'killing process with pid 77839' 00:10:46.977 11:41:17 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@973 -- # kill 77839 00:10:46.977 11:41:17 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@978 -- # wait 77839 00:10:47.237 11:41:17 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:10:47.237 11:41:17 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:10:47.237 11:41:17 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:10:47.237 11:41:17 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@297 -- # iptr 00:10:47.237 11:41:17 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@791 -- # iptables-restore 00:10:47.237 11:41:17 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:10:47.237 11:41:17 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@791 -- # iptables-save 00:10:47.237 11:41:17 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@298 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:10:47.237 11:41:17 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@299 -- # nvmf_veth_fini 00:10:47.237 11:41:17 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@233 -- # ip link set nvmf_init_br nomaster 00:10:47.237 11:41:17 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@234 -- # ip link set nvmf_init_br2 nomaster 00:10:47.237 11:41:17 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@235 -- # ip link set nvmf_tgt_br nomaster 00:10:47.237 11:41:17 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@236 -- # ip link set nvmf_tgt_br2 nomaster 00:10:47.497 11:41:17 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@237 -- # ip link set nvmf_init_br down 00:10:47.497 11:41:17 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@238 -- # ip link set nvmf_init_br2 down 00:10:47.497 11:41:17 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@239 -- # ip link set nvmf_tgt_br down 00:10:47.497 11:41:17 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@240 -- # ip link set nvmf_tgt_br2 down 00:10:47.497 11:41:17 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@241 -- # ip link delete nvmf_br type bridge 00:10:47.497 11:41:17 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@242 -- # ip link delete nvmf_init_if 00:10:47.497 11:41:17 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@243 -- # ip link delete nvmf_init_if2 00:10:47.497 11:41:17 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@244 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:10:47.497 11:41:17 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@245 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:10:47.497 11:41:17 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- 
nvmf/common.sh@246 -- # remove_spdk_ns 00:10:47.497 11:41:17 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:10:47.497 11:41:17 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:10:47.497 11:41:17 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:10:47.497 11:41:17 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@300 -- # return 0 00:10:47.497 00:10:47.497 real 0m43.105s 00:10:47.497 user 1m9.664s 00:10:47.497 sys 0m12.049s 00:10:47.497 11:41:17 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1130 -- # xtrace_disable 00:10:47.497 11:41:17 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:10:47.497 ************************************ 00:10:47.497 END TEST nvmf_lvs_grow 00:10:47.497 ************************************ 00:10:47.497 11:41:17 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@29 -- # run_test nvmf_bdev_io_wait /home/vagrant/spdk_repo/spdk/test/nvmf/target/bdev_io_wait.sh --transport=tcp 00:10:47.497 11:41:17 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:10:47.497 11:41:17 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1111 -- # xtrace_disable 00:10:47.497 11:41:17 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:10:47.497 ************************************ 00:10:47.497 START TEST nvmf_bdev_io_wait 00:10:47.497 ************************************ 00:10:47.497 11:41:17 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/bdev_io_wait.sh --transport=tcp 00:10:47.757 * Looking for test storage... 
00:10:47.757 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:10:47.757 11:41:17 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:10:47.757 11:41:17 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1693 -- # lcov --version 00:10:47.757 11:41:17 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:10:47.757 11:41:17 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:10:47.757 11:41:17 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:10:47.757 11:41:17 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@333 -- # local ver1 ver1_l 00:10:47.757 11:41:17 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@334 -- # local ver2 ver2_l 00:10:47.757 11:41:17 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@336 -- # IFS=.-: 00:10:47.757 11:41:17 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@336 -- # read -ra ver1 00:10:47.757 11:41:17 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@337 -- # IFS=.-: 00:10:47.757 11:41:17 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@337 -- # read -ra ver2 00:10:47.757 11:41:17 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@338 -- # local 'op=<' 00:10:47.757 11:41:17 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@340 -- # ver1_l=2 00:10:47.757 11:41:17 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@341 -- # ver2_l=1 00:10:47.757 11:41:17 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:10:47.757 11:41:17 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@344 -- # case "$op" in 00:10:47.757 11:41:17 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@345 -- # : 1 00:10:47.757 11:41:17 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@364 -- # (( v = 0 )) 00:10:47.757 11:41:17 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:10:47.757 11:41:17 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@365 -- # decimal 1 00:10:47.757 11:41:17 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@353 -- # local d=1 00:10:47.757 11:41:17 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:10:47.757 11:41:17 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@355 -- # echo 1 00:10:47.757 11:41:17 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@365 -- # ver1[v]=1 00:10:47.757 11:41:17 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@366 -- # decimal 2 00:10:47.757 11:41:17 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@353 -- # local d=2 00:10:47.757 11:41:17 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:10:47.757 11:41:17 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@355 -- # echo 2 00:10:47.757 11:41:17 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@366 -- # ver2[v]=2 00:10:47.757 11:41:17 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:10:47.757 11:41:17 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:10:47.757 11:41:17 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@368 -- # return 0 00:10:47.757 11:41:17 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:10:47.757 11:41:17 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:10:47.757 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:47.757 --rc genhtml_branch_coverage=1 00:10:47.757 --rc genhtml_function_coverage=1 00:10:47.757 --rc genhtml_legend=1 00:10:47.757 --rc geninfo_all_blocks=1 00:10:47.757 --rc geninfo_unexecuted_blocks=1 00:10:47.757 00:10:47.757 ' 00:10:47.757 11:41:17 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:10:47.757 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:47.757 --rc genhtml_branch_coverage=1 00:10:47.757 --rc genhtml_function_coverage=1 00:10:47.757 --rc genhtml_legend=1 00:10:47.757 --rc geninfo_all_blocks=1 00:10:47.757 --rc geninfo_unexecuted_blocks=1 00:10:47.757 00:10:47.757 ' 00:10:47.757 11:41:17 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:10:47.757 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:47.757 --rc genhtml_branch_coverage=1 00:10:47.757 --rc genhtml_function_coverage=1 00:10:47.757 --rc genhtml_legend=1 00:10:47.757 --rc geninfo_all_blocks=1 00:10:47.757 --rc geninfo_unexecuted_blocks=1 00:10:47.757 00:10:47.757 ' 00:10:47.757 11:41:17 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:10:47.757 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:47.757 --rc genhtml_branch_coverage=1 00:10:47.757 --rc genhtml_function_coverage=1 00:10:47.757 --rc genhtml_legend=1 00:10:47.757 --rc geninfo_all_blocks=1 00:10:47.757 --rc geninfo_unexecuted_blocks=1 00:10:47.757 00:10:47.757 ' 00:10:47.757 11:41:17 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:10:47.757 11:41:17 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait 
-- nvmf/common.sh@7 -- # uname -s 00:10:47.757 11:41:17 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:10:47.757 11:41:17 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:10:47.757 11:41:17 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:10:47.757 11:41:17 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:10:47.757 11:41:17 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:10:47.757 11:41:17 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:10:47.757 11:41:17 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:10:47.757 11:41:17 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:10:47.757 11:41:17 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:10:47.757 11:41:17 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:10:47.758 11:41:17 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:f820f793-c892-4aa4-a8a4-5ed3fda41d6c 00:10:47.758 11:41:17 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@18 -- # NVME_HOSTID=f820f793-c892-4aa4-a8a4-5ed3fda41d6c 00:10:47.758 11:41:17 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:10:47.758 11:41:17 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:10:47.758 11:41:17 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:10:47.758 11:41:17 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:10:47.758 11:41:17 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:10:47.758 11:41:17 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@15 -- # shopt -s extglob 00:10:47.758 11:41:17 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:10:47.758 11:41:17 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:10:47.758 11:41:17 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:10:47.758 11:41:17 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:47.758 11:41:17 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:47.758 11:41:17 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:47.758 11:41:17 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- paths/export.sh@5 -- # export PATH 00:10:47.758 11:41:17 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:47.758 11:41:17 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@51 -- # : 0 00:10:47.758 11:41:17 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:10:47.758 11:41:17 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:10:47.758 11:41:17 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:10:47.758 11:41:17 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:10:47.758 11:41:17 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:10:47.758 11:41:17 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:10:47.758 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:10:47.758 11:41:17 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:10:47.758 11:41:17 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:10:47.758 11:41:17 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@55 -- # have_pci_nics=0 00:10:47.758 11:41:17 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@11 -- # MALLOC_BDEV_SIZE=64 00:10:47.758 11:41:17 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@12 -- # MALLOC_BLOCK_SIZE=512 
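The "[: : integer expression expected" message captured just above comes from nvmf/common.sh line 33 evaluating '[' '' -eq 1 ']' with an empty value; it is harmless for this run, but the failure mode is easy to reproduce and to guard against. A minimal sketch follows (the flag name below is hypothetical, since the variable tested at line 33 is not visible in the trace):

flag=""                                   # empty/unset test flag, as in the trace above
[ "$flag" -eq 1 ] && echo "enabled"       # prints "[: : integer expression expected", test exits with status 2
[ "${flag:-0}" -eq 1 ] && echo "enabled"  # guarded form: an empty value expands to 0 and no error is printed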
00:10:47.758 11:41:17 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@14 -- # nvmftestinit 00:10:47.758 11:41:17 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:10:47.758 11:41:17 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:10:47.758 11:41:17 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@476 -- # prepare_net_devs 00:10:47.758 11:41:17 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@438 -- # local -g is_hw=no 00:10:47.758 11:41:17 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@440 -- # remove_spdk_ns 00:10:47.758 11:41:17 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:10:47.758 11:41:17 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:10:47.758 11:41:17 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:10:47.758 11:41:17 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@442 -- # [[ virt != virt ]] 00:10:47.758 11:41:17 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@444 -- # [[ no == yes ]] 00:10:47.758 11:41:17 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@451 -- # [[ virt == phy ]] 00:10:47.758 11:41:17 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@454 -- # [[ virt == phy-fallback ]] 00:10:47.758 11:41:17 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@459 -- # [[ tcp == tcp ]] 00:10:47.758 11:41:17 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@460 -- # nvmf_veth_init 00:10:47.758 11:41:17 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@145 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:10:47.758 11:41:17 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@146 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.2 00:10:47.758 11:41:17 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@147 -- # NVMF_FIRST_TARGET_IP=10.0.0.3 00:10:47.758 11:41:17 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@148 -- # NVMF_SECOND_TARGET_IP=10.0.0.4 00:10:47.758 11:41:17 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@149 -- # NVMF_INITIATOR_IP=10.0.0.1 00:10:47.758 11:41:17 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@150 -- # NVMF_BRIDGE=nvmf_br 00:10:47.758 11:41:17 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@151 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:10:47.758 11:41:17 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@152 -- # NVMF_INITIATOR_INTERFACE2=nvmf_init_if2 00:10:47.758 11:41:17 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@153 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:10:47.758 11:41:17 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@154 -- # NVMF_INITIATOR_BRIDGE2=nvmf_init_br2 00:10:47.758 11:41:17 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@155 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:10:47.758 11:41:17 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@156 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:10:47.758 11:41:17 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@157 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:10:47.758 11:41:17 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@158 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:10:47.758 
11:41:17 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@159 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:10:47.758 11:41:17 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@160 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:10:47.758 11:41:17 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@162 -- # ip link set nvmf_init_br nomaster 00:10:47.758 Cannot find device "nvmf_init_br" 00:10:47.758 11:41:17 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@162 -- # true 00:10:47.758 11:41:17 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@163 -- # ip link set nvmf_init_br2 nomaster 00:10:47.758 Cannot find device "nvmf_init_br2" 00:10:47.758 11:41:17 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@163 -- # true 00:10:47.758 11:41:17 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@164 -- # ip link set nvmf_tgt_br nomaster 00:10:47.758 Cannot find device "nvmf_tgt_br" 00:10:47.758 11:41:17 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@164 -- # true 00:10:47.758 11:41:17 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@165 -- # ip link set nvmf_tgt_br2 nomaster 00:10:48.018 Cannot find device "nvmf_tgt_br2" 00:10:48.018 11:41:17 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@165 -- # true 00:10:48.018 11:41:17 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@166 -- # ip link set nvmf_init_br down 00:10:48.018 Cannot find device "nvmf_init_br" 00:10:48.018 11:41:17 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@166 -- # true 00:10:48.018 11:41:17 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@167 -- # ip link set nvmf_init_br2 down 00:10:48.018 Cannot find device "nvmf_init_br2" 00:10:48.018 11:41:17 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@167 -- # true 00:10:48.018 11:41:17 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@168 -- # ip link set nvmf_tgt_br down 00:10:48.018 Cannot find device "nvmf_tgt_br" 00:10:48.018 11:41:17 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@168 -- # true 00:10:48.018 11:41:17 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@169 -- # ip link set nvmf_tgt_br2 down 00:10:48.018 Cannot find device "nvmf_tgt_br2" 00:10:48.018 11:41:17 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@169 -- # true 00:10:48.018 11:41:17 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@170 -- # ip link delete nvmf_br type bridge 00:10:48.018 Cannot find device "nvmf_br" 00:10:48.018 11:41:17 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@170 -- # true 00:10:48.018 11:41:17 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@171 -- # ip link delete nvmf_init_if 00:10:48.018 Cannot find device "nvmf_init_if" 00:10:48.018 11:41:17 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@171 -- # true 00:10:48.018 11:41:17 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@172 -- # ip link delete nvmf_init_if2 00:10:48.018 Cannot find device "nvmf_init_if2" 00:10:48.018 11:41:17 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@172 -- # true 00:10:48.018 11:41:17 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@173 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:10:48.018 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:10:48.018 11:41:17 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@173 -- # true 00:10:48.018 
11:41:17 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@174 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:10:48.018 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:10:48.018 11:41:17 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@174 -- # true 00:10:48.018 11:41:17 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@177 -- # ip netns add nvmf_tgt_ns_spdk 00:10:48.018 11:41:17 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@180 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:10:48.018 11:41:17 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@181 -- # ip link add nvmf_init_if2 type veth peer name nvmf_init_br2 00:10:48.018 11:41:17 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@182 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:10:48.018 11:41:18 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@183 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:10:48.018 11:41:18 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:10:48.018 11:41:18 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@187 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:10:48.018 11:41:18 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@190 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:10:48.018 11:41:18 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@191 -- # ip addr add 10.0.0.2/24 dev nvmf_init_if2 00:10:48.018 11:41:18 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@192 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if 00:10:48.018 11:41:18 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@193 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2 00:10:48.018 11:41:18 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@196 -- # ip link set nvmf_init_if up 00:10:48.018 11:41:18 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@197 -- # ip link set nvmf_init_if2 up 00:10:48.018 11:41:18 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@198 -- # ip link set nvmf_init_br up 00:10:48.018 11:41:18 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@199 -- # ip link set nvmf_init_br2 up 00:10:48.018 11:41:18 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@200 -- # ip link set nvmf_tgt_br up 00:10:48.018 11:41:18 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@201 -- # ip link set nvmf_tgt_br2 up 00:10:48.018 11:41:18 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@202 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:10:48.018 11:41:18 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@203 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:10:48.018 11:41:18 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@204 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:10:48.018 11:41:18 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@207 -- # ip link add nvmf_br type bridge 00:10:48.018 11:41:18 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@208 -- # ip link set nvmf_br up 00:10:48.018 11:41:18 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@211 -- # ip link set nvmf_init_br master nvmf_br 00:10:48.018 11:41:18 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- 
nvmf/common.sh@212 -- # ip link set nvmf_init_br2 master nvmf_br 00:10:48.018 11:41:18 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@213 -- # ip link set nvmf_tgt_br master nvmf_br 00:10:48.278 11:41:18 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@214 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:10:48.278 11:41:18 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@217 -- # ipts -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:10:48.278 11:41:18 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT' 00:10:48.278 11:41:18 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@218 -- # ipts -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT 00:10:48.278 11:41:18 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT' 00:10:48.278 11:41:18 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@219 -- # ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:10:48.278 11:41:18 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@790 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT' 00:10:48.278 11:41:18 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@222 -- # ping -c 1 10.0.0.3 00:10:48.278 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:10:48.278 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.066 ms 00:10:48.278 00:10:48.278 --- 10.0.0.3 ping statistics --- 00:10:48.278 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:10:48.278 rtt min/avg/max/mdev = 0.066/0.066/0.066/0.000 ms 00:10:48.278 11:41:18 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@223 -- # ping -c 1 10.0.0.4 00:10:48.278 PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data. 00:10:48.278 64 bytes from 10.0.0.4: icmp_seq=1 ttl=64 time=0.040 ms 00:10:48.278 00:10:48.278 --- 10.0.0.4 ping statistics --- 00:10:48.279 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:10:48.279 rtt min/avg/max/mdev = 0.040/0.040/0.040/0.000 ms 00:10:48.279 11:41:18 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@224 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:10:48.279 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:10:48.279 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.022 ms 00:10:48.279 00:10:48.279 --- 10.0.0.1 ping statistics --- 00:10:48.279 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:10:48.279 rtt min/avg/max/mdev = 0.022/0.022/0.022/0.000 ms 00:10:48.279 11:41:18 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@225 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.2 00:10:48.279 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:10:48.279 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.062 ms 00:10:48.279 00:10:48.279 --- 10.0.0.2 ping statistics --- 00:10:48.279 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:10:48.279 rtt min/avg/max/mdev = 0.062/0.062/0.062/0.000 ms 00:10:48.279 11:41:18 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@227 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:10:48.279 11:41:18 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@461 -- # return 0 00:10:48.279 11:41:18 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:10:48.279 11:41:18 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:10:48.279 11:41:18 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:10:48.279 11:41:18 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:10:48.279 11:41:18 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:10:48.279 11:41:18 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:10:48.279 11:41:18 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:10:48.279 11:41:18 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@15 -- # nvmfappstart -m 0xF --wait-for-rpc 00:10:48.279 11:41:18 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:10:48.279 11:41:18 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@726 -- # xtrace_disable 00:10:48.279 11:41:18 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:10:48.279 11:41:18 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@509 -- # nvmfpid=78222 00:10:48.279 11:41:18 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@510 -- # waitforlisten 78222 00:10:48.279 11:41:18 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@508 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF --wait-for-rpc 00:10:48.279 11:41:18 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@835 -- # '[' -z 78222 ']' 00:10:48.279 11:41:18 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:10:48.279 11:41:18 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@840 -- # local max_retries=100 00:10:48.279 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:10:48.279 11:41:18 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:10:48.279 11:41:18 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@844 -- # xtrace_disable 00:10:48.279 11:41:18 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:10:48.279 [2024-11-28 11:41:18.275833] Starting SPDK v25.01-pre git sha1 35cd3e84d / DPDK 24.11.0-rc4 initialization... 
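The interface setup and ping checks above are nvmf_veth_init building bridged veth pairs between the host and the nvmf_tgt_ns_spdk namespace, then punching TCP port 4420 through iptables. Reduced to a single initiator/target pair, the topology amounts to the following sketch; the captured run creates a second pair for 10.0.0.2/10.0.0.4 the same way:

ip netns add nvmf_tgt_ns_spdk
ip link add nvmf_init_if type veth peer name nvmf_init_br    # host-side initiator link
ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br      # target-side link, moved into the namespace
ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk
ip addr add 10.0.0.1/24 dev nvmf_init_if
ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if
ip link set nvmf_init_if up
ip link set nvmf_init_br up
ip link set nvmf_tgt_br up
ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
ip netns exec nvmf_tgt_ns_spdk ip link set lo up
ip link add nvmf_br type bridge && ip link set nvmf_br up    # the bridge joins the two peer ends
ip link set nvmf_init_br master nvmf_br
ip link set nvmf_tgt_br master nvmf_br
iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT
ping -c 1 10.0.0.3                                           # host reaches the namespace over the bridge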
00:10:48.279 [2024-11-28 11:41:18.275954] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:10:48.279 [2024-11-28 11:41:18.399630] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.11.0-rc4 is used. There is no support for it in SPDK. Enabled only for validation. 00:10:48.538 [2024-11-28 11:41:18.426877] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:10:48.538 [2024-11-28 11:41:18.476669] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:10:48.538 [2024-11-28 11:41:18.476741] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:10:48.538 [2024-11-28 11:41:18.476769] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:10:48.538 [2024-11-28 11:41:18.476778] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:10:48.538 [2024-11-28 11:41:18.476785] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:10:48.538 [2024-11-28 11:41:18.477985] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:10:48.538 [2024-11-28 11:41:18.478121] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:10:48.538 [2024-11-28 11:41:18.478240] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:10:48.538 [2024-11-28 11:41:18.478240] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:10:48.538 11:41:18 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:10:48.538 11:41:18 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@868 -- # return 0 00:10:48.538 11:41:18 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:10:48.538 11:41:18 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@732 -- # xtrace_disable 00:10:48.538 11:41:18 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:10:48.538 11:41:18 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:10:48.538 11:41:18 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@18 -- # rpc_cmd bdev_set_options -p 5 -c 1 00:10:48.538 11:41:18 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:48.538 11:41:18 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:10:48.538 11:41:18 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:48.538 11:41:18 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@19 -- # rpc_cmd framework_start_init 00:10:48.538 11:41:18 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:48.538 11:41:18 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:10:48.538 [2024-11-28 11:41:18.653934] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:10:48.826 11:41:18 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:48.826 11:41:18 
nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@20 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:10:48.826 11:41:18 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:48.826 11:41:18 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:10:48.826 [2024-11-28 11:41:18.670656] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:10:48.826 11:41:18 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:48.826 11:41:18 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:10:48.826 11:41:18 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:48.826 11:41:18 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:10:48.826 Malloc0 00:10:48.826 11:41:18 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:48.826 11:41:18 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:10:48.826 11:41:18 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:48.826 11:41:18 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:10:48.826 11:41:18 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:48.826 11:41:18 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:10:48.826 11:41:18 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:48.826 11:41:18 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:10:48.826 11:41:18 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:48.826 11:41:18 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 00:10:48.826 11:41:18 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:48.826 11:41:18 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:10:48.826 [2024-11-28 11:41:18.731482] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:10:48.826 11:41:18 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:48.826 11:41:18 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@28 -- # WRITE_PID=78250 00:10:48.826 11:41:18 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@27 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x10 -i 1 --json /dev/fd/63 -q 128 -o 4096 -w write -t 1 -s 256 00:10:48.826 11:41:18 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@27 -- # gen_nvmf_target_json 00:10:48.826 11:41:18 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@30 -- # READ_PID=78252 00:10:48.826 11:41:18 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # config=() 00:10:48.826 11:41:18 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- 
nvmf/common.sh@560 -- # local subsystem config 00:10:48.826 11:41:18 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:10:48.826 11:41:18 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:10:48.826 { 00:10:48.826 "params": { 00:10:48.826 "name": "Nvme$subsystem", 00:10:48.826 "trtype": "$TEST_TRANSPORT", 00:10:48.826 "traddr": "$NVMF_FIRST_TARGET_IP", 00:10:48.826 "adrfam": "ipv4", 00:10:48.826 "trsvcid": "$NVMF_PORT", 00:10:48.826 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:10:48.826 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:10:48.826 "hdgst": ${hdgst:-false}, 00:10:48.826 "ddgst": ${ddgst:-false} 00:10:48.826 }, 00:10:48.826 "method": "bdev_nvme_attach_controller" 00:10:48.826 } 00:10:48.826 EOF 00:10:48.826 )") 00:10:48.826 11:41:18 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@29 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x20 -i 2 --json /dev/fd/63 -q 128 -o 4096 -w read -t 1 -s 256 00:10:48.826 11:41:18 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@29 -- # gen_nvmf_target_json 00:10:48.826 11:41:18 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # config=() 00:10:48.826 11:41:18 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # local subsystem config 00:10:48.826 11:41:18 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:10:48.826 11:41:18 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:10:48.826 { 00:10:48.826 "params": { 00:10:48.826 "name": "Nvme$subsystem", 00:10:48.826 "trtype": "$TEST_TRANSPORT", 00:10:48.826 "traddr": "$NVMF_FIRST_TARGET_IP", 00:10:48.826 "adrfam": "ipv4", 00:10:48.826 "trsvcid": "$NVMF_PORT", 00:10:48.826 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:10:48.826 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:10:48.826 "hdgst": ${hdgst:-false}, 00:10:48.826 "ddgst": ${ddgst:-false} 00:10:48.826 }, 00:10:48.826 "method": "bdev_nvme_attach_controller" 00:10:48.826 } 00:10:48.826 EOF 00:10:48.826 )") 00:10:48.826 11:41:18 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@32 -- # FLUSH_PID=78254 00:10:48.826 11:41:18 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@31 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x40 -i 3 --json /dev/fd/63 -q 128 -o 4096 -w flush -t 1 -s 256 00:10:48.826 11:41:18 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # cat 00:10:48.826 11:41:18 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # cat 00:10:48.826 11:41:18 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@31 -- # gen_nvmf_target_json 00:10:48.826 11:41:18 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # config=() 00:10:48.826 11:41:18 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # local subsystem config 00:10:48.826 11:41:18 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:10:48.826 11:41:18 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:10:48.826 { 00:10:48.826 "params": { 00:10:48.826 "name": "Nvme$subsystem", 00:10:48.826 "trtype": "$TEST_TRANSPORT", 00:10:48.826 "traddr": "$NVMF_FIRST_TARGET_IP", 00:10:48.826 "adrfam": "ipv4", 00:10:48.826 "trsvcid": "$NVMF_PORT", 00:10:48.826 
"subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:10:48.826 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:10:48.826 "hdgst": ${hdgst:-false}, 00:10:48.826 "ddgst": ${ddgst:-false} 00:10:48.826 }, 00:10:48.826 "method": "bdev_nvme_attach_controller" 00:10:48.826 } 00:10:48.826 EOF 00:10:48.826 )") 00:10:48.826 11:41:18 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # cat 00:10:48.827 11:41:18 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@33 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x80 -i 4 --json /dev/fd/63 -q 128 -o 4096 -w unmap -t 1 -s 256 00:10:48.827 11:41:18 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@584 -- # jq . 00:10:48.827 11:41:18 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@584 -- # jq . 00:10:48.827 11:41:18 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@34 -- # UNMAP_PID=78259 00:10:48.827 11:41:18 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@35 -- # sync 00:10:48.827 11:41:18 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@33 -- # gen_nvmf_target_json 00:10:48.827 11:41:18 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@585 -- # IFS=, 00:10:48.827 11:41:18 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # config=() 00:10:48.827 11:41:18 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:10:48.827 "params": { 00:10:48.827 "name": "Nvme1", 00:10:48.827 "trtype": "tcp", 00:10:48.827 "traddr": "10.0.0.3", 00:10:48.827 "adrfam": "ipv4", 00:10:48.827 "trsvcid": "4420", 00:10:48.827 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:10:48.827 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:10:48.827 "hdgst": false, 00:10:48.827 "ddgst": false 00:10:48.827 }, 00:10:48.827 "method": "bdev_nvme_attach_controller" 00:10:48.827 }' 00:10:48.827 11:41:18 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # local subsystem config 00:10:48.827 11:41:18 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:10:48.827 11:41:18 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:10:48.827 { 00:10:48.827 "params": { 00:10:48.827 "name": "Nvme$subsystem", 00:10:48.827 "trtype": "$TEST_TRANSPORT", 00:10:48.827 "traddr": "$NVMF_FIRST_TARGET_IP", 00:10:48.827 "adrfam": "ipv4", 00:10:48.827 "trsvcid": "$NVMF_PORT", 00:10:48.827 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:10:48.827 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:10:48.827 "hdgst": ${hdgst:-false}, 00:10:48.827 "ddgst": ${ddgst:-false} 00:10:48.827 }, 00:10:48.827 "method": "bdev_nvme_attach_controller" 00:10:48.827 } 00:10:48.827 EOF 00:10:48.827 )") 00:10:48.827 11:41:18 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@584 -- # jq . 
00:10:48.827 11:41:18 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # cat 00:10:48.827 11:41:18 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@585 -- # IFS=, 00:10:48.827 11:41:18 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:10:48.827 "params": { 00:10:48.827 "name": "Nvme1", 00:10:48.827 "trtype": "tcp", 00:10:48.827 "traddr": "10.0.0.3", 00:10:48.827 "adrfam": "ipv4", 00:10:48.827 "trsvcid": "4420", 00:10:48.827 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:10:48.827 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:10:48.827 "hdgst": false, 00:10:48.827 "ddgst": false 00:10:48.827 }, 00:10:48.827 "method": "bdev_nvme_attach_controller" 00:10:48.827 }' 00:10:48.827 11:41:18 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@585 -- # IFS=, 00:10:48.827 11:41:18 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:10:48.827 "params": { 00:10:48.827 "name": "Nvme1", 00:10:48.827 "trtype": "tcp", 00:10:48.827 "traddr": "10.0.0.3", 00:10:48.827 "adrfam": "ipv4", 00:10:48.827 "trsvcid": "4420", 00:10:48.827 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:10:48.827 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:10:48.827 "hdgst": false, 00:10:48.827 "ddgst": false 00:10:48.827 }, 00:10:48.827 "method": "bdev_nvme_attach_controller" 00:10:48.827 }' 00:10:48.827 11:41:18 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@584 -- # jq . 00:10:48.827 11:41:18 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@585 -- # IFS=, 00:10:48.827 11:41:18 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:10:48.827 "params": { 00:10:48.827 "name": "Nvme1", 00:10:48.827 "trtype": "tcp", 00:10:48.827 "traddr": "10.0.0.3", 00:10:48.827 "adrfam": "ipv4", 00:10:48.827 "trsvcid": "4420", 00:10:48.827 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:10:48.827 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:10:48.827 "hdgst": false, 00:10:48.827 "ddgst": false 00:10:48.827 }, 00:10:48.827 "method": "bdev_nvme_attach_controller" 00:10:48.827 }' 00:10:48.827 [2024-11-28 11:41:18.801515] Starting SPDK v25.01-pre git sha1 35cd3e84d / DPDK 24.11.0-rc4 initialization... 00:10:48.827 [2024-11-28 11:41:18.801614] [ DPDK EAL parameters: bdevperf -c 0x10 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk1 --proc-type=auto ] 00:10:48.827 11:41:18 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@37 -- # wait 78250 00:10:48.827 [2024-11-28 11:41:18.823574] Starting SPDK v25.01-pre git sha1 35cd3e84d / DPDK 24.11.0-rc4 initialization... 00:10:48.827 [2024-11-28 11:41:18.823677] [ DPDK EAL parameters: bdevperf -c 0x40 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk3 --proc-type=auto ] 00:10:48.827 [2024-11-28 11:41:18.833084] Starting SPDK v25.01-pre git sha1 35cd3e84d / DPDK 24.11.0-rc4 initialization... 
00:10:48.827 [2024-11-28 11:41:18.833176] [ DPDK EAL parameters: bdevperf -c 0x20 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk2 --proc-type=auto ] 00:10:48.827 [2024-11-28 11:41:18.837357] Starting SPDK v25.01-pre git sha1 35cd3e84d / DPDK 24.11.0-rc4 initialization... 00:10:48.827 [2024-11-28 11:41:18.837578] [ DPDK EAL parameters: bdevperf -c 0x80 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk4 --proc-type=auto ] 00:10:49.103 [2024-11-28 11:41:18.998607] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.11.0-rc4 is used. There is no support for it in SPDK. Enabled only for validation. 00:10:49.103 [2024-11-28 11:41:19.029765] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:49.103 [2024-11-28 11:41:19.070346] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 4 00:10:49.103 [2024-11-28 11:41:19.070980] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.11.0-rc4 is used. There is no support for it in SPDK. Enabled only for validation. 00:10:49.103 [2024-11-28 11:41:19.084471] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:10:49.103 [2024-11-28 11:41:19.102323] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:49.103 [2024-11-28 11:41:19.140736] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.11.0-rc4 is used. There is no support for it in SPDK. Enabled only for validation. 00:10:49.103 [2024-11-28 11:41:19.144410] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 6 00:10:49.103 [2024-11-28 11:41:19.158412] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:10:49.103 [2024-11-28 11:41:19.177049] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:49.103 [2024-11-28 11:41:19.214729] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.11.0-rc4 is used. There is no support for it in SPDK. Enabled only for validation. 00:10:49.103 [2024-11-28 11:41:19.217530] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 5 00:10:49.362 [2024-11-28 11:41:19.231494] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:10:49.362 Running I/O for 1 seconds... 00:10:49.362 [2024-11-28 11:41:19.245606] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:49.362 Running I/O for 1 seconds... 00:10:49.362 [2024-11-28 11:41:19.284978] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 7 00:10:49.362 [2024-11-28 11:41:19.298829] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:10:49.362 Running I/O for 1 seconds... 00:10:49.362 Running I/O for 1 seconds... 
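The trace above shows gen_nvmf_target_json assembling one bdev_nvme_attach_controller fragment per subsystem in a heredoc, joining the fragments with IFS=, and printf, validating the result with jq, and handing it to each bdevperf instance over /dev/fd/63. The following is only a condensed sketch of that pattern, not the exact common.sh helper; the variable names (TEST_TRANSPORT, NVMF_FIRST_TARGET_IP, NVMF_PORT, hdgst, ddgst) and the fallback values are taken from the generated JSON printed in the trace.

gen_target_json_sketch() {
  local subsystem config=()
  for subsystem in "${@:-1}"; do
    # one attach-controller fragment per requested subsystem, filled from the environment
    config+=("$(cat <<EOF
{
  "params": {
    "name": "Nvme$subsystem",
    "trtype": "${TEST_TRANSPORT:-tcp}",
    "traddr": "${NVMF_FIRST_TARGET_IP:-10.0.0.3}",
    "adrfam": "ipv4",
    "trsvcid": "${NVMF_PORT:-4420}",
    "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem",
    "hostnqn": "nqn.2016-06.io.spdk:host$subsystem",
    "hdgst": ${hdgst:-false},
    "ddgst": ${ddgst:-false}
  },
  "method": "bdev_nvme_attach_controller"
}
EOF
    )")
  done
  # join the fragments with commas and pretty-print/validate with jq, as the trace does
  local IFS=,
  jq . <<EOF
{ "subsystems": [ { "subsystem": "bdev", "config": [ ${config[*]} ] } ] }
EOF
}

# each bdevperf instance then reads the config through process substitution, e.g.
#   build/examples/bdevperf -m 0x10 -q 128 -o 4096 -w write -t 1 -s 256 --json <(gen_target_json_sketch 1)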
00:10:50.298 6661.00 IOPS, 26.02 MiB/s 00:10:50.298 Latency(us) 00:10:50.299 [2024-11-28T11:41:20.425Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:10:50.299 Job: Nvme1n1 (Core Mask 0x10, workload: write, depth: 128, IO size: 4096) 00:10:50.299 Nvme1n1 : 1.03 6598.83 25.78 0.00 0.00 19117.17 5481.19 34078.72 00:10:50.299 [2024-11-28T11:41:20.425Z] =================================================================================================================== 00:10:50.299 [2024-11-28T11:41:20.425Z] Total : 6598.83 25.78 0.00 0.00 19117.17 5481.19 34078.72 00:10:50.299 162920.00 IOPS, 636.41 MiB/s 00:10:50.299 Latency(us) 00:10:50.299 [2024-11-28T11:41:20.425Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:10:50.299 Job: Nvme1n1 (Core Mask 0x40, workload: flush, depth: 128, IO size: 4096) 00:10:50.299 Nvme1n1 : 1.00 162501.27 634.77 0.00 0.00 783.16 484.07 2532.07 00:10:50.299 [2024-11-28T11:41:20.425Z] =================================================================================================================== 00:10:50.299 [2024-11-28T11:41:20.425Z] Total : 162501.27 634.77 0.00 0.00 783.16 484.07 2532.07 00:10:50.299 7462.00 IOPS, 29.15 MiB/s 00:10:50.299 Latency(us) 00:10:50.299 [2024-11-28T11:41:20.425Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:10:50.299 Job: Nvme1n1 (Core Mask 0x20, workload: read, depth: 128, IO size: 4096) 00:10:50.299 Nvme1n1 : 1.01 7500.89 29.30 0.00 0.00 16956.45 8221.79 25618.62 00:10:50.299 [2024-11-28T11:41:20.425Z] =================================================================================================================== 00:10:50.299 [2024-11-28T11:41:20.425Z] Total : 7500.89 29.30 0.00 0.00 16956.45 8221.79 25618.62 00:10:50.299 11:41:20 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@38 -- # wait 78252 00:10:50.558 6600.00 IOPS, 25.78 MiB/s 00:10:50.558 Latency(us) 00:10:50.558 [2024-11-28T11:41:20.684Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:10:50.558 Job: Nvme1n1 (Core Mask 0x80, workload: unmap, depth: 128, IO size: 4096) 00:10:50.558 Nvme1n1 : 1.01 6731.07 26.29 0.00 0.00 18954.72 5362.04 46709.29 00:10:50.558 [2024-11-28T11:41:20.684Z] =================================================================================================================== 00:10:50.558 [2024-11-28T11:41:20.684Z] Total : 6731.07 26.29 0.00 0.00 18954.72 5362.04 46709.29 00:10:50.558 11:41:20 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@39 -- # wait 78254 00:10:50.558 11:41:20 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@40 -- # wait 78259 00:10:50.558 11:41:20 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@42 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:10:50.558 11:41:20 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:50.558 11:41:20 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:10:50.558 11:41:20 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:50.558 11:41:20 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@44 -- # trap - SIGINT SIGTERM EXIT 00:10:50.558 11:41:20 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@46 -- # nvmftestfini 00:10:50.558 11:41:20 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@516 -- # 
nvmfcleanup 00:10:50.558 11:41:20 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@121 -- # sync 00:10:50.558 11:41:20 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:10:50.558 11:41:20 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@124 -- # set +e 00:10:50.558 11:41:20 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@125 -- # for i in {1..20} 00:10:50.558 11:41:20 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:10:50.558 rmmod nvme_tcp 00:10:50.558 rmmod nvme_fabrics 00:10:50.558 rmmod nvme_keyring 00:10:50.817 11:41:20 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:10:50.817 11:41:20 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@128 -- # set -e 00:10:50.817 11:41:20 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@129 -- # return 0 00:10:50.817 11:41:20 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@517 -- # '[' -n 78222 ']' 00:10:50.817 11:41:20 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@518 -- # killprocess 78222 00:10:50.817 11:41:20 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@954 -- # '[' -z 78222 ']' 00:10:50.817 11:41:20 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@958 -- # kill -0 78222 00:10:50.817 11:41:20 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@959 -- # uname 00:10:50.817 11:41:20 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:10:50.817 11:41:20 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 78222 00:10:50.817 11:41:20 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:10:50.817 killing process with pid 78222 00:10:50.817 11:41:20 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:10:50.817 11:41:20 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@972 -- # echo 'killing process with pid 78222' 00:10:50.817 11:41:20 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@973 -- # kill 78222 00:10:50.817 11:41:20 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@978 -- # wait 78222 00:10:50.817 11:41:20 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:10:50.818 11:41:20 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:10:50.818 11:41:20 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:10:50.818 11:41:20 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@297 -- # iptr 00:10:50.818 11:41:20 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@791 -- # iptables-save 00:10:50.818 11:41:20 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:10:50.818 11:41:20 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@791 -- # iptables-restore 00:10:50.818 11:41:20 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@298 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:10:50.818 11:41:20 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@299 -- # nvmf_veth_fini 00:10:50.818 11:41:20 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- 
nvmf/common.sh@233 -- # ip link set nvmf_init_br nomaster 00:10:50.818 11:41:20 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@234 -- # ip link set nvmf_init_br2 nomaster 00:10:51.078 11:41:20 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@235 -- # ip link set nvmf_tgt_br nomaster 00:10:51.078 11:41:20 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@236 -- # ip link set nvmf_tgt_br2 nomaster 00:10:51.078 11:41:20 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@237 -- # ip link set nvmf_init_br down 00:10:51.078 11:41:20 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@238 -- # ip link set nvmf_init_br2 down 00:10:51.078 11:41:20 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@239 -- # ip link set nvmf_tgt_br down 00:10:51.078 11:41:20 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@240 -- # ip link set nvmf_tgt_br2 down 00:10:51.078 11:41:21 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@241 -- # ip link delete nvmf_br type bridge 00:10:51.078 11:41:21 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@242 -- # ip link delete nvmf_init_if 00:10:51.078 11:41:21 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@243 -- # ip link delete nvmf_init_if2 00:10:51.078 11:41:21 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@244 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:10:51.078 11:41:21 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@245 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:10:51.078 11:41:21 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@246 -- # remove_spdk_ns 00:10:51.078 11:41:21 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:10:51.078 11:41:21 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:10:51.078 11:41:21 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:10:51.078 11:41:21 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@300 -- # return 0 00:10:51.078 00:10:51.078 real 0m3.581s 00:10:51.078 user 0m14.329s 00:10:51.078 sys 0m2.235s 00:10:51.078 11:41:21 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1130 -- # xtrace_disable 00:10:51.078 ************************************ 00:10:51.078 END TEST nvmf_bdev_io_wait 00:10:51.078 11:41:21 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:10:51.078 ************************************ 00:10:51.338 11:41:21 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@30 -- # run_test nvmf_queue_depth /home/vagrant/spdk_repo/spdk/test/nvmf/target/queue_depth.sh --transport=tcp 00:10:51.338 11:41:21 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:10:51.338 11:41:21 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1111 -- # xtrace_disable 00:10:51.338 11:41:21 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:10:51.338 ************************************ 00:10:51.338 START TEST nvmf_queue_depth 00:10:51.338 ************************************ 00:10:51.338 11:41:21 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/queue_depth.sh --transport=tcp 00:10:51.338 * Looking for test storage... 
00:10:51.338 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:10:51.338 11:41:21 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:10:51.338 11:41:21 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1693 -- # lcov --version 00:10:51.338 11:41:21 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:10:51.338 11:41:21 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:10:51.338 11:41:21 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:10:51.338 11:41:21 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@333 -- # local ver1 ver1_l 00:10:51.338 11:41:21 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@334 -- # local ver2 ver2_l 00:10:51.338 11:41:21 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@336 -- # IFS=.-: 00:10:51.338 11:41:21 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@336 -- # read -ra ver1 00:10:51.338 11:41:21 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@337 -- # IFS=.-: 00:10:51.338 11:41:21 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@337 -- # read -ra ver2 00:10:51.338 11:41:21 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@338 -- # local 'op=<' 00:10:51.338 11:41:21 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@340 -- # ver1_l=2 00:10:51.338 11:41:21 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@341 -- # ver2_l=1 00:10:51.338 11:41:21 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:10:51.338 11:41:21 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@344 -- # case "$op" in 00:10:51.338 11:41:21 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@345 -- # : 1 00:10:51.338 11:41:21 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@364 -- # (( v = 0 )) 00:10:51.338 11:41:21 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:10:51.338 11:41:21 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@365 -- # decimal 1 00:10:51.338 11:41:21 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@353 -- # local d=1 00:10:51.338 11:41:21 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:10:51.338 11:41:21 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@355 -- # echo 1 00:10:51.338 11:41:21 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@365 -- # ver1[v]=1 00:10:51.338 11:41:21 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@366 -- # decimal 2 00:10:51.338 11:41:21 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@353 -- # local d=2 00:10:51.338 11:41:21 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:10:51.338 11:41:21 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@355 -- # echo 2 00:10:51.338 11:41:21 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@366 -- # ver2[v]=2 00:10:51.338 11:41:21 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:10:51.338 11:41:21 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:10:51.338 11:41:21 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@368 -- # return 0 00:10:51.338 11:41:21 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:10:51.338 11:41:21 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:10:51.338 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:51.338 --rc genhtml_branch_coverage=1 00:10:51.338 --rc genhtml_function_coverage=1 00:10:51.338 --rc genhtml_legend=1 00:10:51.338 --rc geninfo_all_blocks=1 00:10:51.338 --rc geninfo_unexecuted_blocks=1 00:10:51.338 00:10:51.338 ' 00:10:51.338 11:41:21 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:10:51.338 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:51.338 --rc genhtml_branch_coverage=1 00:10:51.338 --rc genhtml_function_coverage=1 00:10:51.338 --rc genhtml_legend=1 00:10:51.338 --rc geninfo_all_blocks=1 00:10:51.338 --rc geninfo_unexecuted_blocks=1 00:10:51.338 00:10:51.338 ' 00:10:51.338 11:41:21 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:10:51.339 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:51.339 --rc genhtml_branch_coverage=1 00:10:51.339 --rc genhtml_function_coverage=1 00:10:51.339 --rc genhtml_legend=1 00:10:51.339 --rc geninfo_all_blocks=1 00:10:51.339 --rc geninfo_unexecuted_blocks=1 00:10:51.339 00:10:51.339 ' 00:10:51.339 11:41:21 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:10:51.339 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:51.339 --rc genhtml_branch_coverage=1 00:10:51.339 --rc genhtml_function_coverage=1 00:10:51.339 --rc genhtml_legend=1 00:10:51.339 --rc geninfo_all_blocks=1 00:10:51.339 --rc geninfo_unexecuted_blocks=1 00:10:51.339 00:10:51.339 ' 00:10:51.339 11:41:21 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@12 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:10:51.339 11:41:21 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@7 
-- # uname -s 00:10:51.339 11:41:21 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:10:51.339 11:41:21 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:10:51.339 11:41:21 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:10:51.339 11:41:21 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:10:51.339 11:41:21 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:10:51.339 11:41:21 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:10:51.339 11:41:21 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:10:51.339 11:41:21 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:10:51.339 11:41:21 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:10:51.339 11:41:21 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:10:51.339 11:41:21 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:f820f793-c892-4aa4-a8a4-5ed3fda41d6c 00:10:51.339 11:41:21 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@18 -- # NVME_HOSTID=f820f793-c892-4aa4-a8a4-5ed3fda41d6c 00:10:51.339 11:41:21 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:10:51.339 11:41:21 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:10:51.339 11:41:21 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:10:51.339 11:41:21 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:10:51.339 11:41:21 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:10:51.339 11:41:21 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@15 -- # shopt -s extglob 00:10:51.339 11:41:21 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:10:51.339 11:41:21 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:10:51.339 11:41:21 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:10:51.339 11:41:21 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:51.339 11:41:21 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:51.339 11:41:21 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:51.339 11:41:21 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- paths/export.sh@5 -- # export PATH 00:10:51.339 11:41:21 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:51.339 11:41:21 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@51 -- # : 0 00:10:51.339 11:41:21 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:10:51.339 11:41:21 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:10:51.339 11:41:21 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:10:51.339 11:41:21 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:10:51.339 11:41:21 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:10:51.339 11:41:21 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:10:51.339 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:10:51.339 11:41:21 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:10:51.339 11:41:21 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:10:51.339 11:41:21 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@55 -- # have_pci_nics=0 00:10:51.339 11:41:21 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@14 -- # MALLOC_BDEV_SIZE=64 00:10:51.339 11:41:21 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@15 -- # MALLOC_BLOCK_SIZE=512 00:10:51.339 
11:41:21 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@17 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:10:51.339 11:41:21 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@19 -- # nvmftestinit 00:10:51.339 11:41:21 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:10:51.339 11:41:21 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:10:51.339 11:41:21 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@476 -- # prepare_net_devs 00:10:51.339 11:41:21 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@438 -- # local -g is_hw=no 00:10:51.339 11:41:21 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@440 -- # remove_spdk_ns 00:10:51.339 11:41:21 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:10:51.339 11:41:21 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:10:51.339 11:41:21 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:10:51.339 11:41:21 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@442 -- # [[ virt != virt ]] 00:10:51.339 11:41:21 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@444 -- # [[ no == yes ]] 00:10:51.339 11:41:21 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@451 -- # [[ virt == phy ]] 00:10:51.339 11:41:21 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@454 -- # [[ virt == phy-fallback ]] 00:10:51.339 11:41:21 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@459 -- # [[ tcp == tcp ]] 00:10:51.339 11:41:21 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@460 -- # nvmf_veth_init 00:10:51.339 11:41:21 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@145 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:10:51.339 11:41:21 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@146 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.2 00:10:51.339 11:41:21 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@147 -- # NVMF_FIRST_TARGET_IP=10.0.0.3 00:10:51.339 11:41:21 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@148 -- # NVMF_SECOND_TARGET_IP=10.0.0.4 00:10:51.339 11:41:21 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@149 -- # NVMF_INITIATOR_IP=10.0.0.1 00:10:51.339 11:41:21 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@150 -- # NVMF_BRIDGE=nvmf_br 00:10:51.339 11:41:21 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@151 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:10:51.339 11:41:21 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@152 -- # NVMF_INITIATOR_INTERFACE2=nvmf_init_if2 00:10:51.339 11:41:21 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@153 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:10:51.339 11:41:21 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@154 -- # NVMF_INITIATOR_BRIDGE2=nvmf_init_br2 00:10:51.339 11:41:21 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@155 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:10:51.339 11:41:21 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@156 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:10:51.339 11:41:21 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@157 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:10:51.339 11:41:21 
nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@158 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:10:51.340 11:41:21 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@159 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:10:51.340 11:41:21 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@160 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:10:51.340 11:41:21 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@162 -- # ip link set nvmf_init_br nomaster 00:10:51.598 Cannot find device "nvmf_init_br" 00:10:51.598 11:41:21 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@162 -- # true 00:10:51.598 11:41:21 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@163 -- # ip link set nvmf_init_br2 nomaster 00:10:51.598 Cannot find device "nvmf_init_br2" 00:10:51.598 11:41:21 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@163 -- # true 00:10:51.598 11:41:21 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@164 -- # ip link set nvmf_tgt_br nomaster 00:10:51.598 Cannot find device "nvmf_tgt_br" 00:10:51.598 11:41:21 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@164 -- # true 00:10:51.598 11:41:21 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@165 -- # ip link set nvmf_tgt_br2 nomaster 00:10:51.598 Cannot find device "nvmf_tgt_br2" 00:10:51.598 11:41:21 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@165 -- # true 00:10:51.598 11:41:21 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@166 -- # ip link set nvmf_init_br down 00:10:51.598 Cannot find device "nvmf_init_br" 00:10:51.598 11:41:21 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@166 -- # true 00:10:51.598 11:41:21 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@167 -- # ip link set nvmf_init_br2 down 00:10:51.598 Cannot find device "nvmf_init_br2" 00:10:51.598 11:41:21 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@167 -- # true 00:10:51.598 11:41:21 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@168 -- # ip link set nvmf_tgt_br down 00:10:51.598 Cannot find device "nvmf_tgt_br" 00:10:51.598 11:41:21 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@168 -- # true 00:10:51.598 11:41:21 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@169 -- # ip link set nvmf_tgt_br2 down 00:10:51.598 Cannot find device "nvmf_tgt_br2" 00:10:51.598 11:41:21 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@169 -- # true 00:10:51.598 11:41:21 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@170 -- # ip link delete nvmf_br type bridge 00:10:51.598 Cannot find device "nvmf_br" 00:10:51.598 11:41:21 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@170 -- # true 00:10:51.598 11:41:21 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@171 -- # ip link delete nvmf_init_if 00:10:51.599 Cannot find device "nvmf_init_if" 00:10:51.599 11:41:21 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@171 -- # true 00:10:51.599 11:41:21 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@172 -- # ip link delete nvmf_init_if2 00:10:51.599 Cannot find device "nvmf_init_if2" 00:10:51.599 11:41:21 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@172 -- # true 00:10:51.599 11:41:21 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@173 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:10:51.599 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:10:51.599 11:41:21 
nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@173 -- # true 00:10:51.599 11:41:21 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@174 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:10:51.599 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:10:51.599 11:41:21 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@174 -- # true 00:10:51.599 11:41:21 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@177 -- # ip netns add nvmf_tgt_ns_spdk 00:10:51.599 11:41:21 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@180 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:10:51.599 11:41:21 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@181 -- # ip link add nvmf_init_if2 type veth peer name nvmf_init_br2 00:10:51.599 11:41:21 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@182 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:10:51.599 11:41:21 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@183 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:10:51.599 11:41:21 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:10:51.599 11:41:21 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@187 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:10:51.599 11:41:21 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@190 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:10:51.599 11:41:21 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@191 -- # ip addr add 10.0.0.2/24 dev nvmf_init_if2 00:10:51.599 11:41:21 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@192 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if 00:10:51.599 11:41:21 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@193 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2 00:10:51.599 11:41:21 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@196 -- # ip link set nvmf_init_if up 00:10:51.599 11:41:21 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@197 -- # ip link set nvmf_init_if2 up 00:10:51.599 11:41:21 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@198 -- # ip link set nvmf_init_br up 00:10:51.859 11:41:21 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@199 -- # ip link set nvmf_init_br2 up 00:10:51.859 11:41:21 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@200 -- # ip link set nvmf_tgt_br up 00:10:51.859 11:41:21 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@201 -- # ip link set nvmf_tgt_br2 up 00:10:51.859 11:41:21 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@202 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:10:51.859 11:41:21 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@203 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:10:51.859 11:41:21 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@204 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:10:51.859 11:41:21 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@207 -- # ip link add nvmf_br type bridge 00:10:51.859 11:41:21 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@208 -- # ip link set nvmf_br up 00:10:51.859 11:41:21 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@211 -- # ip link set nvmf_init_br master nvmf_br 00:10:51.859 
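The nvmftestinit/nvmf_veth_init lines above build the bridged veth test topology; the remaining enslave, firewall and ping steps are traced just below. A condensed sketch of one initiator/target leg, using the interface names and addresses from the trace (the real helper sets up two initiator and two target interfaces):

# host side: one veth pair for the initiator, one for the target; the *_br peers join the bridge
ip netns add nvmf_tgt_ns_spdk
ip link add nvmf_init_if type veth peer name nvmf_init_br
ip link add nvmf_tgt_if  type veth peer name nvmf_tgt_br
ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk

# addresses: initiator 10.0.0.1/24 on the host, target 10.0.0.3/24 inside the namespace
ip addr add 10.0.0.1/24 dev nvmf_init_if
ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if

ip link set nvmf_init_if up
ip link set nvmf_init_br up
ip link set nvmf_tgt_br up
ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up

ip link add nvmf_br type bridge
ip link set nvmf_br up
ip link set nvmf_init_br master nvmf_br
ip link set nvmf_tgt_br master nvmf_br

# then, as traced below: iptables ACCEPT for tcp/4420 on the init interfaces, FORWARD inside
# the bridge, and ping checks of 10.0.0.3/10.0.0.4 before the target starts listening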
11:41:21 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@212 -- # ip link set nvmf_init_br2 master nvmf_br 00:10:51.859 11:41:21 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@213 -- # ip link set nvmf_tgt_br master nvmf_br 00:10:51.859 11:41:21 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@214 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:10:51.859 11:41:21 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@217 -- # ipts -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:10:51.859 11:41:21 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT' 00:10:51.859 11:41:21 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@218 -- # ipts -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT 00:10:51.859 11:41:21 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT' 00:10:51.859 11:41:21 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@219 -- # ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:10:51.859 11:41:21 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@790 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT' 00:10:51.859 11:41:21 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@222 -- # ping -c 1 10.0.0.3 00:10:51.859 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:10:51.859 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.100 ms 00:10:51.859 00:10:51.859 --- 10.0.0.3 ping statistics --- 00:10:51.859 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:10:51.859 rtt min/avg/max/mdev = 0.100/0.100/0.100/0.000 ms 00:10:51.859 11:41:21 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@223 -- # ping -c 1 10.0.0.4 00:10:51.859 PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data. 00:10:51.859 64 bytes from 10.0.0.4: icmp_seq=1 ttl=64 time=0.057 ms 00:10:51.859 00:10:51.859 --- 10.0.0.4 ping statistics --- 00:10:51.859 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:10:51.859 rtt min/avg/max/mdev = 0.057/0.057/0.057/0.000 ms 00:10:51.859 11:41:21 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@224 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:10:51.859 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:10:51.859 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.030 ms 00:10:51.859 00:10:51.859 --- 10.0.0.1 ping statistics --- 00:10:51.859 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:10:51.859 rtt min/avg/max/mdev = 0.030/0.030/0.030/0.000 ms 00:10:51.859 11:41:21 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@225 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.2 00:10:51.859 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:10:51.859 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.050 ms 00:10:51.859 00:10:51.859 --- 10.0.0.2 ping statistics --- 00:10:51.859 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:10:51.859 rtt min/avg/max/mdev = 0.050/0.050/0.050/0.000 ms 00:10:51.859 11:41:21 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@227 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:10:51.859 11:41:21 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@461 -- # return 0 00:10:51.859 11:41:21 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:10:51.859 11:41:21 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:10:51.859 11:41:21 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:10:51.859 11:41:21 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:10:51.859 11:41:21 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:10:51.859 11:41:21 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:10:51.859 11:41:21 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:10:51.860 11:41:21 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@21 -- # nvmfappstart -m 0x2 00:10:51.860 11:41:21 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:10:51.860 11:41:21 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@726 -- # xtrace_disable 00:10:51.860 11:41:21 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:10:51.860 11:41:21 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@509 -- # nvmfpid=78516 00:10:51.860 11:41:21 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@510 -- # waitforlisten 78516 00:10:51.860 11:41:21 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@508 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:10:51.860 11:41:21 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@835 -- # '[' -z 78516 ']' 00:10:51.860 11:41:21 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:10:51.860 11:41:21 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@840 -- # local max_retries=100 00:10:51.860 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:10:51.860 11:41:21 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:10:51.860 11:41:21 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@844 -- # xtrace_disable 00:10:51.860 11:41:21 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:10:51.860 [2024-11-28 11:41:21.943220] Starting SPDK v25.01-pre git sha1 35cd3e84d / DPDK 24.11.0-rc4 initialization... 
00:10:51.860 [2024-11-28 11:41:21.943337] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:10:52.120 [2024-11-28 11:41:22.076633] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.11.0-rc4 is used. There is no support for it in SPDK. Enabled only for validation. 00:10:52.120 [2024-11-28 11:41:22.103102] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:52.120 [2024-11-28 11:41:22.145266] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:10:52.120 [2024-11-28 11:41:22.145344] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:10:52.120 [2024-11-28 11:41:22.145371] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:10:52.120 [2024-11-28 11:41:22.145379] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:10:52.120 [2024-11-28 11:41:22.145386] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:10:52.120 [2024-11-28 11:41:22.145803] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:10:52.120 [2024-11-28 11:41:22.202348] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:10:52.379 11:41:22 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:10:52.379 11:41:22 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@868 -- # return 0 00:10:52.379 11:41:22 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:10:52.379 11:41:22 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@732 -- # xtrace_disable 00:10:52.379 11:41:22 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:10:52.379 11:41:22 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:10:52.379 11:41:22 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@23 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:10:52.379 11:41:22 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:52.379 11:41:22 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:10:52.379 [2024-11-28 11:41:22.320401] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:10:52.379 11:41:22 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:52.379 11:41:22 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@24 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:10:52.379 11:41:22 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:52.379 11:41:22 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:10:52.379 Malloc0 00:10:52.379 11:41:22 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:52.379 11:41:22 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@25 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:10:52.379 11:41:22 
nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:52.379 11:41:22 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:10:52.379 11:41:22 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:52.379 11:41:22 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@26 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:10:52.379 11:41:22 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:52.380 11:41:22 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:10:52.380 11:41:22 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:52.380 11:41:22 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 00:10:52.380 11:41:22 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:52.380 11:41:22 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:10:52.380 [2024-11-28 11:41:22.371778] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:10:52.380 11:41:22 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:52.380 11:41:22 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@30 -- # bdevperf_pid=78540 00:10:52.380 11:41:22 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@29 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -q 1024 -o 4096 -w verify -t 10 00:10:52.380 11:41:22 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@32 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:10:52.380 11:41:22 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@33 -- # waitforlisten 78540 /var/tmp/bdevperf.sock 00:10:52.380 11:41:22 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@835 -- # '[' -z 78540 ']' 00:10:52.380 11:41:22 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:10:52.380 11:41:22 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@840 -- # local max_retries=100 00:10:52.380 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:10:52.380 11:41:22 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:10:52.380 11:41:22 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@844 -- # xtrace_disable 00:10:52.380 11:41:22 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:10:52.380 [2024-11-28 11:41:22.430208] Starting SPDK v25.01-pre git sha1 35cd3e84d / DPDK 24.11.0-rc4 initialization... 
00:10:52.380 [2024-11-28 11:41:22.430314] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid78540 ] 00:10:52.640 [2024-11-28 11:41:22.552650] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.11.0-rc4 is used. There is no support for it in SPDK. Enabled only for validation. 00:10:52.640 [2024-11-28 11:41:22.584605] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:52.640 [2024-11-28 11:41:22.640125] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:10:52.640 [2024-11-28 11:41:22.700246] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:10:52.900 11:41:22 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:10:52.901 11:41:22 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@868 -- # return 0 00:10:52.901 11:41:22 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@34 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:10:52.901 11:41:22 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:52.901 11:41:22 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:10:52.901 NVMe0n1 00:10:52.901 11:41:22 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:52.901 11:41:22 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@35 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:10:52.901 Running I/O for 10 seconds... 
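The queue_depth flow traced above boils down to a handful of RPCs: create the TCP transport, a 64 MiB / 512 B Malloc0 bdev, subsystem cnode1 with Malloc0 as its namespace and a listener on 10.0.0.3:4420, then start bdevperf with a 1024-deep verify workload against /var/tmp/bdevperf.sock, attach the controller over that socket, and kick the run off with bdevperf.py. A condensed, non-verbatim sketch; it assumes rpc_cmd is the usual wrapper around scripts/rpc.py and uses repo-relative paths rather than the absolute ones in the trace:

RPC=scripts/rpc.py
BPERF_SOCK=/var/tmp/bdevperf.sock

# target side (the nvmf_tgt started in the namespace): transport, backing bdev, subsystem, namespace, listener
$RPC nvmf_create_transport -t tcp -o -u 8192
$RPC bdev_malloc_create 64 512 -b Malloc0
$RPC nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
$RPC nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
$RPC nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420

# initiator side: bdevperf waits on its own RPC socket, the NVMe-oF controller is attached
# through it, and perform_tests starts the queued verify workload (-q 1024, 4 KiB I/O, 10 s)
build/examples/bdevperf -z -r "$BPERF_SOCK" -q 1024 -o 4096 -w verify -t 10 &
$RPC -s "$BPERF_SOCK" bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.3 -s 4420 \
    -f ipv4 -n nqn.2016-06.io.spdk:cnode1
examples/bdev/bdevperf/bdevperf.py -s "$BPERF_SOCK" perform_tests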
00:10:55.231 6196.00 IOPS, 24.20 MiB/s [2024-11-28T11:41:26.293Z] 6763.50 IOPS, 26.42 MiB/s [2024-11-28T11:41:27.230Z] 6955.00 IOPS, 27.17 MiB/s [2024-11-28T11:41:28.168Z] 7008.25 IOPS, 27.38 MiB/s [2024-11-28T11:41:29.104Z] 7096.20 IOPS, 27.72 MiB/s [2024-11-28T11:41:30.041Z] 7170.33 IOPS, 28.01 MiB/s [2024-11-28T11:41:31.419Z] 7234.00 IOPS, 28.26 MiB/s [2024-11-28T11:41:32.354Z] 7292.25 IOPS, 28.49 MiB/s [2024-11-28T11:41:33.289Z] 7310.11 IOPS, 28.56 MiB/s [2024-11-28T11:41:33.289Z] 7390.10 IOPS, 28.87 MiB/s 00:11:03.163 Latency(us) 00:11:03.163 [2024-11-28T11:41:33.289Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:11:03.163 Job: NVMe0n1 (Core Mask 0x1, workload: verify, depth: 1024, IO size: 4096) 00:11:03.163 Verification LBA range: start 0x0 length 0x4000 00:11:03.163 NVMe0n1 : 10.09 7423.91 29.00 0.00 0.00 137266.98 28597.53 96278.34 00:11:03.163 [2024-11-28T11:41:33.289Z] =================================================================================================================== 00:11:03.163 [2024-11-28T11:41:33.289Z] Total : 7423.91 29.00 0.00 0.00 137266.98 28597.53 96278.34 00:11:03.163 { 00:11:03.163 "results": [ 00:11:03.163 { 00:11:03.163 "job": "NVMe0n1", 00:11:03.163 "core_mask": "0x1", 00:11:03.163 "workload": "verify", 00:11:03.163 "status": "finished", 00:11:03.163 "verify_range": { 00:11:03.163 "start": 0, 00:11:03.163 "length": 16384 00:11:03.163 }, 00:11:03.163 "queue_depth": 1024, 00:11:03.163 "io_size": 4096, 00:11:03.163 "runtime": 10.088215, 00:11:03.163 "iops": 7423.909978127945, 00:11:03.163 "mibps": 28.999648352062284, 00:11:03.163 "io_failed": 0, 00:11:03.163 "io_timeout": 0, 00:11:03.163 "avg_latency_us": 137266.98459349817, 00:11:03.163 "min_latency_us": 28597.52727272727, 00:11:03.163 "max_latency_us": 96278.34181818181 00:11:03.163 } 00:11:03.164 ], 00:11:03.164 "core_count": 1 00:11:03.164 } 00:11:03.164 11:41:33 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@39 -- # killprocess 78540 00:11:03.164 11:41:33 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@954 -- # '[' -z 78540 ']' 00:11:03.164 11:41:33 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@958 -- # kill -0 78540 00:11:03.164 11:41:33 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@959 -- # uname 00:11:03.164 11:41:33 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:11:03.164 11:41:33 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 78540 00:11:03.164 11:41:33 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:11:03.164 11:41:33 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:11:03.164 11:41:33 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@972 -- # echo 'killing process with pid 78540' 00:11:03.164 killing process with pid 78540 00:11:03.164 Received shutdown signal, test time was about 10.000000 seconds 00:11:03.164 00:11:03.164 Latency(us) 00:11:03.164 [2024-11-28T11:41:33.290Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:11:03.164 [2024-11-28T11:41:33.290Z] =================================================================================================================== 00:11:03.164 [2024-11-28T11:41:33.290Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:11:03.164 11:41:33 
nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@973 -- # kill 78540 00:11:03.164 11:41:33 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@978 -- # wait 78540 00:11:03.423 11:41:33 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@41 -- # trap - SIGINT SIGTERM EXIT 00:11:03.423 11:41:33 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@43 -- # nvmftestfini 00:11:03.423 11:41:33 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@516 -- # nvmfcleanup 00:11:03.423 11:41:33 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@121 -- # sync 00:11:03.423 11:41:33 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:11:03.423 11:41:33 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@124 -- # set +e 00:11:03.423 11:41:33 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@125 -- # for i in {1..20} 00:11:03.423 11:41:33 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:11:03.423 rmmod nvme_tcp 00:11:03.423 rmmod nvme_fabrics 00:11:03.423 rmmod nvme_keyring 00:11:03.423 11:41:33 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:11:03.423 11:41:33 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@128 -- # set -e 00:11:03.423 11:41:33 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@129 -- # return 0 00:11:03.423 11:41:33 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@517 -- # '[' -n 78516 ']' 00:11:03.423 11:41:33 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@518 -- # killprocess 78516 00:11:03.423 11:41:33 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@954 -- # '[' -z 78516 ']' 00:11:03.423 11:41:33 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@958 -- # kill -0 78516 00:11:03.423 11:41:33 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@959 -- # uname 00:11:03.423 11:41:33 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:11:03.423 11:41:33 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 78516 00:11:03.423 11:41:33 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:11:03.423 killing process with pid 78516 00:11:03.423 11:41:33 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:11:03.423 11:41:33 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@972 -- # echo 'killing process with pid 78516' 00:11:03.424 11:41:33 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@973 -- # kill 78516 00:11:03.424 11:41:33 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@978 -- # wait 78516 00:11:03.683 11:41:33 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:11:03.683 11:41:33 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:11:03.683 11:41:33 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:11:03.683 11:41:33 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@297 -- # iptr 00:11:03.683 11:41:33 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@791 -- # iptables-save 00:11:03.683 11:41:33 
nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:11:03.683 11:41:33 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@791 -- # iptables-restore 00:11:03.683 11:41:33 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@298 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:11:03.683 11:41:33 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@299 -- # nvmf_veth_fini 00:11:03.683 11:41:33 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@233 -- # ip link set nvmf_init_br nomaster 00:11:03.683 11:41:33 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@234 -- # ip link set nvmf_init_br2 nomaster 00:11:03.683 11:41:33 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@235 -- # ip link set nvmf_tgt_br nomaster 00:11:03.683 11:41:33 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@236 -- # ip link set nvmf_tgt_br2 nomaster 00:11:03.683 11:41:33 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@237 -- # ip link set nvmf_init_br down 00:11:03.683 11:41:33 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@238 -- # ip link set nvmf_init_br2 down 00:11:03.683 11:41:33 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@239 -- # ip link set nvmf_tgt_br down 00:11:03.683 11:41:33 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@240 -- # ip link set nvmf_tgt_br2 down 00:11:03.683 11:41:33 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@241 -- # ip link delete nvmf_br type bridge 00:11:03.943 11:41:33 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@242 -- # ip link delete nvmf_init_if 00:11:03.943 11:41:33 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@243 -- # ip link delete nvmf_init_if2 00:11:03.943 11:41:33 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@244 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:11:03.943 11:41:33 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@245 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:11:03.943 11:41:33 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@246 -- # remove_spdk_ns 00:11:03.943 11:41:33 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:11:03.943 11:41:33 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:11:03.943 11:41:33 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:11:03.943 11:41:33 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@300 -- # return 0 00:11:03.943 00:11:03.943 real 0m12.743s 00:11:03.943 user 0m21.489s 00:11:03.943 sys 0m2.369s 00:11:03.943 11:41:33 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1130 -- # xtrace_disable 00:11:03.943 11:41:33 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:11:03.943 ************************************ 00:11:03.943 END TEST nvmf_queue_depth 00:11:03.943 ************************************ 00:11:03.943 11:41:34 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@31 -- # run_test nvmf_target_multipath /home/vagrant/spdk_repo/spdk/test/nvmf/target/multipath.sh --transport=tcp 00:11:03.943 11:41:34 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:11:03.943 11:41:34 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1111 -- # 
xtrace_disable 00:11:03.943 11:41:34 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:11:03.943 ************************************ 00:11:03.943 START TEST nvmf_target_multipath 00:11:03.943 ************************************ 00:11:03.943 11:41:34 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/multipath.sh --transport=tcp 00:11:04.203 * Looking for test storage... 00:11:04.203 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:11:04.203 11:41:34 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:11:04.203 11:41:34 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:11:04.203 11:41:34 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1693 -- # lcov --version 00:11:04.203 11:41:34 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:11:04.203 11:41:34 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:11:04.203 11:41:34 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@333 -- # local ver1 ver1_l 00:11:04.203 11:41:34 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@334 -- # local ver2 ver2_l 00:11:04.203 11:41:34 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@336 -- # IFS=.-: 00:11:04.203 11:41:34 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@336 -- # read -ra ver1 00:11:04.203 11:41:34 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@337 -- # IFS=.-: 00:11:04.203 11:41:34 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@337 -- # read -ra ver2 00:11:04.203 11:41:34 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@338 -- # local 'op=<' 00:11:04.203 11:41:34 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@340 -- # ver1_l=2 00:11:04.203 11:41:34 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@341 -- # ver2_l=1 00:11:04.203 11:41:34 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:11:04.203 11:41:34 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@344 -- # case "$op" in 00:11:04.203 11:41:34 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@345 -- # : 1 00:11:04.203 11:41:34 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@364 -- # (( v = 0 )) 00:11:04.203 11:41:34 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:11:04.203 11:41:34 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@365 -- # decimal 1 00:11:04.203 11:41:34 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@353 -- # local d=1 00:11:04.203 11:41:34 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:11:04.203 11:41:34 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@355 -- # echo 1 00:11:04.203 11:41:34 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@365 -- # ver1[v]=1 00:11:04.203 11:41:34 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@366 -- # decimal 2 00:11:04.203 11:41:34 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@353 -- # local d=2 00:11:04.203 11:41:34 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:11:04.203 11:41:34 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@355 -- # echo 2 00:11:04.203 11:41:34 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@366 -- # ver2[v]=2 00:11:04.203 11:41:34 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:11:04.203 11:41:34 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:11:04.203 11:41:34 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@368 -- # return 0 00:11:04.203 11:41:34 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:11:04.203 11:41:34 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:11:04.203 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:04.203 --rc genhtml_branch_coverage=1 00:11:04.203 --rc genhtml_function_coverage=1 00:11:04.203 --rc genhtml_legend=1 00:11:04.203 --rc geninfo_all_blocks=1 00:11:04.203 --rc geninfo_unexecuted_blocks=1 00:11:04.203 00:11:04.203 ' 00:11:04.203 11:41:34 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:11:04.203 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:04.203 --rc genhtml_branch_coverage=1 00:11:04.203 --rc genhtml_function_coverage=1 00:11:04.203 --rc genhtml_legend=1 00:11:04.203 --rc geninfo_all_blocks=1 00:11:04.203 --rc geninfo_unexecuted_blocks=1 00:11:04.203 00:11:04.203 ' 00:11:04.203 11:41:34 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:11:04.203 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:04.203 --rc genhtml_branch_coverage=1 00:11:04.203 --rc genhtml_function_coverage=1 00:11:04.203 --rc genhtml_legend=1 00:11:04.203 --rc geninfo_all_blocks=1 00:11:04.204 --rc geninfo_unexecuted_blocks=1 00:11:04.204 00:11:04.204 ' 00:11:04.204 11:41:34 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:11:04.204 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:04.204 --rc genhtml_branch_coverage=1 00:11:04.204 --rc genhtml_function_coverage=1 00:11:04.204 --rc genhtml_legend=1 00:11:04.204 --rc geninfo_all_blocks=1 00:11:04.204 --rc geninfo_unexecuted_blocks=1 00:11:04.204 00:11:04.204 ' 00:11:04.204 11:41:34 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@9 -- # source 
/home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:11:04.204 11:41:34 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@7 -- # uname -s 00:11:04.204 11:41:34 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:11:04.204 11:41:34 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:11:04.204 11:41:34 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:11:04.204 11:41:34 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:11:04.204 11:41:34 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:11:04.204 11:41:34 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:11:04.204 11:41:34 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:11:04.204 11:41:34 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:11:04.204 11:41:34 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:11:04.204 11:41:34 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:11:04.204 11:41:34 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:f820f793-c892-4aa4-a8a4-5ed3fda41d6c 00:11:04.204 11:41:34 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@18 -- # NVME_HOSTID=f820f793-c892-4aa4-a8a4-5ed3fda41d6c 00:11:04.204 11:41:34 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:11:04.204 11:41:34 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:11:04.204 11:41:34 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:11:04.204 11:41:34 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:11:04.204 11:41:34 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:11:04.204 11:41:34 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@15 -- # shopt -s extglob 00:11:04.204 11:41:34 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:11:04.204 11:41:34 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:11:04.204 11:41:34 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:11:04.204 11:41:34 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:04.204 
11:41:34 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:04.204 11:41:34 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:04.204 11:41:34 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- paths/export.sh@5 -- # export PATH 00:11:04.204 11:41:34 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:04.204 11:41:34 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@51 -- # : 0 00:11:04.204 11:41:34 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:11:04.204 11:41:34 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:11:04.204 11:41:34 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:11:04.204 11:41:34 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:11:04.204 11:41:34 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:11:04.204 11:41:34 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:11:04.204 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:11:04.204 11:41:34 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:11:04.204 11:41:34 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:11:04.204 11:41:34 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@55 -- # have_pci_nics=0 00:11:04.204 11:41:34 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@11 -- # 
MALLOC_BDEV_SIZE=64 00:11:04.204 11:41:34 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:11:04.204 11:41:34 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@13 -- # nqn=nqn.2016-06.io.spdk:cnode1 00:11:04.204 11:41:34 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@15 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:11:04.204 11:41:34 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@43 -- # nvmftestinit 00:11:04.204 11:41:34 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:11:04.204 11:41:34 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:11:04.204 11:41:34 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@476 -- # prepare_net_devs 00:11:04.204 11:41:34 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@438 -- # local -g is_hw=no 00:11:04.204 11:41:34 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@440 -- # remove_spdk_ns 00:11:04.204 11:41:34 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:11:04.204 11:41:34 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:11:04.204 11:41:34 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:11:04.204 11:41:34 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@442 -- # [[ virt != virt ]] 00:11:04.204 11:41:34 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@444 -- # [[ no == yes ]] 00:11:04.204 11:41:34 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@451 -- # [[ virt == phy ]] 00:11:04.204 11:41:34 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@454 -- # [[ virt == phy-fallback ]] 00:11:04.204 11:41:34 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@459 -- # [[ tcp == tcp ]] 00:11:04.204 11:41:34 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@460 -- # nvmf_veth_init 00:11:04.204 11:41:34 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@145 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:11:04.205 11:41:34 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@146 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.2 00:11:04.205 11:41:34 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@147 -- # NVMF_FIRST_TARGET_IP=10.0.0.3 00:11:04.205 11:41:34 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@148 -- # NVMF_SECOND_TARGET_IP=10.0.0.4 00:11:04.205 11:41:34 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@149 -- # NVMF_INITIATOR_IP=10.0.0.1 00:11:04.205 11:41:34 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@150 -- # NVMF_BRIDGE=nvmf_br 00:11:04.205 11:41:34 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@151 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:11:04.205 11:41:34 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@152 -- # NVMF_INITIATOR_INTERFACE2=nvmf_init_if2 00:11:04.205 11:41:34 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@153 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:11:04.205 11:41:34 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@154 -- # NVMF_INITIATOR_BRIDGE2=nvmf_init_br2 00:11:04.205 11:41:34 
nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@155 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:11:04.205 11:41:34 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@156 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:11:04.205 11:41:34 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@157 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:11:04.205 11:41:34 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@158 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:11:04.205 11:41:34 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@159 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:11:04.205 11:41:34 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@160 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:11:04.205 11:41:34 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@162 -- # ip link set nvmf_init_br nomaster 00:11:04.205 Cannot find device "nvmf_init_br" 00:11:04.205 11:41:34 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@162 -- # true 00:11:04.205 11:41:34 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@163 -- # ip link set nvmf_init_br2 nomaster 00:11:04.205 Cannot find device "nvmf_init_br2" 00:11:04.205 11:41:34 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@163 -- # true 00:11:04.205 11:41:34 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@164 -- # ip link set nvmf_tgt_br nomaster 00:11:04.205 Cannot find device "nvmf_tgt_br" 00:11:04.205 11:41:34 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@164 -- # true 00:11:04.205 11:41:34 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@165 -- # ip link set nvmf_tgt_br2 nomaster 00:11:04.205 Cannot find device "nvmf_tgt_br2" 00:11:04.205 11:41:34 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@165 -- # true 00:11:04.205 11:41:34 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@166 -- # ip link set nvmf_init_br down 00:11:04.205 Cannot find device "nvmf_init_br" 00:11:04.205 11:41:34 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@166 -- # true 00:11:04.205 11:41:34 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@167 -- # ip link set nvmf_init_br2 down 00:11:04.464 Cannot find device "nvmf_init_br2" 00:11:04.464 11:41:34 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@167 -- # true 00:11:04.464 11:41:34 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@168 -- # ip link set nvmf_tgt_br down 00:11:04.464 Cannot find device "nvmf_tgt_br" 00:11:04.464 11:41:34 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@168 -- # true 00:11:04.464 11:41:34 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@169 -- # ip link set nvmf_tgt_br2 down 00:11:04.464 Cannot find device "nvmf_tgt_br2" 00:11:04.464 11:41:34 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@169 -- # true 00:11:04.464 11:41:34 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@170 -- # ip link delete nvmf_br type bridge 00:11:04.464 Cannot find device "nvmf_br" 00:11:04.464 11:41:34 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@170 -- # true 00:11:04.464 11:41:34 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@171 -- # ip link delete nvmf_init_if 00:11:04.464 Cannot find device "nvmf_init_if" 00:11:04.464 11:41:34 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- 
nvmf/common.sh@171 -- # true 00:11:04.464 11:41:34 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@172 -- # ip link delete nvmf_init_if2 00:11:04.464 Cannot find device "nvmf_init_if2" 00:11:04.464 11:41:34 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@172 -- # true 00:11:04.464 11:41:34 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@173 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:11:04.464 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:11:04.464 11:41:34 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@173 -- # true 00:11:04.464 11:41:34 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@174 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:11:04.464 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:11:04.464 11:41:34 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@174 -- # true 00:11:04.464 11:41:34 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@177 -- # ip netns add nvmf_tgt_ns_spdk 00:11:04.464 11:41:34 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@180 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:11:04.464 11:41:34 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@181 -- # ip link add nvmf_init_if2 type veth peer name nvmf_init_br2 00:11:04.464 11:41:34 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@182 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:11:04.464 11:41:34 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@183 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:11:04.464 11:41:34 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:11:04.464 11:41:34 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@187 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:11:04.465 11:41:34 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@190 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:11:04.465 11:41:34 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@191 -- # ip addr add 10.0.0.2/24 dev nvmf_init_if2 00:11:04.465 11:41:34 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@192 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if 00:11:04.465 11:41:34 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@193 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2 00:11:04.465 11:41:34 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@196 -- # ip link set nvmf_init_if up 00:11:04.465 11:41:34 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@197 -- # ip link set nvmf_init_if2 up 00:11:04.465 11:41:34 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@198 -- # ip link set nvmf_init_br up 00:11:04.465 11:41:34 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@199 -- # ip link set nvmf_init_br2 up 00:11:04.465 11:41:34 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@200 -- # ip link set nvmf_tgt_br up 00:11:04.465 11:41:34 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@201 -- # ip link set nvmf_tgt_br2 up 00:11:04.465 11:41:34 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@202 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 
00:11:04.465 11:41:34 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@203 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:11:04.465 11:41:34 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@204 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:11:04.465 11:41:34 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@207 -- # ip link add nvmf_br type bridge 00:11:04.465 11:41:34 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@208 -- # ip link set nvmf_br up 00:11:04.465 11:41:34 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@211 -- # ip link set nvmf_init_br master nvmf_br 00:11:04.465 11:41:34 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@212 -- # ip link set nvmf_init_br2 master nvmf_br 00:11:04.724 11:41:34 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@213 -- # ip link set nvmf_tgt_br master nvmf_br 00:11:04.724 11:41:34 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@214 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:11:04.724 11:41:34 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@217 -- # ipts -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:11:04.724 11:41:34 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT' 00:11:04.724 11:41:34 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@218 -- # ipts -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT 00:11:04.724 11:41:34 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT' 00:11:04.724 11:41:34 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@219 -- # ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:11:04.724 11:41:34 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@790 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT' 00:11:04.724 11:41:34 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@222 -- # ping -c 1 10.0.0.3 00:11:04.724 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:11:04.724 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.119 ms 00:11:04.724 00:11:04.724 --- 10.0.0.3 ping statistics --- 00:11:04.724 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:11:04.724 rtt min/avg/max/mdev = 0.119/0.119/0.119/0.000 ms 00:11:04.724 11:41:34 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@223 -- # ping -c 1 10.0.0.4 00:11:04.724 PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data. 00:11:04.724 64 bytes from 10.0.0.4: icmp_seq=1 ttl=64 time=0.043 ms 00:11:04.724 00:11:04.724 --- 10.0.0.4 ping statistics --- 00:11:04.724 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:11:04.724 rtt min/avg/max/mdev = 0.043/0.043/0.043/0.000 ms 00:11:04.724 11:41:34 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@224 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:11:04.724 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:11:04.724 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.026 ms 00:11:04.724 00:11:04.724 --- 10.0.0.1 ping statistics --- 00:11:04.724 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:11:04.724 rtt min/avg/max/mdev = 0.026/0.026/0.026/0.000 ms 00:11:04.724 11:41:34 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@225 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.2 00:11:04.724 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:11:04.724 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.058 ms 00:11:04.724 00:11:04.724 --- 10.0.0.2 ping statistics --- 00:11:04.724 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:11:04.724 rtt min/avg/max/mdev = 0.058/0.058/0.058/0.000 ms 00:11:04.724 11:41:34 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@227 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:11:04.724 11:41:34 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@461 -- # return 0 00:11:04.724 11:41:34 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:11:04.724 11:41:34 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:11:04.724 11:41:34 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:11:04.724 11:41:34 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:11:04.724 11:41:34 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:11:04.724 11:41:34 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:11:04.724 11:41:34 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:11:04.724 11:41:34 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@45 -- # '[' -z 10.0.0.4 ']' 00:11:04.724 11:41:34 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@51 -- # '[' tcp '!=' tcp ']' 00:11:04.724 11:41:34 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@57 -- # nvmfappstart -m 0xF 00:11:04.724 11:41:34 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:11:04.724 11:41:34 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@726 -- # xtrace_disable 00:11:04.724 11:41:34 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@10 -- # set +x 00:11:04.724 11:41:34 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@509 -- # nvmfpid=78904 00:11:04.724 11:41:34 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@510 -- # waitforlisten 78904 00:11:04.724 11:41:34 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@508 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:11:04.724 11:41:34 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@835 -- # '[' -z 78904 ']' 00:11:04.725 11:41:34 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:11:04.725 11:41:34 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@840 -- # local max_retries=100 00:11:04.725 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:11:04.725 11:41:34 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:11:04.725 11:41:34 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@844 -- # xtrace_disable 00:11:04.725 11:41:34 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@10 -- # set +x 00:11:04.725 [2024-11-28 11:41:34.734465] Starting SPDK v25.01-pre git sha1 35cd3e84d / DPDK 24.11.0-rc4 initialization... 00:11:04.725 [2024-11-28 11:41:34.734566] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:11:04.983 [2024-11-28 11:41:34.863556] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.11.0-rc4 is used. There is no support for it in SPDK. Enabled only for validation. 00:11:04.983 [2024-11-28 11:41:34.911277] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:11:04.983 [2024-11-28 11:41:34.967947] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:11:04.983 [2024-11-28 11:41:34.968009] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:11:04.983 [2024-11-28 11:41:34.968028] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:11:04.983 [2024-11-28 11:41:34.968038] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:11:04.983 [2024-11-28 11:41:34.968047] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
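# Condensed from the nvmf_veth_init / nvmfappstart trace above: the multipath test builds a
# veth-plus-bridge topology with the target side in its own network namespace, then launches
# nvmf_tgt inside that namespace. A minimal sketch of one initiator/target pair only; the
# common.sh sequence shown in the log also adds the second pair (nvmf_init_if2 / nvmf_tgt_if2),
# the iptables ACCEPT rules and the ping checks.
ip netns add nvmf_tgt_ns_spdk
ip link add nvmf_init_if type veth peer name nvmf_init_br
ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br
ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk
ip addr add 10.0.0.1/24 dev nvmf_init_if
ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if
ip link set nvmf_init_if up && ip link set nvmf_init_br up && ip link set nvmf_tgt_br up
ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
ip link add nvmf_br type bridge && ip link set nvmf_br up
ip link set nvmf_init_br master nvmf_br
ip link set nvmf_tgt_br master nvmf_br
ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF &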
00:11:04.983 [2024-11-28 11:41:34.969363] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:11:04.983 [2024-11-28 11:41:34.969440] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:11:04.983 [2024-11-28 11:41:34.969552] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:11:04.983 [2024-11-28 11:41:34.969560] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:11:04.983 [2024-11-28 11:41:35.029869] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:11:05.916 11:41:35 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:11:05.916 11:41:35 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@868 -- # return 0 00:11:05.916 11:41:35 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:11:05.916 11:41:35 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@732 -- # xtrace_disable 00:11:05.916 11:41:35 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@10 -- # set +x 00:11:05.916 11:41:35 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:11:05.916 11:41:35 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:11:05.916 [2024-11-28 11:41:36.037769] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:11:06.176 11:41:36 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@61 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0 00:11:06.434 Malloc0 00:11:06.434 11:41:36 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@62 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME -r 00:11:06.691 11:41:36 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:11:06.949 11:41:36 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 00:11:07.207 [2024-11-28 11:41:37.176747] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:11:07.207 11:41:37 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@65 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.4 -s 4420 00:11:07.465 [2024-11-28 11:41:37.432984] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.4 port 4420 *** 00:11:07.465 11:41:37 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@67 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:f820f793-c892-4aa4-a8a4-5ed3fda41d6c --hostid=f820f793-c892-4aa4-a8a4-5ed3fda41d6c -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.3 -s 4420 -g -G 00:11:07.465 11:41:37 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@68 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:f820f793-c892-4aa4-a8a4-5ed3fda41d6c --hostid=f820f793-c892-4aa4-a8a4-5ed3fda41d6c -t tcp -n nqn.2016-06.io.spdk:cnode1 
-a 10.0.0.4 -s 4420 -g -G 00:11:07.723 11:41:37 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@69 -- # waitforserial SPDKISFASTANDAWESOME 00:11:07.723 11:41:37 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1202 -- # local i=0 00:11:07.723 11:41:37 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0 00:11:07.723 11:41:37 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1204 -- # [[ -n '' ]] 00:11:07.723 11:41:37 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1209 -- # sleep 2 00:11:09.649 11:41:39 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 00:11:09.649 11:41:39 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL 00:11:09.649 11:41:39 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1211 -- # grep -c SPDKISFASTANDAWESOME 00:11:09.649 11:41:39 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1211 -- # nvme_devices=1 00:11:09.649 11:41:39 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter )) 00:11:09.649 11:41:39 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1212 -- # return 0 00:11:09.649 11:41:39 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@72 -- # get_subsystem nqn.2016-06.io.spdk:cnode1 SPDKISFASTANDAWESOME 00:11:09.649 11:41:39 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@34 -- # local nqn=nqn.2016-06.io.spdk:cnode1 serial=SPDKISFASTANDAWESOME s 00:11:09.649 11:41:39 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@36 -- # for s in /sys/class/nvme-subsystem/* 00:11:09.649 11:41:39 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@37 -- # [[ nqn.2016-06.io.spdk:cnode1 == \n\q\n\.\2\0\1\6\-\0\6\.\i\o\.\s\p\d\k\:\c\n\o\d\e\1 ]] 00:11:09.649 11:41:39 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@37 -- # [[ SPDKISFASTANDAWESOME == \S\P\D\K\I\S\F\A\S\T\A\N\D\A\W\E\S\O\M\E ]] 00:11:09.649 11:41:39 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@38 -- # echo nvme-subsys0 00:11:09.649 11:41:39 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@38 -- # return 0 00:11:09.649 11:41:39 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@72 -- # subsystem=nvme-subsys0 00:11:09.649 11:41:39 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@73 -- # paths=(/sys/class/nvme-subsystem/$subsystem/nvme*/nvme*c*) 00:11:09.649 11:41:39 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@74 -- # paths=("${paths[@]##*/}") 00:11:09.649 11:41:39 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@76 -- # (( 2 == 2 )) 00:11:09.649 11:41:39 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@78 -- # p0=nvme0c0n1 00:11:09.649 11:41:39 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@79 -- # p1=nvme0c1n1 00:11:09.649 11:41:39 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@81 -- # check_ana_state nvme0c0n1 optimized 00:11:09.649 11:41:39 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@18 -- # local path=nvme0c0n1 ana_state=optimized 
00:11:09.649 11:41:39 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@22 -- # local timeout=20 00:11:09.649 11:41:39 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@23 -- # local ana_state_f=/sys/block/nvme0c0n1/ana_state 00:11:09.649 11:41:39 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ ! -e /sys/block/nvme0c0n1/ana_state ]] 00:11:09.649 11:41:39 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ optimized != \o\p\t\i\m\i\z\e\d ]] 00:11:09.649 11:41:39 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@82 -- # check_ana_state nvme0c1n1 optimized 00:11:09.649 11:41:39 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@18 -- # local path=nvme0c1n1 ana_state=optimized 00:11:09.649 11:41:39 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@22 -- # local timeout=20 00:11:09.649 11:41:39 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@23 -- # local ana_state_f=/sys/block/nvme0c1n1/ana_state 00:11:09.649 11:41:39 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ ! -e /sys/block/nvme0c1n1/ana_state ]] 00:11:09.649 11:41:39 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ optimized != \o\p\t\i\m\i\z\e\d ]] 00:11:09.649 11:41:39 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@85 -- # echo numa 00:11:09.649 11:41:39 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@88 -- # fio_pid=78999 00:11:09.649 11:41:39 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@87 -- # /home/vagrant/spdk_repo/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 128 -t randrw -r 6 -v 00:11:09.649 11:41:39 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@90 -- # sleep 1 00:11:09.649 [global] 00:11:09.649 thread=1 00:11:09.649 invalidate=1 00:11:09.649 rw=randrw 00:11:09.649 time_based=1 00:11:09.649 runtime=6 00:11:09.649 ioengine=libaio 00:11:09.649 direct=1 00:11:09.649 bs=4096 00:11:09.649 iodepth=128 00:11:09.649 norandommap=0 00:11:09.649 numjobs=1 00:11:09.649 00:11:09.649 verify_dump=1 00:11:09.649 verify_backlog=512 00:11:09.649 verify_state_save=0 00:11:09.649 do_verify=1 00:11:09.649 verify=crc32c-intel 00:11:09.649 [job0] 00:11:09.649 filename=/dev/nvme0n1 00:11:09.908 Could not set queue depth (nvme0n1) 00:11:09.908 job0: (g=0): rw=randrw, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:11:09.908 fio-3.35 00:11:09.908 Starting 1 thread 00:11:10.844 11:41:40 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@92 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 -n inaccessible 00:11:11.103 11:41:41 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@93 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.4 -s 4420 -n non_optimized 00:11:11.362 11:41:41 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@95 -- # check_ana_state nvme0c0n1 inaccessible 00:11:11.362 11:41:41 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@18 -- # local path=nvme0c0n1 ana_state=inaccessible 00:11:11.362 11:41:41 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@22 -- # local timeout=20 
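# The check_ana_state calls traced around this point (multipath.sh@18-@25) amount to a sysfs
# poll loop: wait until /sys/block/<ctrl path>/ana_state reports the expected ANA state.
# Reconstructed from the visible xtrace rather than copied from multipath.sh, so the retry/
# sleep handling below is an assumption; the variable names and tests are taken from the trace.
check_ana_state() {
    local path=$1 ana_state=$2
    local timeout=20
    local ana_state_f=/sys/block/$path/ana_state
    while [[ ! -e $ana_state_f || $(<"$ana_state_f") != "$ana_state" ]]; do
        (( timeout-- == 0 )) && return 1   # assumed: give up after ~20 one-second polls
        sleep 1
    done
}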
00:11:11.362 11:41:41 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@23 -- # local ana_state_f=/sys/block/nvme0c0n1/ana_state 00:11:11.362 11:41:41 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ ! -e /sys/block/nvme0c0n1/ana_state ]] 00:11:11.362 11:41:41 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ inaccessible != \i\n\a\c\c\e\s\s\i\b\l\e ]] 00:11:11.362 11:41:41 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@96 -- # check_ana_state nvme0c1n1 non-optimized 00:11:11.362 11:41:41 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@18 -- # local path=nvme0c1n1 ana_state=non-optimized 00:11:11.362 11:41:41 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@22 -- # local timeout=20 00:11:11.362 11:41:41 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@23 -- # local ana_state_f=/sys/block/nvme0c1n1/ana_state 00:11:11.362 11:41:41 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ ! -e /sys/block/nvme0c1n1/ana_state ]] 00:11:11.362 11:41:41 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ non-optimized != \n\o\n\-\o\p\t\i\m\i\z\e\d ]] 00:11:11.362 11:41:41 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@98 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 -n non_optimized 00:11:11.620 11:41:41 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@99 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.4 -s 4420 -n inaccessible 00:11:11.878 11:41:41 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@101 -- # check_ana_state nvme0c0n1 non-optimized 00:11:11.878 11:41:41 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@18 -- # local path=nvme0c0n1 ana_state=non-optimized 00:11:11.878 11:41:41 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@22 -- # local timeout=20 00:11:11.878 11:41:41 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@23 -- # local ana_state_f=/sys/block/nvme0c0n1/ana_state 00:11:11.878 11:41:41 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ ! -e /sys/block/nvme0c0n1/ana_state ]] 00:11:11.878 11:41:41 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ non-optimized != \n\o\n\-\o\p\t\i\m\i\z\e\d ]] 00:11:11.878 11:41:41 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@102 -- # check_ana_state nvme0c1n1 inaccessible 00:11:11.878 11:41:41 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@18 -- # local path=nvme0c1n1 ana_state=inaccessible 00:11:11.879 11:41:41 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@22 -- # local timeout=20 00:11:11.879 11:41:41 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@23 -- # local ana_state_f=/sys/block/nvme0c1n1/ana_state 00:11:11.879 11:41:41 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ ! 
-e /sys/block/nvme0c1n1/ana_state ]] 00:11:11.879 11:41:41 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ inaccessible != \i\n\a\c\c\e\s\s\i\b\l\e ]] 00:11:11.879 11:41:41 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@104 -- # wait 78999 00:11:16.076 00:11:16.076 job0: (groupid=0, jobs=1): err= 0: pid=79020: Thu Nov 28 11:41:46 2024 00:11:16.076 read: IOPS=9112, BW=35.6MiB/s (37.3MB/s)(214MiB/6008msec) 00:11:16.076 slat (usec): min=7, max=7874, avg=66.18, stdev=259.33 00:11:16.076 clat (usec): min=975, max=19018, avg=9705.57, stdev=1852.71 00:11:16.076 lat (usec): min=1053, max=19053, avg=9771.74, stdev=1859.01 00:11:16.076 clat percentiles (usec): 00:11:16.076 | 1.00th=[ 4752], 5.00th=[ 7111], 10.00th=[ 7898], 20.00th=[ 8455], 00:11:16.076 | 30.00th=[ 8848], 40.00th=[ 9241], 50.00th=[ 9634], 60.00th=[ 9896], 00:11:16.076 | 70.00th=[10290], 80.00th=[10814], 90.00th=[11731], 95.00th=[13435], 00:11:16.076 | 99.00th=[15401], 99.50th=[15795], 99.90th=[16712], 99.95th=[17695], 00:11:16.076 | 99.99th=[18744] 00:11:16.076 bw ( KiB/s): min= 2160, max=26904, per=50.59%, avg=18442.45, stdev=7021.89, samples=11 00:11:16.076 iops : min= 540, max= 6726, avg=4610.55, stdev=1755.46, samples=11 00:11:16.076 write: IOPS=5304, BW=20.7MiB/s (21.7MB/s)(108MiB/5223msec); 0 zone resets 00:11:16.076 slat (usec): min=15, max=1893, avg=73.97, stdev=188.54 00:11:16.076 clat (usec): min=1066, max=18112, avg=8320.62, stdev=1739.38 00:11:16.076 lat (usec): min=1139, max=18139, avg=8394.59, stdev=1747.23 00:11:16.076 clat percentiles (usec): 00:11:16.076 | 1.00th=[ 3589], 5.00th=[ 4752], 10.00th=[ 6063], 20.00th=[ 7373], 00:11:16.076 | 30.00th=[ 7767], 40.00th=[ 8094], 50.00th=[ 8455], 60.00th=[ 8717], 00:11:16.076 | 70.00th=[ 9110], 80.00th=[ 9503], 90.00th=[10159], 95.00th=[10814], 00:11:16.076 | 99.00th=[13173], 99.50th=[14222], 99.90th=[15664], 99.95th=[16712], 00:11:16.076 | 99.99th=[17957] 00:11:16.076 bw ( KiB/s): min= 2376, max=26640, per=86.89%, avg=18438.91, stdev=6921.74, samples=11 00:11:16.076 iops : min= 594, max= 6660, avg=4609.64, stdev=1730.40, samples=11 00:11:16.076 lat (usec) : 1000=0.01% 00:11:16.076 lat (msec) : 2=0.01%, 4=0.96%, 10=70.35%, 20=28.68% 00:11:16.076 cpu : usr=5.48%, sys=19.73%, ctx=4996, majf=0, minf=90 00:11:16.076 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.2%, >=64=99.6% 00:11:16.076 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:11:16.076 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:11:16.076 issued rwts: total=54750,27708,0,0 short=0,0,0,0 dropped=0,0,0,0 00:11:16.076 latency : target=0, window=0, percentile=100.00%, depth=128 00:11:16.076 00:11:16.076 Run status group 0 (all jobs): 00:11:16.076 READ: bw=35.6MiB/s (37.3MB/s), 35.6MiB/s-35.6MiB/s (37.3MB/s-37.3MB/s), io=214MiB (224MB), run=6008-6008msec 00:11:16.076 WRITE: bw=20.7MiB/s (21.7MB/s), 20.7MiB/s-20.7MiB/s (21.7MB/s-21.7MB/s), io=108MiB (113MB), run=5223-5223msec 00:11:16.076 00:11:16.076 Disk stats (read/write): 00:11:16.076 nvme0n1: ios=54092/27120, merge=0/0, ticks=503433/211398, in_queue=714831, util=98.63% 00:11:16.077 11:41:46 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@106 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 -n optimized 00:11:16.335 11:41:46 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@107 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.4 -s 4420 -n optimized 00:11:16.594 11:41:46 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@109 -- # check_ana_state nvme0c0n1 optimized 00:11:16.594 11:41:46 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@18 -- # local path=nvme0c0n1 ana_state=optimized 00:11:16.594 11:41:46 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@22 -- # local timeout=20 00:11:16.594 11:41:46 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@23 -- # local ana_state_f=/sys/block/nvme0c0n1/ana_state 00:11:16.594 11:41:46 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ ! -e /sys/block/nvme0c0n1/ana_state ]] 00:11:16.594 11:41:46 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ optimized != \o\p\t\i\m\i\z\e\d ]] 00:11:16.595 11:41:46 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@110 -- # check_ana_state nvme0c1n1 optimized 00:11:16.595 11:41:46 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@18 -- # local path=nvme0c1n1 ana_state=optimized 00:11:16.595 11:41:46 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@22 -- # local timeout=20 00:11:16.595 11:41:46 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@23 -- # local ana_state_f=/sys/block/nvme0c1n1/ana_state 00:11:16.595 11:41:46 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ ! -e /sys/block/nvme0c1n1/ana_state ]] 00:11:16.595 11:41:46 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ optimized != \o\p\t\i\m\i\z\e\d ]] 00:11:16.595 11:41:46 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@113 -- # echo round-robin 00:11:16.595 11:41:46 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@116 -- # fio_pid=79102 00:11:16.595 11:41:46 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@115 -- # /home/vagrant/spdk_repo/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 128 -t randrw -r 6 -v 00:11:16.595 11:41:46 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@118 -- # sleep 1 00:11:16.595 [global] 00:11:16.595 thread=1 00:11:16.595 invalidate=1 00:11:16.595 rw=randrw 00:11:16.595 time_based=1 00:11:16.595 runtime=6 00:11:16.595 ioengine=libaio 00:11:16.595 direct=1 00:11:16.595 bs=4096 00:11:16.595 iodepth=128 00:11:16.595 norandommap=0 00:11:16.595 numjobs=1 00:11:16.595 00:11:16.595 verify_dump=1 00:11:16.595 verify_backlog=512 00:11:16.595 verify_state_save=0 00:11:16.595 do_verify=1 00:11:16.595 verify=crc32c-intel 00:11:16.595 [job0] 00:11:16.595 filename=/dev/nvme0n1 00:11:16.854 Could not set queue depth (nvme0n1) 00:11:16.854 job0: (g=0): rw=randrw, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:11:16.854 fio-3.35 00:11:16.854 Starting 1 thread 00:11:17.809 11:41:47 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@120 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 -n inaccessible 00:11:18.067 11:41:47 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@121 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 
10.0.0.4 -s 4420 -n non_optimized 00:11:18.326 11:41:48 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@123 -- # check_ana_state nvme0c0n1 inaccessible 00:11:18.326 11:41:48 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@18 -- # local path=nvme0c0n1 ana_state=inaccessible 00:11:18.326 11:41:48 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@22 -- # local timeout=20 00:11:18.326 11:41:48 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@23 -- # local ana_state_f=/sys/block/nvme0c0n1/ana_state 00:11:18.326 11:41:48 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ ! -e /sys/block/nvme0c0n1/ana_state ]] 00:11:18.326 11:41:48 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ inaccessible != \i\n\a\c\c\e\s\s\i\b\l\e ]] 00:11:18.326 11:41:48 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@124 -- # check_ana_state nvme0c1n1 non-optimized 00:11:18.326 11:41:48 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@18 -- # local path=nvme0c1n1 ana_state=non-optimized 00:11:18.326 11:41:48 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@22 -- # local timeout=20 00:11:18.326 11:41:48 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@23 -- # local ana_state_f=/sys/block/nvme0c1n1/ana_state 00:11:18.326 11:41:48 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ ! -e /sys/block/nvme0c1n1/ana_state ]] 00:11:18.326 11:41:48 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ non-optimized != \n\o\n\-\o\p\t\i\m\i\z\e\d ]] 00:11:18.326 11:41:48 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 -n non_optimized 00:11:18.585 11:41:48 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.4 -s 4420 -n inaccessible 00:11:18.843 11:41:48 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@129 -- # check_ana_state nvme0c0n1 non-optimized 00:11:18.843 11:41:48 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@18 -- # local path=nvme0c0n1 ana_state=non-optimized 00:11:18.843 11:41:48 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@22 -- # local timeout=20 00:11:18.843 11:41:48 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@23 -- # local ana_state_f=/sys/block/nvme0c0n1/ana_state 00:11:18.843 11:41:48 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ ! 
-e /sys/block/nvme0c0n1/ana_state ]] 00:11:18.843 11:41:48 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ non-optimized != \n\o\n\-\o\p\t\i\m\i\z\e\d ]] 00:11:18.843 11:41:48 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@130 -- # check_ana_state nvme0c1n1 inaccessible 00:11:18.843 11:41:48 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@18 -- # local path=nvme0c1n1 ana_state=inaccessible 00:11:18.843 11:41:48 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@22 -- # local timeout=20 00:11:18.843 11:41:48 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@23 -- # local ana_state_f=/sys/block/nvme0c1n1/ana_state 00:11:18.843 11:41:48 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ ! -e /sys/block/nvme0c1n1/ana_state ]] 00:11:18.843 11:41:48 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ inaccessible != \i\n\a\c\c\e\s\s\i\b\l\e ]] 00:11:18.843 11:41:48 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@132 -- # wait 79102 00:11:23.032 00:11:23.032 job0: (groupid=0, jobs=1): err= 0: pid=79127: Thu Nov 28 11:41:53 2024 00:11:23.032 read: IOPS=11.2k, BW=43.8MiB/s (45.9MB/s)(263MiB/6002msec) 00:11:23.032 slat (usec): min=4, max=6006, avg=44.56, stdev=194.57 00:11:23.032 clat (usec): min=317, max=17470, avg=7863.50, stdev=2085.94 00:11:23.032 lat (usec): min=332, max=17477, avg=7908.06, stdev=2100.05 00:11:23.032 clat percentiles (usec): 00:11:23.032 | 1.00th=[ 2704], 5.00th=[ 3818], 10.00th=[ 4817], 20.00th=[ 6456], 00:11:23.032 | 30.00th=[ 7439], 40.00th=[ 7963], 50.00th=[ 8225], 60.00th=[ 8455], 00:11:23.032 | 70.00th=[ 8586], 80.00th=[ 8979], 90.00th=[ 9503], 95.00th=[11731], 00:11:23.032 | 99.00th=[13435], 99.50th=[13829], 99.90th=[14484], 99.95th=[15008], 00:11:23.032 | 99.99th=[16909] 00:11:23.032 bw ( KiB/s): min= 9608, max=41416, per=52.35%, avg=23455.27, stdev=9219.36, samples=11 00:11:23.032 iops : min= 2402, max=10354, avg=5863.82, stdev=2304.84, samples=11 00:11:23.032 write: IOPS=6704, BW=26.2MiB/s (27.5MB/s)(137MiB/5230msec); 0 zone resets 00:11:23.032 slat (usec): min=11, max=8935, avg=53.82, stdev=145.48 00:11:23.032 clat (usec): min=820, max=15876, avg=6658.64, stdev=1838.60 00:11:23.032 lat (usec): min=878, max=15894, avg=6712.46, stdev=1852.74 00:11:23.032 clat percentiles (usec): 00:11:23.032 | 1.00th=[ 2474], 5.00th=[ 3261], 10.00th=[ 3818], 20.00th=[ 4817], 00:11:23.032 | 30.00th=[ 5800], 40.00th=[ 6849], 50.00th=[ 7242], 60.00th=[ 7570], 00:11:23.032 | 70.00th=[ 7832], 80.00th=[ 8029], 90.00th=[ 8356], 95.00th=[ 8717], 00:11:23.032 | 99.00th=[11469], 99.50th=[12125], 99.90th=[13698], 99.95th=[14484], 00:11:23.032 | 99.99th=[15795] 00:11:23.032 bw ( KiB/s): min= 9736, max=40552, per=87.60%, avg=23493.82, stdev=9089.59, samples=11 00:11:23.032 iops : min= 2434, max=10138, avg=5873.45, stdev=2272.40, samples=11 00:11:23.032 lat (usec) : 500=0.01%, 750=0.03%, 1000=0.06% 00:11:23.032 lat (msec) : 2=0.35%, 4=7.41%, 10=86.99%, 20=5.15% 00:11:23.032 cpu : usr=5.83%, sys=22.18%, ctx=6058, majf=0, minf=183 00:11:23.032 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.2%, >=64=99.7% 00:11:23.033 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:11:23.033 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:11:23.033 issued rwts: total=67227,35067,0,0 short=0,0,0,0 dropped=0,0,0,0 00:11:23.033 latency : target=0, 
window=0, percentile=100.00%, depth=128 00:11:23.033 00:11:23.033 Run status group 0 (all jobs): 00:11:23.033 READ: bw=43.8MiB/s (45.9MB/s), 43.8MiB/s-43.8MiB/s (45.9MB/s-45.9MB/s), io=263MiB (275MB), run=6002-6002msec 00:11:23.033 WRITE: bw=26.2MiB/s (27.5MB/s), 26.2MiB/s-26.2MiB/s (27.5MB/s-27.5MB/s), io=137MiB (144MB), run=5230-5230msec 00:11:23.033 00:11:23.033 Disk stats (read/write): 00:11:23.033 nvme0n1: ios=66573/34363, merge=0/0, ticks=501780/214101, in_queue=715881, util=98.68% 00:11:23.033 11:41:53 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@134 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:11:23.293 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 2 controller(s) 00:11:23.293 11:41:53 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@135 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:11:23.293 11:41:53 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1223 -- # local i=0 00:11:23.293 11:41:53 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1224 -- # lsblk -o NAME,SERIAL 00:11:23.293 11:41:53 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1224 -- # grep -q -w SPDKISFASTANDAWESOME 00:11:23.293 11:41:53 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1231 -- # lsblk -l -o NAME,SERIAL 00:11:23.293 11:41:53 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1231 -- # grep -q -w SPDKISFASTANDAWESOME 00:11:23.293 11:41:53 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1235 -- # return 0 00:11:23.293 11:41:53 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@137 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:11:23.552 11:41:53 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@139 -- # rm -f ./local-job0-0-verify.state 00:11:23.552 11:41:53 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@140 -- # rm -f ./local-job1-1-verify.state 00:11:23.552 11:41:53 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@142 -- # trap - SIGINT SIGTERM EXIT 00:11:23.552 11:41:53 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@144 -- # nvmftestfini 00:11:23.552 11:41:53 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@516 -- # nvmfcleanup 00:11:23.552 11:41:53 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@121 -- # sync 00:11:23.552 11:41:53 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:11:23.552 11:41:53 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@124 -- # set +e 00:11:23.552 11:41:53 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@125 -- # for i in {1..20} 00:11:23.552 11:41:53 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:11:23.552 rmmod nvme_tcp 00:11:23.552 rmmod nvme_fabrics 00:11:23.552 rmmod nvme_keyring 00:11:23.552 11:41:53 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:11:23.552 11:41:53 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@128 -- # set -e 00:11:23.552 11:41:53 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@129 -- # return 0 00:11:23.552 11:41:53 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@517 -- # '[' -n 
78904 ']' 00:11:23.552 11:41:53 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@518 -- # killprocess 78904 00:11:23.552 11:41:53 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@954 -- # '[' -z 78904 ']' 00:11:23.552 11:41:53 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@958 -- # kill -0 78904 00:11:23.552 11:41:53 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@959 -- # uname 00:11:23.552 11:41:53 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:11:23.552 11:41:53 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 78904 00:11:23.552 killing process with pid 78904 00:11:23.552 11:41:53 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:11:23.552 11:41:53 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:11:23.552 11:41:53 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@972 -- # echo 'killing process with pid 78904' 00:11:23.552 11:41:53 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@973 -- # kill 78904 00:11:23.552 11:41:53 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@978 -- # wait 78904 00:11:23.811 11:41:53 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:11:23.811 11:41:53 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:11:23.811 11:41:53 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:11:23.811 11:41:53 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@297 -- # iptr 00:11:23.811 11:41:53 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@791 -- # iptables-save 00:11:23.811 11:41:53 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:11:23.811 11:41:53 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@791 -- # iptables-restore 00:11:23.811 11:41:53 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@298 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:11:23.811 11:41:53 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@299 -- # nvmf_veth_fini 00:11:23.811 11:41:53 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@233 -- # ip link set nvmf_init_br nomaster 00:11:23.811 11:41:53 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@234 -- # ip link set nvmf_init_br2 nomaster 00:11:23.811 11:41:53 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@235 -- # ip link set nvmf_tgt_br nomaster 00:11:23.811 11:41:53 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@236 -- # ip link set nvmf_tgt_br2 nomaster 00:11:23.811 11:41:53 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@237 -- # ip link set nvmf_init_br down 00:11:23.811 11:41:53 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@238 -- # ip link set nvmf_init_br2 down 00:11:23.811 11:41:53 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@239 -- # ip link set nvmf_tgt_br down 00:11:23.811 11:41:53 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@240 -- # ip link set nvmf_tgt_br2 down 00:11:23.811 11:41:53 
nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@241 -- # ip link delete nvmf_br type bridge 00:11:24.069 11:41:53 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@242 -- # ip link delete nvmf_init_if 00:11:24.069 11:41:54 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@243 -- # ip link delete nvmf_init_if2 00:11:24.069 11:41:54 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@244 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:11:24.069 11:41:54 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@245 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:11:24.069 11:41:54 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@246 -- # remove_spdk_ns 00:11:24.069 11:41:54 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:11:24.069 11:41:54 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:11:24.069 11:41:54 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:11:24.069 11:41:54 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@300 -- # return 0 00:11:24.069 00:11:24.069 real 0m20.072s 00:11:24.069 user 1m14.557s 00:11:24.069 sys 0m9.695s 00:11:24.069 11:41:54 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1130 -- # xtrace_disable 00:11:24.069 11:41:54 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@10 -- # set +x 00:11:24.069 ************************************ 00:11:24.069 END TEST nvmf_target_multipath 00:11:24.069 ************************************ 00:11:24.069 11:41:54 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@32 -- # run_test nvmf_zcopy /home/vagrant/spdk_repo/spdk/test/nvmf/target/zcopy.sh --transport=tcp 00:11:24.069 11:41:54 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:11:24.069 11:41:54 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1111 -- # xtrace_disable 00:11:24.069 11:41:54 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:11:24.069 ************************************ 00:11:24.069 START TEST nvmf_zcopy 00:11:24.069 ************************************ 00:11:24.069 11:41:54 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/zcopy.sh --transport=tcp 00:11:24.328 * Looking for test storage... 
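The multipath trace above is dominated by expansions of a small polling helper in target/multipath.sh (the @18/@22/@23/@25 lines). A minimal sketch reconstructed from those traced lines, with the retry loop hedged in since only its first iteration is visible here (the exact upstream helper may differ):

    check_ana_state() {
        local path=$1 ana_state=$2            # e.g. nvme0c0n1 non-optimized
        local timeout=20
        local ana_state_f=/sys/block/$path/ana_state
        # Poll sysfs until the block device reports the expected ANA state,
        # giving up after roughly $timeout seconds.
        while [[ ! -e $ana_state_f ]] || [[ $(< "$ana_state_f") != "$ana_state" ]]; do
            sleep 1
            if (( --timeout == 0 )); then
                echo "timed out waiting for $path to reach $ana_state" >&2
                return 1
            fi
        done
    }

In the successful runs above the first check already matches, which is why each invocation collapses to the two [[ ... ]] tests and nothing more.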
00:11:24.328 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:11:24.328 11:41:54 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:11:24.328 11:41:54 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:11:24.328 11:41:54 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1693 -- # lcov --version 00:11:24.328 11:41:54 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:11:24.328 11:41:54 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:11:24.328 11:41:54 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@333 -- # local ver1 ver1_l 00:11:24.328 11:41:54 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@334 -- # local ver2 ver2_l 00:11:24.328 11:41:54 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@336 -- # IFS=.-: 00:11:24.328 11:41:54 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@336 -- # read -ra ver1 00:11:24.328 11:41:54 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@337 -- # IFS=.-: 00:11:24.328 11:41:54 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@337 -- # read -ra ver2 00:11:24.328 11:41:54 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@338 -- # local 'op=<' 00:11:24.328 11:41:54 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@340 -- # ver1_l=2 00:11:24.328 11:41:54 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@341 -- # ver2_l=1 00:11:24.328 11:41:54 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:11:24.328 11:41:54 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@344 -- # case "$op" in 00:11:24.328 11:41:54 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@345 -- # : 1 00:11:24.328 11:41:54 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@364 -- # (( v = 0 )) 00:11:24.328 11:41:54 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:11:24.328 11:41:54 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@365 -- # decimal 1 00:11:24.328 11:41:54 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@353 -- # local d=1 00:11:24.328 11:41:54 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:11:24.328 11:41:54 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@355 -- # echo 1 00:11:24.328 11:41:54 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@365 -- # ver1[v]=1 00:11:24.328 11:41:54 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@366 -- # decimal 2 00:11:24.328 11:41:54 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@353 -- # local d=2 00:11:24.328 11:41:54 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:11:24.328 11:41:54 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@355 -- # echo 2 00:11:24.328 11:41:54 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@366 -- # ver2[v]=2 00:11:24.328 11:41:54 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:11:24.328 11:41:54 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:11:24.328 11:41:54 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@368 -- # return 0 00:11:24.328 11:41:54 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:11:24.328 11:41:54 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:11:24.328 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:24.328 --rc genhtml_branch_coverage=1 00:11:24.328 --rc genhtml_function_coverage=1 00:11:24.328 --rc genhtml_legend=1 00:11:24.328 --rc geninfo_all_blocks=1 00:11:24.328 --rc geninfo_unexecuted_blocks=1 00:11:24.328 00:11:24.328 ' 00:11:24.328 11:41:54 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:11:24.328 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:24.328 --rc genhtml_branch_coverage=1 00:11:24.328 --rc genhtml_function_coverage=1 00:11:24.328 --rc genhtml_legend=1 00:11:24.328 --rc geninfo_all_blocks=1 00:11:24.328 --rc geninfo_unexecuted_blocks=1 00:11:24.328 00:11:24.328 ' 00:11:24.328 11:41:54 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:11:24.328 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:24.328 --rc genhtml_branch_coverage=1 00:11:24.328 --rc genhtml_function_coverage=1 00:11:24.328 --rc genhtml_legend=1 00:11:24.328 --rc geninfo_all_blocks=1 00:11:24.328 --rc geninfo_unexecuted_blocks=1 00:11:24.328 00:11:24.329 ' 00:11:24.329 11:41:54 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:11:24.329 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:24.329 --rc genhtml_branch_coverage=1 00:11:24.329 --rc genhtml_function_coverage=1 00:11:24.329 --rc genhtml_legend=1 00:11:24.329 --rc geninfo_all_blocks=1 00:11:24.329 --rc geninfo_unexecuted_blocks=1 00:11:24.329 00:11:24.329 ' 00:11:24.329 11:41:54 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:11:24.329 11:41:54 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@7 -- # uname -s 00:11:24.329 11:41:54 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 
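The scripts/common.sh lines above are the trace of a generic version comparator used to decide whether the installed lcov is older than 2.x (lt 1.15 2). A condensed sketch of the logic visible in the trace, trimmed to numeric components only (the real helper also routes each component through the decimal sanitizer seen above):

    # Usage: lt 1.15 2   -> exit 0 because 1.15 sorts before 2
    cmp_versions() {
        local -a ver1 ver2
        local op=$2 v
        IFS=.-: read -ra ver1 <<< "$1"
        IFS=.-: read -ra ver2 <<< "$3"
        # Compare components left to right; missing components count as 0.
        for (( v = 0; v < (${#ver1[@]} > ${#ver2[@]} ? ${#ver1[@]} : ${#ver2[@]}); v++ )); do
            if (( ${ver1[v]:-0} > ${ver2[v]:-0} )); then
                [[ $op == '>' ]]; return
            elif (( ${ver1[v]:-0} < ${ver2[v]:-0} )); then
                [[ $op == '<' ]]; return
            fi
        done
        # All components equal: only operators that accept equality succeed.
        [[ $op == '>=' || $op == '<=' || $op == '==' ]]
    }
    lt() { cmp_versions "$1" '<' "$2"; }

Because 1 < 2 on the first component, the comparison returns success and the LCOV_OPTS branch-coverage flags exported right after are the ones for pre-2.x lcov.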
00:11:24.329 11:41:54 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:11:24.329 11:41:54 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:11:24.329 11:41:54 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:11:24.329 11:41:54 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:11:24.329 11:41:54 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:11:24.329 11:41:54 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:11:24.329 11:41:54 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:11:24.329 11:41:54 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:11:24.329 11:41:54 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:11:24.329 11:41:54 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:f820f793-c892-4aa4-a8a4-5ed3fda41d6c 00:11:24.329 11:41:54 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@18 -- # NVME_HOSTID=f820f793-c892-4aa4-a8a4-5ed3fda41d6c 00:11:24.329 11:41:54 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:11:24.329 11:41:54 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:11:24.329 11:41:54 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:11:24.329 11:41:54 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:11:24.329 11:41:54 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:11:24.329 11:41:54 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@15 -- # shopt -s extglob 00:11:24.329 11:41:54 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:11:24.329 11:41:54 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:11:24.329 11:41:54 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:11:24.329 11:41:54 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:24.329 11:41:54 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:24.329 11:41:54 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:24.329 11:41:54 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- paths/export.sh@5 -- # export PATH 00:11:24.329 11:41:54 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:24.329 11:41:54 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@51 -- # : 0 00:11:24.329 11:41:54 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:11:24.329 11:41:54 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:11:24.329 11:41:54 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:11:24.329 11:41:54 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:11:24.329 11:41:54 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:11:24.329 11:41:54 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:11:24.329 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:11:24.329 11:41:54 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:11:24.329 11:41:54 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:11:24.329 11:41:54 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@55 -- # have_pci_nics=0 00:11:24.329 11:41:54 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@12 -- # nvmftestinit 00:11:24.329 11:41:54 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:11:24.329 11:41:54 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 
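nvmftestinit above takes the virtual (NET_TYPE=virt) path, so the target runs inside the nvmf_tgt_ns_spdk network namespace and reaches the initiator over veth pairs. A sketch of the address plan described by the variables just traced and built by the ip commands that follow; the connect line is illustrative, not lifted from this trace:

    # Address plan (all four veth ends get enslaved to the nvmf_br bridge):
    #   nvmf_init_if   10.0.0.1/24   initiator side, default netns
    #   nvmf_init_if2  10.0.0.2/24   initiator side, default netns
    #   nvmf_tgt_if    10.0.0.3/24   target side, inside netns nvmf_tgt_ns_spdk
    #   nvmf_tgt_if2   10.0.0.4/24   target side, inside netns nvmf_tgt_ns_spdk
    # so a host-side connect to the first target listener looks roughly like:
    nvme connect -t tcp -a 10.0.0.3 -s 4420 -n nqn.2016-06.io.spdk:cnode1 \
        --hostnqn="$NVME_HOSTNQN" --hostid="$NVME_HOSTID"

The four pings later in the trace (10.0.0.1 through 10.0.0.4) are simply verifying both sides of this bridge before any NVMe/TCP traffic is attempted.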
00:11:24.329 11:41:54 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@476 -- # prepare_net_devs 00:11:24.329 11:41:54 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@438 -- # local -g is_hw=no 00:11:24.329 11:41:54 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@440 -- # remove_spdk_ns 00:11:24.329 11:41:54 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:11:24.329 11:41:54 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:11:24.329 11:41:54 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:11:24.329 11:41:54 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@442 -- # [[ virt != virt ]] 00:11:24.329 11:41:54 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@444 -- # [[ no == yes ]] 00:11:24.329 11:41:54 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@451 -- # [[ virt == phy ]] 00:11:24.329 11:41:54 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@454 -- # [[ virt == phy-fallback ]] 00:11:24.329 11:41:54 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@459 -- # [[ tcp == tcp ]] 00:11:24.329 11:41:54 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@460 -- # nvmf_veth_init 00:11:24.329 11:41:54 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@145 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:11:24.329 11:41:54 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@146 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.2 00:11:24.329 11:41:54 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@147 -- # NVMF_FIRST_TARGET_IP=10.0.0.3 00:11:24.329 11:41:54 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@148 -- # NVMF_SECOND_TARGET_IP=10.0.0.4 00:11:24.329 11:41:54 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@149 -- # NVMF_INITIATOR_IP=10.0.0.1 00:11:24.329 11:41:54 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@150 -- # NVMF_BRIDGE=nvmf_br 00:11:24.329 11:41:54 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@151 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:11:24.329 11:41:54 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@152 -- # NVMF_INITIATOR_INTERFACE2=nvmf_init_if2 00:11:24.329 11:41:54 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@153 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:11:24.329 11:41:54 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@154 -- # NVMF_INITIATOR_BRIDGE2=nvmf_init_br2 00:11:24.329 11:41:54 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@155 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:11:24.329 11:41:54 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@156 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:11:24.329 11:41:54 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@157 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:11:24.329 11:41:54 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@158 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:11:24.329 11:41:54 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@159 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:11:24.329 11:41:54 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@160 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:11:24.329 11:41:54 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@162 -- # ip link set nvmf_init_br nomaster 00:11:24.329 Cannot find device "nvmf_init_br" 00:11:24.329 11:41:54 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@162 -- # true 00:11:24.329 11:41:54 
nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@163 -- # ip link set nvmf_init_br2 nomaster 00:11:24.329 Cannot find device "nvmf_init_br2" 00:11:24.329 11:41:54 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@163 -- # true 00:11:24.329 11:41:54 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@164 -- # ip link set nvmf_tgt_br nomaster 00:11:24.329 Cannot find device "nvmf_tgt_br" 00:11:24.329 11:41:54 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@164 -- # true 00:11:24.329 11:41:54 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@165 -- # ip link set nvmf_tgt_br2 nomaster 00:11:24.329 Cannot find device "nvmf_tgt_br2" 00:11:24.329 11:41:54 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@165 -- # true 00:11:24.329 11:41:54 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@166 -- # ip link set nvmf_init_br down 00:11:24.329 Cannot find device "nvmf_init_br" 00:11:24.329 11:41:54 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@166 -- # true 00:11:24.329 11:41:54 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@167 -- # ip link set nvmf_init_br2 down 00:11:24.329 Cannot find device "nvmf_init_br2" 00:11:24.329 11:41:54 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@167 -- # true 00:11:24.329 11:41:54 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@168 -- # ip link set nvmf_tgt_br down 00:11:24.587 Cannot find device "nvmf_tgt_br" 00:11:24.587 11:41:54 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@168 -- # true 00:11:24.587 11:41:54 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@169 -- # ip link set nvmf_tgt_br2 down 00:11:24.587 Cannot find device "nvmf_tgt_br2" 00:11:24.587 11:41:54 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@169 -- # true 00:11:24.587 11:41:54 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@170 -- # ip link delete nvmf_br type bridge 00:11:24.587 Cannot find device "nvmf_br" 00:11:24.587 11:41:54 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@170 -- # true 00:11:24.587 11:41:54 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@171 -- # ip link delete nvmf_init_if 00:11:24.587 Cannot find device "nvmf_init_if" 00:11:24.587 11:41:54 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@171 -- # true 00:11:24.587 11:41:54 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@172 -- # ip link delete nvmf_init_if2 00:11:24.587 Cannot find device "nvmf_init_if2" 00:11:24.587 11:41:54 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@172 -- # true 00:11:24.588 11:41:54 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@173 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:11:24.588 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:11:24.588 11:41:54 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@173 -- # true 00:11:24.588 11:41:54 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@174 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:11:24.588 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:11:24.588 11:41:54 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@174 -- # true 00:11:24.588 11:41:54 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@177 -- # ip netns add nvmf_tgt_ns_spdk 00:11:24.588 11:41:54 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@180 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:11:24.588 11:41:54 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@181 -- # ip link add nvmf_init_if2 type 
veth peer name nvmf_init_br2 00:11:24.588 11:41:54 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@182 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:11:24.588 11:41:54 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@183 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:11:24.588 11:41:54 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:11:24.588 11:41:54 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@187 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:11:24.588 11:41:54 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@190 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:11:24.588 11:41:54 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@191 -- # ip addr add 10.0.0.2/24 dev nvmf_init_if2 00:11:24.588 11:41:54 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@192 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if 00:11:24.588 11:41:54 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@193 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2 00:11:24.588 11:41:54 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@196 -- # ip link set nvmf_init_if up 00:11:24.588 11:41:54 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@197 -- # ip link set nvmf_init_if2 up 00:11:24.588 11:41:54 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@198 -- # ip link set nvmf_init_br up 00:11:24.588 11:41:54 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@199 -- # ip link set nvmf_init_br2 up 00:11:24.588 11:41:54 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@200 -- # ip link set nvmf_tgt_br up 00:11:24.588 11:41:54 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@201 -- # ip link set nvmf_tgt_br2 up 00:11:24.588 11:41:54 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@202 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:11:24.588 11:41:54 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@203 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:11:24.588 11:41:54 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@204 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:11:24.588 11:41:54 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@207 -- # ip link add nvmf_br type bridge 00:11:24.588 11:41:54 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@208 -- # ip link set nvmf_br up 00:11:24.588 11:41:54 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@211 -- # ip link set nvmf_init_br master nvmf_br 00:11:24.588 11:41:54 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@212 -- # ip link set nvmf_init_br2 master nvmf_br 00:11:24.588 11:41:54 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@213 -- # ip link set nvmf_tgt_br master nvmf_br 00:11:24.588 11:41:54 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@214 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:11:24.846 11:41:54 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@217 -- # ipts -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:11:24.846 11:41:54 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT' 00:11:24.846 11:41:54 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@218 -- # ipts -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT 00:11:24.846 11:41:54 
nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT' 00:11:24.846 11:41:54 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@219 -- # ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:11:24.846 11:41:54 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@790 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT' 00:11:24.846 11:41:54 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@222 -- # ping -c 1 10.0.0.3 00:11:24.846 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:11:24.846 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.079 ms 00:11:24.846 00:11:24.846 --- 10.0.0.3 ping statistics --- 00:11:24.846 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:11:24.846 rtt min/avg/max/mdev = 0.079/0.079/0.079/0.000 ms 00:11:24.846 11:41:54 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@223 -- # ping -c 1 10.0.0.4 00:11:24.846 PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data. 00:11:24.846 64 bytes from 10.0.0.4: icmp_seq=1 ttl=64 time=0.056 ms 00:11:24.846 00:11:24.846 --- 10.0.0.4 ping statistics --- 00:11:24.846 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:11:24.846 rtt min/avg/max/mdev = 0.056/0.056/0.056/0.000 ms 00:11:24.846 11:41:54 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@224 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:11:24.846 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:11:24.846 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.024 ms 00:11:24.846 00:11:24.846 --- 10.0.0.1 ping statistics --- 00:11:24.846 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:11:24.846 rtt min/avg/max/mdev = 0.024/0.024/0.024/0.000 ms 00:11:24.846 11:41:54 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@225 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.2 00:11:24.846 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:11:24.846 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.125 ms 00:11:24.846 00:11:24.846 --- 10.0.0.2 ping statistics --- 00:11:24.846 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:11:24.846 rtt min/avg/max/mdev = 0.125/0.125/0.125/0.000 ms 00:11:24.846 11:41:54 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@227 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:11:24.846 11:41:54 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@461 -- # return 0 00:11:24.846 11:41:54 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:11:24.846 11:41:54 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:11:24.846 11:41:54 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:11:24.846 11:41:54 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:11:24.846 11:41:54 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:11:24.846 11:41:54 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:11:24.846 11:41:54 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:11:24.846 11:41:54 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@13 -- # nvmfappstart -m 0x2 00:11:24.846 11:41:54 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:11:24.846 11:41:54 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@726 -- # xtrace_disable 00:11:24.846 11:41:54 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:11:24.846 11:41:54 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@509 -- # nvmfpid=79429 00:11:24.846 11:41:54 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@510 -- # waitforlisten 79429 00:11:24.846 11:41:54 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@835 -- # '[' -z 79429 ']' 00:11:24.846 11:41:54 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:11:24.846 11:41:54 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@840 -- # local max_retries=100 00:11:24.846 11:41:54 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@508 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:11:24.846 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:11:24.846 11:41:54 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:11:24.846 11:41:54 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@844 -- # xtrace_disable 00:11:24.846 11:41:54 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:11:24.846 [2024-11-28 11:41:54.847101] Starting SPDK v25.01-pre git sha1 35cd3e84d / DPDK 24.11.0-rc4 initialization... 00:11:24.846 [2024-11-28 11:41:54.847202] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:11:25.114 [2024-11-28 11:41:54.979333] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.11.0-rc4 is used. There is no support for it in SPDK. 
Enabled only for validation. 00:11:25.114 [2024-11-28 11:41:55.001142] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:11:25.114 [2024-11-28 11:41:55.044974] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:11:25.114 [2024-11-28 11:41:55.045305] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:11:25.114 [2024-11-28 11:41:55.045448] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:11:25.114 [2024-11-28 11:41:55.045577] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:11:25.114 [2024-11-28 11:41:55.045612] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:11:25.114 [2024-11-28 11:41:55.046072] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:11:25.114 [2024-11-28 11:41:55.102933] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:11:25.114 11:41:55 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:11:25.114 11:41:55 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@868 -- # return 0 00:11:25.114 11:41:55 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:11:25.114 11:41:55 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@732 -- # xtrace_disable 00:11:25.114 11:41:55 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:11:25.114 11:41:55 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:11:25.114 11:41:55 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@15 -- # '[' tcp '!=' tcp ']' 00:11:25.114 11:41:55 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@22 -- # rpc_cmd nvmf_create_transport -t tcp -o -c 0 --zcopy 00:11:25.114 11:41:55 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:25.114 11:41:55 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:11:25.115 [2024-11-28 11:41:55.219077] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:11:25.115 11:41:55 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:25.115 11:41:55 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@24 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:11:25.115 11:41:55 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:25.115 11:41:55 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:11:25.115 11:41:55 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:25.115 11:41:55 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 00:11:25.115 11:41:55 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:25.115 11:41:55 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:11:25.115 [2024-11-28 11:41:55.235184] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:11:25.376 11:41:55 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@591 -- # 
[[ 0 == 0 ]] 00:11:25.376 11:41:55 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.3 -s 4420 00:11:25.376 11:41:55 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:25.376 11:41:55 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:11:25.376 11:41:55 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:25.376 11:41:55 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@29 -- # rpc_cmd bdev_malloc_create 32 4096 -b malloc0 00:11:25.376 11:41:55 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:25.376 11:41:55 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:11:25.376 malloc0 00:11:25.376 11:41:55 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:25.376 11:41:55 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@30 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:11:25.376 11:41:55 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:25.376 11:41:55 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:11:25.376 11:41:55 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:25.376 11:41:55 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@33 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /dev/fd/62 -t 10 -q 128 -w verify -o 8192 00:11:25.376 11:41:55 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@33 -- # gen_nvmf_target_json 00:11:25.376 11:41:55 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@560 -- # config=() 00:11:25.376 11:41:55 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@560 -- # local subsystem config 00:11:25.376 11:41:55 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:11:25.376 11:41:55 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:11:25.376 { 00:11:25.376 "params": { 00:11:25.376 "name": "Nvme$subsystem", 00:11:25.376 "trtype": "$TEST_TRANSPORT", 00:11:25.376 "traddr": "$NVMF_FIRST_TARGET_IP", 00:11:25.376 "adrfam": "ipv4", 00:11:25.376 "trsvcid": "$NVMF_PORT", 00:11:25.376 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:11:25.376 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:11:25.376 "hdgst": ${hdgst:-false}, 00:11:25.376 "ddgst": ${ddgst:-false} 00:11:25.376 }, 00:11:25.376 "method": "bdev_nvme_attach_controller" 00:11:25.376 } 00:11:25.376 EOF 00:11:25.376 )") 00:11:25.376 11:41:55 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@582 -- # cat 00:11:25.376 11:41:55 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@584 -- # jq . 
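Stripped of the rpc_cmd wrapper expansions, the target-side setup for this zcopy run reduces to the following RPC sequence against the nvmf_tgt just started in the namespace. The $rpc shorthand is ours; the arguments are the ones visible in the trace:

    rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
    $rpc nvmf_create_transport -t tcp -o -c 0 --zcopy        # TCP transport with zero-copy enabled
    $rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10
    $rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420
    $rpc nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.3 -s 4420
    $rpc bdev_malloc_create 32 4096 -b malloc0               # 32 MiB ramdisk, 4096-byte blocks
    $rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1

bdevperf then attaches to that subsystem over 10.0.0.3:4420 using the bdev_nvme_attach_controller JSON assembled by gen_nvmf_target_json and printed in the trace right after this point.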
00:11:25.376 11:41:55 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@585 -- # IFS=, 00:11:25.376 11:41:55 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:11:25.376 "params": { 00:11:25.376 "name": "Nvme1", 00:11:25.376 "trtype": "tcp", 00:11:25.376 "traddr": "10.0.0.3", 00:11:25.376 "adrfam": "ipv4", 00:11:25.376 "trsvcid": "4420", 00:11:25.376 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:11:25.376 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:11:25.376 "hdgst": false, 00:11:25.376 "ddgst": false 00:11:25.376 }, 00:11:25.376 "method": "bdev_nvme_attach_controller" 00:11:25.376 }' 00:11:25.376 [2024-11-28 11:41:55.333147] Starting SPDK v25.01-pre git sha1 35cd3e84d / DPDK 24.11.0-rc4 initialization... 00:11:25.376 [2024-11-28 11:41:55.333241] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid79449 ] 00:11:25.376 [2024-11-28 11:41:55.460719] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.11.0-rc4 is used. There is no support for it in SPDK. Enabled only for validation. 00:11:25.376 [2024-11-28 11:41:55.491279] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:11:25.634 [2024-11-28 11:41:55.544076] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:11:25.634 [2024-11-28 11:41:55.611672] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:11:25.634 Running I/O for 10 seconds... 00:11:27.943 5622.00 IOPS, 43.92 MiB/s [2024-11-28T11:41:59.004Z] 5713.00 IOPS, 44.63 MiB/s [2024-11-28T11:41:59.939Z] 5689.67 IOPS, 44.45 MiB/s [2024-11-28T11:42:00.877Z] 5683.00 IOPS, 44.40 MiB/s [2024-11-28T11:42:01.813Z] 5683.20 IOPS, 44.40 MiB/s [2024-11-28T11:42:02.748Z] 5693.17 IOPS, 44.48 MiB/s [2024-11-28T11:42:04.127Z] 5719.00 IOPS, 44.68 MiB/s [2024-11-28T11:42:05.064Z] 5737.12 IOPS, 44.82 MiB/s [2024-11-28T11:42:06.001Z] 5721.44 IOPS, 44.70 MiB/s [2024-11-28T11:42:06.001Z] 5699.10 IOPS, 44.52 MiB/s 00:11:35.875 Latency(us) 00:11:35.875 [2024-11-28T11:42:06.001Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:11:35.875 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 8192) 00:11:35.875 Verification LBA range: start 0x0 length 0x1000 00:11:35.875 Nvme1n1 : 10.02 5701.40 44.54 0.00 0.00 22378.72 2100.13 32648.84 00:11:35.875 [2024-11-28T11:42:06.001Z] =================================================================================================================== 00:11:35.875 [2024-11-28T11:42:06.001Z] Total : 5701.40 44.54 0.00 0.00 22378.72 2100.13 32648.84 00:11:35.875 11:42:05 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@39 -- # perfpid=79572 00:11:35.875 11:42:05 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@41 -- # xtrace_disable 00:11:35.875 11:42:05 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:11:35.875 11:42:05 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@37 -- # gen_nvmf_target_json 00:11:35.875 11:42:05 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@37 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /dev/fd/63 -t 5 -q 128 -w randrw -M 50 -o 8192 00:11:35.875 11:42:05 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@560 -- # config=() 00:11:35.875 11:42:05 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@560 -- # 
local subsystem config 00:11:35.875 11:42:05 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:11:35.875 11:42:05 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:11:35.875 { 00:11:35.875 "params": { 00:11:35.875 "name": "Nvme$subsystem", 00:11:35.875 "trtype": "$TEST_TRANSPORT", 00:11:35.875 "traddr": "$NVMF_FIRST_TARGET_IP", 00:11:35.875 "adrfam": "ipv4", 00:11:35.875 "trsvcid": "$NVMF_PORT", 00:11:35.875 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:11:35.875 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:11:35.875 "hdgst": ${hdgst:-false}, 00:11:35.875 "ddgst": ${ddgst:-false} 00:11:35.875 }, 00:11:35.875 "method": "bdev_nvme_attach_controller" 00:11:35.875 } 00:11:35.875 EOF 00:11:35.875 )") 00:11:35.875 11:42:05 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@582 -- # cat 00:11:35.875 [2024-11-28 11:42:05.962824] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:35.875 [2024-11-28 11:42:05.962894] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:35.875 11:42:05 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@584 -- # jq . 00:11:35.875 11:42:05 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@585 -- # IFS=, 00:11:35.875 11:42:05 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:11:35.875 "params": { 00:11:35.875 "name": "Nvme1", 00:11:35.875 "trtype": "tcp", 00:11:35.875 "traddr": "10.0.0.3", 00:11:35.875 "adrfam": "ipv4", 00:11:35.875 "trsvcid": "4420", 00:11:35.875 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:11:35.875 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:11:35.875 "hdgst": false, 00:11:35.875 "ddgst": false 00:11:35.875 }, 00:11:35.875 "method": "bdev_nvme_attach_controller" 00:11:35.875 }' 00:11:35.875 [2024-11-28 11:42:05.974799] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:35.875 [2024-11-28 11:42:05.974843] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:35.875 [2024-11-28 11:42:05.986823] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:35.875 [2024-11-28 11:42:05.986867] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:35.875 [2024-11-28 11:42:05.998785] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:35.875 [2024-11-28 11:42:05.998835] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:36.135 [2024-11-28 11:42:06.010797] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:36.135 [2024-11-28 11:42:06.010858] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:36.135 [2024-11-28 11:42:06.016860] Starting SPDK v25.01-pre git sha1 35cd3e84d / DPDK 24.11.0-rc4 initialization... 
00:11:36.135 [2024-11-28 11:42:06.016944] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid79572 ] 00:11:36.135 [2024-11-28 11:42:06.022796] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:36.135 [2024-11-28 11:42:06.022840] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:36.135 [2024-11-28 11:42:06.034818] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:36.135 [2024-11-28 11:42:06.034862] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:36.135 [2024-11-28 11:42:06.046819] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:36.135 [2024-11-28 11:42:06.046864] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:36.135 [2024-11-28 11:42:06.058815] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:36.135 [2024-11-28 11:42:06.058858] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:36.135 [2024-11-28 11:42:06.070849] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:36.135 [2024-11-28 11:42:06.070891] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:36.135 [2024-11-28 11:42:06.082861] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:36.135 [2024-11-28 11:42:06.082915] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:36.135 [2024-11-28 11:42:06.094859] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:36.135 [2024-11-28 11:42:06.094904] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:36.135 [2024-11-28 11:42:06.106861] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:36.135 [2024-11-28 11:42:06.106911] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:36.135 [2024-11-28 11:42:06.118861] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:36.135 [2024-11-28 11:42:06.118903] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:36.135 [2024-11-28 11:42:06.130873] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:36.135 [2024-11-28 11:42:06.130926] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:36.135 [2024-11-28 11:42:06.142883] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:36.135 [2024-11-28 11:42:06.142934] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:36.135 [2024-11-28 11:42:06.144539] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.11.0-rc4 is used. There is no support for it in SPDK. Enabled only for validation. 
00:11:36.135 [2024-11-28 11:42:06.154868] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:36.135 [2024-11-28 11:42:06.154910] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:36.135 [2024-11-28 11:42:06.166882] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:36.135 [2024-11-28 11:42:06.166926] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:36.135 [2024-11-28 11:42:06.176655] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:11:36.135 [2024-11-28 11:42:06.178885] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:36.135 [2024-11-28 11:42:06.178931] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:36.135 [2024-11-28 11:42:06.190928] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:36.135 [2024-11-28 11:42:06.190978] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:36.135 [2024-11-28 11:42:06.202927] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:36.135 [2024-11-28 11:42:06.202968] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:36.135 [2024-11-28 11:42:06.210887] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:36.135 [2024-11-28 11:42:06.210928] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:36.135 [2024-11-28 11:42:06.222887] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:36.135 [2024-11-28 11:42:06.222940] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:36.135 [2024-11-28 11:42:06.234891] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:36.135 [2024-11-28 11:42:06.234945] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:36.135 [2024-11-28 11:42:06.236157] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:11:36.135 [2024-11-28 11:42:06.246949] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:36.135 [2024-11-28 11:42:06.247007] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:36.135 [2024-11-28 11:42:06.258941] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:36.135 [2024-11-28 11:42:06.258991] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:36.394 [2024-11-28 11:42:06.270941] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:36.394 [2024-11-28 11:42:06.270987] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:36.394 [2024-11-28 11:42:06.282937] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:36.394 [2024-11-28 11:42:06.282997] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:36.394 [2024-11-28 11:42:06.294926] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:36.394 [2024-11-28 11:42:06.294976] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:36.394 [2024-11-28 11:42:06.306940] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:36.394 [2024-11-28 11:42:06.307012] 
nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:36.394 [2024-11-28 11:42:06.314817] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:11:36.394 [2024-11-28 11:42:06.318941] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:36.394 [2024-11-28 11:42:06.318982] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:36.394 [2024-11-28 11:42:06.330976] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:36.394 [2024-11-28 11:42:06.331078] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:36.394 [2024-11-28 11:42:06.342937] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:36.394 [2024-11-28 11:42:06.342998] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:36.394 [2024-11-28 11:42:06.354939] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:36.394 [2024-11-28 11:42:06.354986] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:36.394 [2024-11-28 11:42:06.366947] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:36.394 [2024-11-28 11:42:06.366992] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:36.394 [2024-11-28 11:42:06.378978] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:36.394 [2024-11-28 11:42:06.379022] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:36.394 [2024-11-28 11:42:06.390973] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:36.394 [2024-11-28 11:42:06.391020] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:36.394 [2024-11-28 11:42:06.403007] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:36.394 [2024-11-28 11:42:06.403063] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:36.394 [2024-11-28 11:42:06.415023] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:36.394 [2024-11-28 11:42:06.415085] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:36.394 [2024-11-28 11:42:06.427029] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:36.394 [2024-11-28 11:42:06.427078] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:36.394 Running I/O for 5 seconds... 
00:11:36.394 [2024-11-28 11:42:06.439039] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:36.394 [2024-11-28 11:42:06.439083] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:36.394 [2024-11-28 11:42:06.458026] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:36.394 [2024-11-28 11:42:06.458104] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:36.394 [2024-11-28 11:42:06.473554] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:36.394 [2024-11-28 11:42:06.473594] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:36.394 [2024-11-28 11:42:06.489703] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:36.394 [2024-11-28 11:42:06.489749] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:36.394 [2024-11-28 11:42:06.500092] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:36.394 [2024-11-28 11:42:06.500138] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:36.394 [2024-11-28 11:42:06.516839] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:36.394 [2024-11-28 11:42:06.516885] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:36.653 [2024-11-28 11:42:06.531890] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:36.653 [2024-11-28 11:42:06.531930] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:36.653 [2024-11-28 11:42:06.548077] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:36.653 [2024-11-28 11:42:06.548123] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:36.653 [2024-11-28 11:42:06.564063] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:36.653 [2024-11-28 11:42:06.564109] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:36.653 [2024-11-28 11:42:06.579640] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:36.653 [2024-11-28 11:42:06.579687] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:36.653 [2024-11-28 11:42:06.593886] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:36.653 [2024-11-28 11:42:06.593932] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:36.653 [2024-11-28 11:42:06.609140] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:36.653 [2024-11-28 11:42:06.609187] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:36.653 [2024-11-28 11:42:06.624465] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:36.653 [2024-11-28 11:42:06.624510] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:36.653 [2024-11-28 11:42:06.640410] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:36.653 [2024-11-28 11:42:06.640466] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:36.653 [2024-11-28 11:42:06.651044] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:36.653 
[2024-11-28 11:42:06.651092] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:36.653 [2024-11-28 11:42:06.667273] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:36.653 [2024-11-28 11:42:06.667331] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:36.653 [2024-11-28 11:42:06.682127] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:36.653 [2024-11-28 11:42:06.682186] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:36.653 [2024-11-28 11:42:06.698178] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:36.653 [2024-11-28 11:42:06.698220] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:36.653 [2024-11-28 11:42:06.709294] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:36.654 [2024-11-28 11:42:06.709361] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:36.654 [2024-11-28 11:42:06.724585] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:36.654 [2024-11-28 11:42:06.724638] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:36.654 [2024-11-28 11:42:06.739558] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:36.654 [2024-11-28 11:42:06.739599] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:36.654 [2024-11-28 11:42:06.754952] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:36.654 [2024-11-28 11:42:06.754995] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:36.654 [2024-11-28 11:42:06.769851] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:36.654 [2024-11-28 11:42:06.769909] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:36.932 [2024-11-28 11:42:06.785000] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:36.932 [2024-11-28 11:42:06.785073] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:36.932 [2024-11-28 11:42:06.800531] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:36.932 [2024-11-28 11:42:06.800579] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:36.932 [2024-11-28 11:42:06.816558] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:36.932 [2024-11-28 11:42:06.816601] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:36.932 [2024-11-28 11:42:06.833445] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:36.932 [2024-11-28 11:42:06.833488] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:36.932 [2024-11-28 11:42:06.850562] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:36.932 [2024-11-28 11:42:06.850608] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:36.932 [2024-11-28 11:42:06.864605] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:36.932 [2024-11-28 11:42:06.864666] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:36.932 [2024-11-28 11:42:06.879524] 
subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:36.932 [2024-11-28 11:42:06.879566] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:36.932 [2024-11-28 11:42:06.894643] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:36.932 [2024-11-28 11:42:06.894691] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:36.932 [2024-11-28 11:42:06.909969] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:36.932 [2024-11-28 11:42:06.910011] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:36.932 [2024-11-28 11:42:06.919660] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:36.932 [2024-11-28 11:42:06.919700] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:36.932 [2024-11-28 11:42:06.936093] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:36.932 [2024-11-28 11:42:06.936147] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:36.932 [2024-11-28 11:42:06.951577] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:36.932 [2024-11-28 11:42:06.951614] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:36.932 [2024-11-28 11:42:06.967292] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:36.932 [2024-11-28 11:42:06.967340] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:36.932 [2024-11-28 11:42:06.984485] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:36.932 [2024-11-28 11:42:06.984539] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:36.932 [2024-11-28 11:42:07.001663] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:36.932 [2024-11-28 11:42:07.001702] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:36.932 [2024-11-28 11:42:07.016965] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:36.932 [2024-11-28 11:42:07.017016] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:36.932 [2024-11-28 11:42:07.027199] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:36.932 [2024-11-28 11:42:07.027236] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:37.197 [2024-11-28 11:42:07.043931] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:37.197 [2024-11-28 11:42:07.043969] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:37.197 [2024-11-28 11:42:07.058796] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:37.197 [2024-11-28 11:42:07.058839] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:37.197 [2024-11-28 11:42:07.074880] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:37.197 [2024-11-28 11:42:07.074918] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:37.197 [2024-11-28 11:42:07.091587] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:37.197 [2024-11-28 11:42:07.091626] 
nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:37.197 [2024-11-28 11:42:07.108857] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:37.197 [2024-11-28 11:42:07.108894] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:37.197 [2024-11-28 11:42:07.125184] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:37.197 [2024-11-28 11:42:07.125221] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:37.197 [2024-11-28 11:42:07.142214] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:37.197 [2024-11-28 11:42:07.142266] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:37.197 [2024-11-28 11:42:07.157001] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:37.197 [2024-11-28 11:42:07.157038] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:37.197 [2024-11-28 11:42:07.173292] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:37.197 [2024-11-28 11:42:07.173361] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:37.197 [2024-11-28 11:42:07.190264] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:37.197 [2024-11-28 11:42:07.190340] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:37.197 [2024-11-28 11:42:07.206146] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:37.197 [2024-11-28 11:42:07.206192] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:37.197 [2024-11-28 11:42:07.223277] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:37.197 [2024-11-28 11:42:07.223327] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:37.197 [2024-11-28 11:42:07.239937] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:37.197 [2024-11-28 11:42:07.239974] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:37.197 [2024-11-28 11:42:07.255230] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:37.197 [2024-11-28 11:42:07.255268] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:37.197 [2024-11-28 11:42:07.270505] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:37.197 [2024-11-28 11:42:07.270537] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:37.197 [2024-11-28 11:42:07.280804] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:37.197 [2024-11-28 11:42:07.280841] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:37.197 [2024-11-28 11:42:07.295996] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:37.197 [2024-11-28 11:42:07.296077] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:37.197 [2024-11-28 11:42:07.313923] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:37.197 [2024-11-28 11:42:07.313961] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:37.455 [2024-11-28 11:42:07.328908] 
subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:37.455 [2024-11-28 11:42:07.328946] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:37.455 [2024-11-28 11:42:07.344554] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:37.455 [2024-11-28 11:42:07.344614] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:37.455 [2024-11-28 11:42:07.353728] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:37.455 [2024-11-28 11:42:07.353776] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:37.455 [2024-11-28 11:42:07.370231] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:37.455 [2024-11-28 11:42:07.370270] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:37.455 [2024-11-28 11:42:07.387265] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:37.455 [2024-11-28 11:42:07.387320] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:37.455 [2024-11-28 11:42:07.404406] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:37.455 [2024-11-28 11:42:07.404450] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:37.455 [2024-11-28 11:42:07.420757] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:37.455 [2024-11-28 11:42:07.420808] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:37.455 [2024-11-28 11:42:07.437168] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:37.455 [2024-11-28 11:42:07.437217] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:37.455 10594.00 IOPS, 82.77 MiB/s [2024-11-28T11:42:07.581Z] [2024-11-28 11:42:07.454059] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:37.455 [2024-11-28 11:42:07.454111] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:37.455 [2024-11-28 11:42:07.469690] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:37.455 [2024-11-28 11:42:07.469737] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:37.455 [2024-11-28 11:42:07.480085] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:37.455 [2024-11-28 11:42:07.480138] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:37.455 [2024-11-28 11:42:07.495841] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:37.455 [2024-11-28 11:42:07.495916] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:37.455 [2024-11-28 11:42:07.510700] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:37.455 [2024-11-28 11:42:07.510755] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:37.455 [2024-11-28 11:42:07.525082] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:37.455 [2024-11-28 11:42:07.525145] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:37.455 [2024-11-28 11:42:07.540828] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 
00:11:37.455 [2024-11-28 11:42:07.540890] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:37.456 [2024-11-28 11:42:07.556460] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:37.456 [2024-11-28 11:42:07.556510] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:37.456 [2024-11-28 11:42:07.576000] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:37.456 [2024-11-28 11:42:07.576054] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:37.714 [2024-11-28 11:42:07.591966] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:37.714 [2024-11-28 11:42:07.592005] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:37.714 [2024-11-28 11:42:07.607286] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:37.714 [2024-11-28 11:42:07.607337] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:37.714 [2024-11-28 11:42:07.618109] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:37.714 [2024-11-28 11:42:07.618161] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:37.714 [2024-11-28 11:42:07.634103] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:37.714 [2024-11-28 11:42:07.634148] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:37.714 [2024-11-28 11:42:07.649354] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:37.714 [2024-11-28 11:42:07.649422] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:37.714 [2024-11-28 11:42:07.659484] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:37.714 [2024-11-28 11:42:07.659534] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:37.714 [2024-11-28 11:42:07.675564] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:37.714 [2024-11-28 11:42:07.675602] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:37.714 [2024-11-28 11:42:07.691347] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:37.714 [2024-11-28 11:42:07.691384] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:37.714 [2024-11-28 11:42:07.701404] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:37.714 [2024-11-28 11:42:07.701469] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:37.714 [2024-11-28 11:42:07.717338] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:37.714 [2024-11-28 11:42:07.717415] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:37.714 [2024-11-28 11:42:07.733279] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:37.714 [2024-11-28 11:42:07.733339] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:37.714 [2024-11-28 11:42:07.750086] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:37.714 [2024-11-28 11:42:07.750127] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:37.714 [2024-11-28 11:42:07.766365] 
subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:37.714 [2024-11-28 11:42:07.766439] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:37.714 [2024-11-28 11:42:07.783577] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:37.714 [2024-11-28 11:42:07.783632] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:37.714 [2024-11-28 11:42:07.799971] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:37.714 [2024-11-28 11:42:07.800011] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:37.714 [2024-11-28 11:42:07.816241] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:37.714 [2024-11-28 11:42:07.816322] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:37.714 [2024-11-28 11:42:07.832477] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:37.714 [2024-11-28 11:42:07.832515] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:37.972 [2024-11-28 11:42:07.842559] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:37.972 [2024-11-28 11:42:07.842597] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:37.972 [2024-11-28 11:42:07.858212] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:37.972 [2024-11-28 11:42:07.858263] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:37.972 [2024-11-28 11:42:07.875315] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:37.972 [2024-11-28 11:42:07.875413] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:37.972 [2024-11-28 11:42:07.892061] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:37.972 [2024-11-28 11:42:07.892111] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:37.972 [2024-11-28 11:42:07.908188] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:37.972 [2024-11-28 11:42:07.908311] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:37.972 [2024-11-28 11:42:07.924968] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:37.972 [2024-11-28 11:42:07.925006] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:37.972 [2024-11-28 11:42:07.942177] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:37.972 [2024-11-28 11:42:07.942228] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:37.972 [2024-11-28 11:42:07.956685] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:37.972 [2024-11-28 11:42:07.956749] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:37.972 [2024-11-28 11:42:07.973008] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:37.972 [2024-11-28 11:42:07.973047] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:37.972 [2024-11-28 11:42:07.989851] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:37.972 [2024-11-28 11:42:07.989918] 
nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:37.972 [2024-11-28 11:42:08.005788] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:37.972 [2024-11-28 11:42:08.005828] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:37.972 [2024-11-28 11:42:08.025469] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:37.972 [2024-11-28 11:42:08.025511] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:37.972 [2024-11-28 11:42:08.040239] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:37.972 [2024-11-28 11:42:08.040305] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:37.972 [2024-11-28 11:42:08.050194] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:37.972 [2024-11-28 11:42:08.050238] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:37.972 [2024-11-28 11:42:08.066009] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:37.972 [2024-11-28 11:42:08.066090] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:37.972 [2024-11-28 11:42:08.082669] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:37.972 [2024-11-28 11:42:08.082708] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:38.230 [2024-11-28 11:42:08.100082] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:38.230 [2024-11-28 11:42:08.100123] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:38.230 [2024-11-28 11:42:08.115223] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:38.230 [2024-11-28 11:42:08.115288] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:38.230 [2024-11-28 11:42:08.132024] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:38.230 [2024-11-28 11:42:08.132079] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:38.230 [2024-11-28 11:42:08.149077] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:38.230 [2024-11-28 11:42:08.149133] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:38.230 [2024-11-28 11:42:08.165423] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:38.230 [2024-11-28 11:42:08.165466] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:38.230 [2024-11-28 11:42:08.182477] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:38.230 [2024-11-28 11:42:08.182530] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:38.230 [2024-11-28 11:42:08.199122] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:38.230 [2024-11-28 11:42:08.199179] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:38.230 [2024-11-28 11:42:08.215369] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:38.230 [2024-11-28 11:42:08.215434] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:38.230 [2024-11-28 11:42:08.234757] 
subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:38.230 [2024-11-28 11:42:08.234810] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:38.230 [2024-11-28 11:42:08.248935] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:38.230 [2024-11-28 11:42:08.248986] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:38.230 [2024-11-28 11:42:08.265106] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:38.230 [2024-11-28 11:42:08.265157] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:38.230 [2024-11-28 11:42:08.281710] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:38.230 [2024-11-28 11:42:08.281746] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:38.230 [2024-11-28 11:42:08.297783] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:38.230 [2024-11-28 11:42:08.297821] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:38.230 [2024-11-28 11:42:08.307853] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:38.230 [2024-11-28 11:42:08.307900] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:38.230 [2024-11-28 11:42:08.325155] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:38.230 [2024-11-28 11:42:08.325208] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:38.230 [2024-11-28 11:42:08.339504] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:38.230 [2024-11-28 11:42:08.339556] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:38.488 [2024-11-28 11:42:08.354728] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:38.488 [2024-11-28 11:42:08.354786] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:38.488 [2024-11-28 11:42:08.365351] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:38.488 [2024-11-28 11:42:08.365415] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:38.488 [2024-11-28 11:42:08.380761] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:38.488 [2024-11-28 11:42:08.380812] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:38.488 [2024-11-28 11:42:08.395688] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:38.488 [2024-11-28 11:42:08.395752] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:38.488 [2024-11-28 11:42:08.411124] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:38.488 [2024-11-28 11:42:08.411163] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:38.488 [2024-11-28 11:42:08.421012] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:38.488 [2024-11-28 11:42:08.421052] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:38.488 [2024-11-28 11:42:08.436428] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:38.488 [2024-11-28 11:42:08.436478] 
nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:38.488 10615.50 IOPS, 82.93 MiB/s [2024-11-28T11:42:08.614Z] [2024-11-28 11:42:08.453442] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:38.488 [2024-11-28 11:42:08.453508] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:38.488 [2024-11-28 11:42:08.469680] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:38.488 [2024-11-28 11:42:08.469718] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:38.488 [2024-11-28 11:42:08.488715] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:38.488 [2024-11-28 11:42:08.488770] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:38.488 [2024-11-28 11:42:08.504285] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:38.488 [2024-11-28 11:42:08.504344] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:38.488 [2024-11-28 11:42:08.522447] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:38.488 [2024-11-28 11:42:08.522485] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:38.488 [2024-11-28 11:42:08.538300] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:38.488 [2024-11-28 11:42:08.538405] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:38.488 [2024-11-28 11:42:08.556968] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:38.488 [2024-11-28 11:42:08.557007] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:38.488 [2024-11-28 11:42:08.571215] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:38.488 [2024-11-28 11:42:08.571254] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:38.488 [2024-11-28 11:42:08.587099] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:38.488 [2024-11-28 11:42:08.587150] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:38.488 [2024-11-28 11:42:08.597050] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:38.488 [2024-11-28 11:42:08.597101] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:38.746 [2024-11-28 11:42:08.613098] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:38.746 [2024-11-28 11:42:08.613151] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:38.746 [2024-11-28 11:42:08.629026] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:38.746 [2024-11-28 11:42:08.629066] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:38.746 [2024-11-28 11:42:08.639579] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:38.746 [2024-11-28 11:42:08.639628] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:38.746 [2024-11-28 11:42:08.655072] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:38.746 [2024-11-28 11:42:08.655111] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:38.746 [2024-11-28 
11:42:08.670681] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:38.746 [2024-11-28 11:42:08.670719] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:38.746 [2024-11-28 11:42:08.687513] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:38.746 [2024-11-28 11:42:08.687566] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:38.746 [2024-11-28 11:42:08.705425] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:38.746 [2024-11-28 11:42:08.705464] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:38.746 [2024-11-28 11:42:08.719324] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:38.746 [2024-11-28 11:42:08.719375] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:38.746 [2024-11-28 11:42:08.735895] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:38.746 [2024-11-28 11:42:08.735939] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:38.746 [2024-11-28 11:42:08.752980] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:38.746 [2024-11-28 11:42:08.753019] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:38.747 [2024-11-28 11:42:08.768923] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:38.747 [2024-11-28 11:42:08.768978] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:38.747 [2024-11-28 11:42:08.786159] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:38.747 [2024-11-28 11:42:08.786197] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:38.747 [2024-11-28 11:42:08.801543] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:38.747 [2024-11-28 11:42:08.801600] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:38.747 [2024-11-28 11:42:08.811786] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:38.747 [2024-11-28 11:42:08.811839] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:38.747 [2024-11-28 11:42:08.828539] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:38.747 [2024-11-28 11:42:08.828576] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:38.747 [2024-11-28 11:42:08.843255] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:38.747 [2024-11-28 11:42:08.843310] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:38.747 [2024-11-28 11:42:08.859587] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:38.747 [2024-11-28 11:42:08.859632] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:39.006 [2024-11-28 11:42:08.875911] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:39.006 [2024-11-28 11:42:08.875963] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:39.006 [2024-11-28 11:42:08.891644] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:39.006 [2024-11-28 11:42:08.891683] 
nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:39.006 [2024-11-28 11:42:08.902047] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:39.006 [2024-11-28 11:42:08.902118] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:39.006 [2024-11-28 11:42:08.918124] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:39.006 [2024-11-28 11:42:08.918196] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:39.006 [2024-11-28 11:42:08.934243] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:39.006 [2024-11-28 11:42:08.934283] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:39.006 [2024-11-28 11:42:08.951863] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:39.006 [2024-11-28 11:42:08.951920] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:39.006 [2024-11-28 11:42:08.967627] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:39.006 [2024-11-28 11:42:08.967666] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:39.006 [2024-11-28 11:42:08.984781] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:39.006 [2024-11-28 11:42:08.984826] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:39.006 [2024-11-28 11:42:09.001081] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:39.006 [2024-11-28 11:42:09.001120] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:39.006 [2024-11-28 11:42:09.017929] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:39.006 [2024-11-28 11:42:09.017981] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:39.006 [2024-11-28 11:42:09.034087] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:39.006 [2024-11-28 11:42:09.034129] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:39.006 [2024-11-28 11:42:09.051446] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:39.006 [2024-11-28 11:42:09.051496] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:39.006 [2024-11-28 11:42:09.068333] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:39.006 [2024-11-28 11:42:09.068382] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:39.006 [2024-11-28 11:42:09.085245] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:39.006 [2024-11-28 11:42:09.085283] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:39.006 [2024-11-28 11:42:09.101676] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:39.006 [2024-11-28 11:42:09.101746] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:39.006 [2024-11-28 11:42:09.117566] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:39.006 [2024-11-28 11:42:09.117603] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:39.006 [2024-11-28 11:42:09.127119] 
subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:39.006 [2024-11-28 11:42:09.127157] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:39.264 [2024-11-28 11:42:09.143400] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:39.264 [2024-11-28 11:42:09.143437] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:39.264 [2024-11-28 11:42:09.159711] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:39.264 [2024-11-28 11:42:09.159763] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:39.264 [2024-11-28 11:42:09.177085] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:39.264 [2024-11-28 11:42:09.177137] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:39.264 [2024-11-28 11:42:09.193997] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:39.264 [2024-11-28 11:42:09.194049] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:39.264 [2024-11-28 11:42:09.210466] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:39.264 [2024-11-28 11:42:09.210502] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:39.264 [2024-11-28 11:42:09.227230] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:39.264 [2024-11-28 11:42:09.227310] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:39.264 [2024-11-28 11:42:09.244134] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:39.264 [2024-11-28 11:42:09.244184] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:39.264 [2024-11-28 11:42:09.259129] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:39.264 [2024-11-28 11:42:09.259191] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:39.264 [2024-11-28 11:42:09.275619] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:39.264 [2024-11-28 11:42:09.275655] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:39.264 [2024-11-28 11:42:09.292611] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:39.264 [2024-11-28 11:42:09.292648] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:39.264 [2024-11-28 11:42:09.309014] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:39.264 [2024-11-28 11:42:09.309064] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:39.264 [2024-11-28 11:42:09.326476] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:39.264 [2024-11-28 11:42:09.326513] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:39.264 [2024-11-28 11:42:09.342717] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:39.264 [2024-11-28 11:42:09.342798] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:39.264 [2024-11-28 11:42:09.359800] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:39.264 [2024-11-28 11:42:09.359838] 
nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:39.264 [2024-11-28 11:42:09.377291] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:39.264 [2024-11-28 11:42:09.377356] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:39.524 [2024-11-28 11:42:09.393531] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:39.524 [2024-11-28 11:42:09.393567] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:39.524 [2024-11-28 11:42:09.410187] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:39.524 [2024-11-28 11:42:09.410224] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:39.524 [2024-11-28 11:42:09.427164] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:39.524 [2024-11-28 11:42:09.427215] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:39.524 10724.67 IOPS, 83.79 MiB/s [2024-11-28T11:42:09.650Z] [2024-11-28 11:42:09.443093] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:39.524 [2024-11-28 11:42:09.443143] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:39.524 [2024-11-28 11:42:09.461728] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:39.524 [2024-11-28 11:42:09.461794] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:39.524 [2024-11-28 11:42:09.476694] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:39.524 [2024-11-28 11:42:09.476745] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:39.524 [2024-11-28 11:42:09.486776] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:39.524 [2024-11-28 11:42:09.486826] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:39.524 [2024-11-28 11:42:09.502199] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:39.524 [2024-11-28 11:42:09.502248] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:39.524 [2024-11-28 11:42:09.519551] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:39.524 [2024-11-28 11:42:09.519600] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:39.524 [2024-11-28 11:42:09.535738] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:39.524 [2024-11-28 11:42:09.535788] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:39.524 [2024-11-28 11:42:09.553145] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:39.524 [2024-11-28 11:42:09.553195] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:39.524 [2024-11-28 11:42:09.569315] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:39.524 [2024-11-28 11:42:09.569393] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:39.524 [2024-11-28 11:42:09.586015] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:39.524 [2024-11-28 11:42:09.586097] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:39.524 [2024-11-28 
11:42:09.604131] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:39.524 [2024-11-28 11:42:09.604164] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:39.524 [2024-11-28 11:42:09.618992] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:39.524 [2024-11-28 11:42:09.619051] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:39.524 [2024-11-28 11:42:09.635103] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:39.524 [2024-11-28 11:42:09.635154] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:39.783 [2024-11-28 11:42:09.652184] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:39.783 [2024-11-28 11:42:09.652234] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:39.783 [2024-11-28 11:42:09.668659] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:39.783 [2024-11-28 11:42:09.668708] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:39.783 [2024-11-28 11:42:09.685158] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:39.783 [2024-11-28 11:42:09.685198] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:39.783 [2024-11-28 11:42:09.701601] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:39.783 [2024-11-28 11:42:09.701651] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:39.783 [2024-11-28 11:42:09.719890] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:39.783 [2024-11-28 11:42:09.719940] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:39.783 [2024-11-28 11:42:09.733323] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:39.783 [2024-11-28 11:42:09.733377] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:39.783 [2024-11-28 11:42:09.748918] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:39.783 [2024-11-28 11:42:09.748968] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:39.783 [2024-11-28 11:42:09.757920] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:39.783 [2024-11-28 11:42:09.757969] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:39.783 [2024-11-28 11:42:09.774487] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:39.783 [2024-11-28 11:42:09.774523] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:39.783 [2024-11-28 11:42:09.792054] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:39.783 [2024-11-28 11:42:09.792104] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:39.783 [2024-11-28 11:42:09.806711] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:39.783 [2024-11-28 11:42:09.806791] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:39.783 [2024-11-28 11:42:09.822862] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:39.783 [2024-11-28 11:42:09.822945] 
nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:39.783 [2024-11-28 11:42:09.839417] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:39.783 [2024-11-28 11:42:09.839454] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:39.783 [2024-11-28 11:42:09.857621] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:39.783 [2024-11-28 11:42:09.857659] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:39.783 [2024-11-28 11:42:09.873001] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:39.783 [2024-11-28 11:42:09.873052] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:39.783 [2024-11-28 11:42:09.891416] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:39.783 [2024-11-28 11:42:09.891454] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:39.783 [2024-11-28 11:42:09.907218] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:39.783 [2024-11-28 11:42:09.907282] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:40.041 [2024-11-28 11:42:09.923661] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:40.041 [2024-11-28 11:42:09.923699] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:40.041 [2024-11-28 11:42:09.942184] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:40.041 [2024-11-28 11:42:09.942236] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:40.041 [2024-11-28 11:42:09.957832] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:40.041 [2024-11-28 11:42:09.957882] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:40.041 [2024-11-28 11:42:09.975937] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:40.041 [2024-11-28 11:42:09.975986] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:40.041 [2024-11-28 11:42:09.990791] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:40.041 [2024-11-28 11:42:09.990842] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:40.041 [2024-11-28 11:42:10.007003] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:40.041 [2024-11-28 11:42:10.007070] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:40.041 [2024-11-28 11:42:10.023765] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:40.041 [2024-11-28 11:42:10.023815] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:40.041 [2024-11-28 11:42:10.042614] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:40.041 [2024-11-28 11:42:10.042651] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:40.041 [2024-11-28 11:42:10.057170] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:40.041 [2024-11-28 11:42:10.057219] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:40.042 [2024-11-28 11:42:10.067559] 
subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:40.042 [2024-11-28 11:42:10.067593] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:40.042 [2024-11-28 11:42:10.083346] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:40.042 [2024-11-28 11:42:10.083413] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:40.042 [2024-11-28 11:42:10.093560] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:40.042 [2024-11-28 11:42:10.093593] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:40.042 [2024-11-28 11:42:10.108243] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:40.042 [2024-11-28 11:42:10.108321] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:40.042 [2024-11-28 11:42:10.128165] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:40.042 [2024-11-28 11:42:10.128214] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:40.042 [2024-11-28 11:42:10.142349] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:40.042 [2024-11-28 11:42:10.142419] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:40.042 [2024-11-28 11:42:10.158323] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:40.042 [2024-11-28 11:42:10.158367] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:40.301 [2024-11-28 11:42:10.174067] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:40.301 [2024-11-28 11:42:10.174108] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:40.301 [2024-11-28 11:42:10.183710] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:40.301 [2024-11-28 11:42:10.183760] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:40.301 [2024-11-28 11:42:10.200406] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:40.301 [2024-11-28 11:42:10.200465] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:40.301 [2024-11-28 11:42:10.215956] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:40.301 [2024-11-28 11:42:10.216006] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:40.301 [2024-11-28 11:42:10.226068] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:40.301 [2024-11-28 11:42:10.226119] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:40.301 [2024-11-28 11:42:10.241380] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:40.301 [2024-11-28 11:42:10.241414] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:40.301 [2024-11-28 11:42:10.258001] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:40.301 [2024-11-28 11:42:10.258051] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:40.301 [2024-11-28 11:42:10.274813] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:40.301 [2024-11-28 11:42:10.274865] 
nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:40.301 [2024-11-28 11:42:10.292701] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:40.301 [2024-11-28 11:42:10.292765] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:40.301 [2024-11-28 11:42:10.308465] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:40.301 [2024-11-28 11:42:10.308514] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:40.301 [2024-11-28 11:42:10.325240] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:40.301 [2024-11-28 11:42:10.325320] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:40.301 [2024-11-28 11:42:10.342172] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:40.301 [2024-11-28 11:42:10.342224] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:40.301 [2024-11-28 11:42:10.357946] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:40.301 [2024-11-28 11:42:10.357983] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:40.301 [2024-11-28 11:42:10.374860] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:40.301 [2024-11-28 11:42:10.374912] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:40.301 [2024-11-28 11:42:10.389898] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:40.301 [2024-11-28 11:42:10.389951] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:40.301 [2024-11-28 11:42:10.406881] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:40.301 [2024-11-28 11:42:10.406948] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:40.301 [2024-11-28 11:42:10.421012] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:40.301 [2024-11-28 11:42:10.421064] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:40.560 [2024-11-28 11:42:10.436352] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:40.560 [2024-11-28 11:42:10.436400] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:40.560 10854.25 IOPS, 84.80 MiB/s [2024-11-28T11:42:10.686Z] [2024-11-28 11:42:10.453002] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:40.560 [2024-11-28 11:42:10.453055] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:40.560 [2024-11-28 11:42:10.469012] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:40.560 [2024-11-28 11:42:10.469063] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:40.560 [2024-11-28 11:42:10.488785] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:40.560 [2024-11-28 11:42:10.488835] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:40.560 [2024-11-28 11:42:10.503099] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:40.560 [2024-11-28 11:42:10.503165] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:40.560 [2024-11-28 
11:42:10.518804] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:40.560 [2024-11-28 11:42:10.518855] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:40.560 [2024-11-28 11:42:10.536509] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:40.560 [2024-11-28 11:42:10.536557] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:40.560 [2024-11-28 11:42:10.552264] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:40.560 [2024-11-28 11:42:10.552343] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:40.560 [2024-11-28 11:42:10.568953] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:40.560 [2024-11-28 11:42:10.569004] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:40.560 [2024-11-28 11:42:10.584749] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:40.560 [2024-11-28 11:42:10.584798] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:40.560 [2024-11-28 11:42:10.593969] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:40.560 [2024-11-28 11:42:10.594018] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:40.560 [2024-11-28 11:42:10.609522] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:40.560 [2024-11-28 11:42:10.609573] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:40.560 [2024-11-28 11:42:10.626603] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:40.560 [2024-11-28 11:42:10.626640] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:40.560 [2024-11-28 11:42:10.643285] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:40.560 [2024-11-28 11:42:10.643380] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:40.560 [2024-11-28 11:42:10.660817] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:40.560 [2024-11-28 11:42:10.660866] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:40.560 [2024-11-28 11:42:10.676253] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:40.560 [2024-11-28 11:42:10.676328] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:40.819 [2024-11-28 11:42:10.694214] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:40.819 [2024-11-28 11:42:10.694264] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:40.819 [2024-11-28 11:42:10.710830] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:40.819 [2024-11-28 11:42:10.710868] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:40.819 [2024-11-28 11:42:10.725813] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:40.819 [2024-11-28 11:42:10.725863] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:40.819 [2024-11-28 11:42:10.743345] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:40.819 [2024-11-28 11:42:10.743406] 
nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:40.819 [2024-11-28 11:42:10.758105] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:40.819 [2024-11-28 11:42:10.758154] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:40.819 [2024-11-28 11:42:10.767452] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:40.819 [2024-11-28 11:42:10.767500] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:40.819 [2024-11-28 11:42:10.784680] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:40.819 [2024-11-28 11:42:10.784731] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:40.819 [2024-11-28 11:42:10.800699] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:40.819 [2024-11-28 11:42:10.800749] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:40.819 [2024-11-28 11:42:10.810157] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:40.819 [2024-11-28 11:42:10.810208] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:40.819 [2024-11-28 11:42:10.826218] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:40.819 [2024-11-28 11:42:10.826270] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:40.819 [2024-11-28 11:42:10.842463] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:40.819 [2024-11-28 11:42:10.842500] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:40.819 [2024-11-28 11:42:10.852224] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:40.819 [2024-11-28 11:42:10.852262] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:40.819 [2024-11-28 11:42:10.867190] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:40.819 [2024-11-28 11:42:10.867239] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:40.819 [2024-11-28 11:42:10.884089] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:40.819 [2024-11-28 11:42:10.884139] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:40.819 [2024-11-28 11:42:10.900501] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:40.819 [2024-11-28 11:42:10.900563] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:40.819 [2024-11-28 11:42:10.916264] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:40.819 [2024-11-28 11:42:10.916322] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:40.819 [2024-11-28 11:42:10.925974] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:40.819 [2024-11-28 11:42:10.926023] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:40.819 [2024-11-28 11:42:10.942479] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:40.819 [2024-11-28 11:42:10.942515] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:41.078 [2024-11-28 11:42:10.957924] 
subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:41.078 [2024-11-28 11:42:10.957973] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:41.078 [2024-11-28 11:42:10.968283] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:41.078 [2024-11-28 11:42:10.968360] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:41.078 [2024-11-28 11:42:10.983126] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:41.078 [2024-11-28 11:42:10.983176] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:41.078 [2024-11-28 11:42:10.994573] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:41.078 [2024-11-28 11:42:10.994626] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:41.078 [2024-11-28 11:42:11.010131] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:41.078 [2024-11-28 11:42:11.010182] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:41.078 [2024-11-28 11:42:11.020213] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:41.078 [2024-11-28 11:42:11.020249] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:41.078 [2024-11-28 11:42:11.035792] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:41.078 [2024-11-28 11:42:11.035843] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:41.078 [2024-11-28 11:42:11.046435] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:41.078 [2024-11-28 11:42:11.046471] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:41.078 [2024-11-28 11:42:11.061696] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:41.078 [2024-11-28 11:42:11.061745] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:41.078 [2024-11-28 11:42:11.077225] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:41.078 [2024-11-28 11:42:11.077274] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:41.078 [2024-11-28 11:42:11.093058] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:41.078 [2024-11-28 11:42:11.093110] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:41.078 [2024-11-28 11:42:11.111473] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:41.078 [2024-11-28 11:42:11.111525] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:41.078 [2024-11-28 11:42:11.126433] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:41.078 [2024-11-28 11:42:11.126469] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:41.078 [2024-11-28 11:42:11.136087] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:41.078 [2024-11-28 11:42:11.136138] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:41.078 [2024-11-28 11:42:11.150927] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:41.078 [2024-11-28 11:42:11.150979] 
nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:41.078 [2024-11-28 11:42:11.166297] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:41.078 [2024-11-28 11:42:11.166347] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:41.078 [2024-11-28 11:42:11.176611] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:41.078 [2024-11-28 11:42:11.176660] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:41.078 [2024-11-28 11:42:11.192016] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:41.078 [2024-11-28 11:42:11.192065] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:41.337 [2024-11-28 11:42:11.207980] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:41.337 [2024-11-28 11:42:11.208031] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:41.337 [2024-11-28 11:42:11.226923] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:41.337 [2024-11-28 11:42:11.226975] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:41.337 [2024-11-28 11:42:11.242035] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:41.337 [2024-11-28 11:42:11.242085] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:41.337 [2024-11-28 11:42:11.251996] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:41.337 [2024-11-28 11:42:11.252045] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:41.337 [2024-11-28 11:42:11.266866] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:41.337 [2024-11-28 11:42:11.266918] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:41.337 [2024-11-28 11:42:11.282543] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:41.337 [2024-11-28 11:42:11.282580] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:41.337 [2024-11-28 11:42:11.299786] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:41.337 [2024-11-28 11:42:11.299839] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:41.337 [2024-11-28 11:42:11.315866] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:41.337 [2024-11-28 11:42:11.315918] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:41.337 [2024-11-28 11:42:11.334374] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:41.337 [2024-11-28 11:42:11.334421] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:41.337 [2024-11-28 11:42:11.350197] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:41.337 [2024-11-28 11:42:11.350238] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:41.337 [2024-11-28 11:42:11.366453] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:41.337 [2024-11-28 11:42:11.366496] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:41.337 [2024-11-28 11:42:11.383611] 
subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:41.337 [2024-11-28 11:42:11.383653] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:41.337 [2024-11-28 11:42:11.400281] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:41.337 [2024-11-28 11:42:11.400331] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:41.337 [2024-11-28 11:42:11.416508] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:41.337 [2024-11-28 11:42:11.416561] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:41.337 [2024-11-28 11:42:11.432879] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:41.337 [2024-11-28 11:42:11.432918] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:41.337 10924.40 IOPS, 85.35 MiB/s [2024-11-28T11:42:11.463Z] [2024-11-28 11:42:11.450261] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:41.337 [2024-11-28 11:42:11.450325] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:41.337 00:11:41.337 Latency(us) 00:11:41.337 [2024-11-28T11:42:11.463Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:11:41.337 Job: Nvme1n1 (Core Mask 0x1, workload: randrw, percentage: 50, depth: 128, IO size: 8192) 00:11:41.337 Nvme1n1 : 5.01 10934.99 85.43 0.00 0.00 11688.59 4289.63 27286.81 00:11:41.337 [2024-11-28T11:42:11.463Z] =================================================================================================================== 00:11:41.337 [2024-11-28T11:42:11.463Z] Total : 10934.99 85.43 0.00 0.00 11688.59 4289.63 27286.81 00:11:41.337 [2024-11-28 11:42:11.461519] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:41.337 [2024-11-28 11:42:11.461558] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:41.597 [2024-11-28 11:42:11.473534] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:41.597 [2024-11-28 11:42:11.473577] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:41.597 [2024-11-28 11:42:11.485546] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:41.597 [2024-11-28 11:42:11.485590] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:41.597 [2024-11-28 11:42:11.497557] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:41.597 [2024-11-28 11:42:11.497604] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:41.597 [2024-11-28 11:42:11.509549] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:41.597 [2024-11-28 11:42:11.509615] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:41.597 [2024-11-28 11:42:11.521574] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:41.597 [2024-11-28 11:42:11.521617] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:41.597 [2024-11-28 11:42:11.533559] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:41.597 [2024-11-28 11:42:11.533601] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:41.597 [2024-11-28 
11:42:11.545568] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:41.597 [2024-11-28 11:42:11.545610] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:41.597 [2024-11-28 11:42:11.557569] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:41.597 [2024-11-28 11:42:11.557614] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:41.597 [2024-11-28 11:42:11.569580] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:41.597 [2024-11-28 11:42:11.569625] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:41.597 [2024-11-28 11:42:11.581573] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:41.597 [2024-11-28 11:42:11.581620] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:41.598 [2024-11-28 11:42:11.593564] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:41.598 [2024-11-28 11:42:11.593601] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:41.598 [2024-11-28 11:42:11.605580] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:41.598 [2024-11-28 11:42:11.605624] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:41.598 [2024-11-28 11:42:11.617576] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:41.598 [2024-11-28 11:42:11.617615] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:41.598 [2024-11-28 11:42:11.629560] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:41.598 [2024-11-28 11:42:11.629593] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:41.598 [2024-11-28 11:42:11.641565] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:41.598 [2024-11-28 11:42:11.641596] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:41.598 /home/vagrant/spdk_repo/spdk/test/nvmf/target/zcopy.sh: line 42: kill: (79572) - No such process 00:11:41.598 11:42:11 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@49 -- # wait 79572 00:11:41.598 11:42:11 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@52 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:11:41.598 11:42:11 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:41.598 11:42:11 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:11:41.598 11:42:11 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:41.598 11:42:11 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@53 -- # rpc_cmd bdev_delay_create -b malloc0 -d delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000 00:11:41.598 11:42:11 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:41.598 11:42:11 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:11:41.598 delay0 00:11:41.598 11:42:11 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:41.598 11:42:11 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@54 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 delay0 -n 1 00:11:41.598 11:42:11 
nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:41.598 11:42:11 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:11:41.598 11:42:11 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:41.598 11:42:11 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@56 -- # /home/vagrant/spdk_repo/spdk/build/examples/abort -c 0x1 -t 5 -q 64 -w randrw -M 50 -l warning -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.3 trsvcid:4420 ns:1' 00:11:41.857 [2024-11-28 11:42:11.863420] nvme_fabric.c: 295:nvme_fabric_discover_probe: *WARNING*: Skipping unsupported current discovery service or discovery service referral 00:11:48.435 Initializing NVMe Controllers 00:11:48.435 Attached to NVMe over Fabrics controller at 10.0.0.3:4420: nqn.2016-06.io.spdk:cnode1 00:11:48.435 Associating TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:11:48.435 Initialization complete. Launching workers. 00:11:48.435 NS: TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 I/O completed: 320, failed: 815 00:11:48.435 CTRLR: TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:cnode1) abort submitted 1102, failed to submit 33 00:11:48.435 success 955, unsuccessful 147, failed 0 00:11:48.435 11:42:18 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@59 -- # trap - SIGINT SIGTERM EXIT 00:11:48.435 11:42:18 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@60 -- # nvmftestfini 00:11:48.435 11:42:18 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@516 -- # nvmfcleanup 00:11:48.435 11:42:18 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@121 -- # sync 00:11:48.435 11:42:18 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:11:48.435 11:42:18 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@124 -- # set +e 00:11:48.435 11:42:18 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@125 -- # for i in {1..20} 00:11:48.435 11:42:18 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:11:48.435 rmmod nvme_tcp 00:11:48.435 rmmod nvme_fabrics 00:11:48.435 rmmod nvme_keyring 00:11:48.435 11:42:18 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:11:48.435 11:42:18 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@128 -- # set -e 00:11:48.435 11:42:18 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@129 -- # return 0 00:11:48.435 11:42:18 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@517 -- # '[' -n 79429 ']' 00:11:48.435 11:42:18 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@518 -- # killprocess 79429 00:11:48.435 11:42:18 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@954 -- # '[' -z 79429 ']' 00:11:48.435 11:42:18 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@958 -- # kill -0 79429 00:11:48.435 11:42:18 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@959 -- # uname 00:11:48.435 11:42:18 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:11:48.435 11:42:18 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 79429 00:11:48.435 killing process with pid 79429 00:11:48.435 11:42:18 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:11:48.435 11:42:18 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@964 -- # '[' 
reactor_1 = sudo ']' 00:11:48.435 11:42:18 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@972 -- # echo 'killing process with pid 79429' 00:11:48.435 11:42:18 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@973 -- # kill 79429 00:11:48.435 11:42:18 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@978 -- # wait 79429 00:11:48.435 11:42:18 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:11:48.435 11:42:18 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:11:48.435 11:42:18 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:11:48.435 11:42:18 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@297 -- # iptr 00:11:48.435 11:42:18 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@791 -- # iptables-save 00:11:48.435 11:42:18 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:11:48.435 11:42:18 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@791 -- # iptables-restore 00:11:48.435 11:42:18 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@298 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:11:48.435 11:42:18 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@299 -- # nvmf_veth_fini 00:11:48.435 11:42:18 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@233 -- # ip link set nvmf_init_br nomaster 00:11:48.435 11:42:18 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@234 -- # ip link set nvmf_init_br2 nomaster 00:11:48.435 11:42:18 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@235 -- # ip link set nvmf_tgt_br nomaster 00:11:48.436 11:42:18 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@236 -- # ip link set nvmf_tgt_br2 nomaster 00:11:48.436 11:42:18 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@237 -- # ip link set nvmf_init_br down 00:11:48.436 11:42:18 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@238 -- # ip link set nvmf_init_br2 down 00:11:48.436 11:42:18 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@239 -- # ip link set nvmf_tgt_br down 00:11:48.436 11:42:18 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@240 -- # ip link set nvmf_tgt_br2 down 00:11:48.436 11:42:18 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@241 -- # ip link delete nvmf_br type bridge 00:11:48.436 11:42:18 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@242 -- # ip link delete nvmf_init_if 00:11:48.436 11:42:18 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@243 -- # ip link delete nvmf_init_if2 00:11:48.436 11:42:18 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@244 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:11:48.436 11:42:18 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@245 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:11:48.694 11:42:18 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@246 -- # remove_spdk_ns 00:11:48.694 11:42:18 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:11:48.694 11:42:18 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:11:48.694 11:42:18 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:11:48.694 11:42:18 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@300 -- # return 0 00:11:48.694 00:11:48.694 real 0m24.437s 00:11:48.694 user 0m39.682s 
00:11:48.694 sys 0m7.078s 00:11:48.694 11:42:18 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1130 -- # xtrace_disable 00:11:48.694 11:42:18 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:11:48.694 ************************************ 00:11:48.694 END TEST nvmf_zcopy 00:11:48.694 ************************************ 00:11:48.694 11:42:18 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@33 -- # run_test nvmf_nmic /home/vagrant/spdk_repo/spdk/test/nvmf/target/nmic.sh --transport=tcp 00:11:48.694 11:42:18 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:11:48.694 11:42:18 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1111 -- # xtrace_disable 00:11:48.694 11:42:18 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:11:48.694 ************************************ 00:11:48.694 START TEST nvmf_nmic 00:11:48.694 ************************************ 00:11:48.694 11:42:18 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/nmic.sh --transport=tcp 00:11:48.694 * Looking for test storage... 00:11:48.694 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:11:48.694 11:42:18 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:11:48.694 11:42:18 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1693 -- # lcov --version 00:11:48.694 11:42:18 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:11:48.694 11:42:18 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:11:48.694 11:42:18 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:11:48.952 11:42:18 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@333 -- # local ver1 ver1_l 00:11:48.952 11:42:18 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@334 -- # local ver2 ver2_l 00:11:48.952 11:42:18 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@336 -- # IFS=.-: 00:11:48.952 11:42:18 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@336 -- # read -ra ver1 00:11:48.953 11:42:18 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@337 -- # IFS=.-: 00:11:48.953 11:42:18 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@337 -- # read -ra ver2 00:11:48.953 11:42:18 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@338 -- # local 'op=<' 00:11:48.953 11:42:18 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@340 -- # ver1_l=2 00:11:48.953 11:42:18 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@341 -- # ver2_l=1 00:11:48.953 11:42:18 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:11:48.953 11:42:18 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@344 -- # case "$op" in 00:11:48.953 11:42:18 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@345 -- # : 1 00:11:48.953 11:42:18 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@364 -- # (( v = 0 )) 00:11:48.953 11:42:18 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:11:48.953 11:42:18 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@365 -- # decimal 1 00:11:48.953 11:42:18 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@353 -- # local d=1 00:11:48.953 11:42:18 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:11:48.953 11:42:18 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@355 -- # echo 1 00:11:48.953 11:42:18 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@365 -- # ver1[v]=1 00:11:48.953 11:42:18 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@366 -- # decimal 2 00:11:48.953 11:42:18 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@353 -- # local d=2 00:11:48.953 11:42:18 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:11:48.953 11:42:18 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@355 -- # echo 2 00:11:48.953 11:42:18 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@366 -- # ver2[v]=2 00:11:48.953 11:42:18 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:11:48.953 11:42:18 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:11:48.953 11:42:18 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@368 -- # return 0 00:11:48.953 11:42:18 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:11:48.953 11:42:18 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:11:48.953 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:48.953 --rc genhtml_branch_coverage=1 00:11:48.953 --rc genhtml_function_coverage=1 00:11:48.953 --rc genhtml_legend=1 00:11:48.953 --rc geninfo_all_blocks=1 00:11:48.953 --rc geninfo_unexecuted_blocks=1 00:11:48.953 00:11:48.953 ' 00:11:48.953 11:42:18 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:11:48.953 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:48.953 --rc genhtml_branch_coverage=1 00:11:48.953 --rc genhtml_function_coverage=1 00:11:48.953 --rc genhtml_legend=1 00:11:48.953 --rc geninfo_all_blocks=1 00:11:48.953 --rc geninfo_unexecuted_blocks=1 00:11:48.953 00:11:48.953 ' 00:11:48.953 11:42:18 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:11:48.953 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:48.953 --rc genhtml_branch_coverage=1 00:11:48.953 --rc genhtml_function_coverage=1 00:11:48.953 --rc genhtml_legend=1 00:11:48.953 --rc geninfo_all_blocks=1 00:11:48.953 --rc geninfo_unexecuted_blocks=1 00:11:48.953 00:11:48.953 ' 00:11:48.953 11:42:18 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:11:48.953 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:48.953 --rc genhtml_branch_coverage=1 00:11:48.953 --rc genhtml_function_coverage=1 00:11:48.953 --rc genhtml_legend=1 00:11:48.953 --rc geninfo_all_blocks=1 00:11:48.953 --rc geninfo_unexecuted_blocks=1 00:11:48.953 00:11:48.953 ' 00:11:48.953 11:42:18 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:11:48.953 11:42:18 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@7 -- # uname -s 00:11:48.953 11:42:18 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:11:48.953 11:42:18 
nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:11:48.953 11:42:18 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:11:48.953 11:42:18 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:11:48.953 11:42:18 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:11:48.953 11:42:18 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:11:48.953 11:42:18 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:11:48.953 11:42:18 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:11:48.953 11:42:18 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:11:48.953 11:42:18 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:11:48.953 11:42:18 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:f820f793-c892-4aa4-a8a4-5ed3fda41d6c 00:11:48.953 11:42:18 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@18 -- # NVME_HOSTID=f820f793-c892-4aa4-a8a4-5ed3fda41d6c 00:11:48.953 11:42:18 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:11:48.953 11:42:18 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:11:48.953 11:42:18 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:11:48.953 11:42:18 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:11:48.953 11:42:18 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:11:48.953 11:42:18 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@15 -- # shopt -s extglob 00:11:48.953 11:42:18 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:11:48.953 11:42:18 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:11:48.953 11:42:18 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:11:48.953 11:42:18 nvmf_tcp.nvmf_target_core.nvmf_nmic -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:48.953 11:42:18 nvmf_tcp.nvmf_target_core.nvmf_nmic -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:48.953 11:42:18 nvmf_tcp.nvmf_target_core.nvmf_nmic -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:48.953 11:42:18 nvmf_tcp.nvmf_target_core.nvmf_nmic -- paths/export.sh@5 -- # export PATH 00:11:48.953 11:42:18 nvmf_tcp.nvmf_target_core.nvmf_nmic -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:48.953 11:42:18 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@51 -- # : 0 00:11:48.953 11:42:18 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:11:48.953 11:42:18 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:11:48.953 11:42:18 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:11:48.953 11:42:18 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:11:48.953 11:42:18 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:11:48.953 11:42:18 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:11:48.953 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:11:48.953 11:42:18 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:11:48.953 11:42:18 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:11:48.953 11:42:18 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@55 -- # have_pci_nics=0 00:11:48.953 11:42:18 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@11 -- # MALLOC_BDEV_SIZE=64 00:11:48.953 11:42:18 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:11:48.953 11:42:18 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@14 -- # nvmftestinit 00:11:48.953 11:42:18 
nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:11:48.953 11:42:18 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:11:48.953 11:42:18 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@476 -- # prepare_net_devs 00:11:48.954 11:42:18 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@438 -- # local -g is_hw=no 00:11:48.954 11:42:18 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@440 -- # remove_spdk_ns 00:11:48.954 11:42:18 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:11:48.954 11:42:18 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:11:48.954 11:42:18 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:11:48.954 11:42:18 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@442 -- # [[ virt != virt ]] 00:11:48.954 11:42:18 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@444 -- # [[ no == yes ]] 00:11:48.954 11:42:18 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@451 -- # [[ virt == phy ]] 00:11:48.954 11:42:18 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@454 -- # [[ virt == phy-fallback ]] 00:11:48.954 11:42:18 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@459 -- # [[ tcp == tcp ]] 00:11:48.954 11:42:18 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@460 -- # nvmf_veth_init 00:11:48.954 11:42:18 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@145 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:11:48.954 11:42:18 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@146 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.2 00:11:48.954 11:42:18 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@147 -- # NVMF_FIRST_TARGET_IP=10.0.0.3 00:11:48.954 11:42:18 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@148 -- # NVMF_SECOND_TARGET_IP=10.0.0.4 00:11:48.954 11:42:18 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@149 -- # NVMF_INITIATOR_IP=10.0.0.1 00:11:48.954 11:42:18 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@150 -- # NVMF_BRIDGE=nvmf_br 00:11:48.954 11:42:18 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@151 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:11:48.954 11:42:18 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@152 -- # NVMF_INITIATOR_INTERFACE2=nvmf_init_if2 00:11:48.954 11:42:18 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@153 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:11:48.954 11:42:18 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@154 -- # NVMF_INITIATOR_BRIDGE2=nvmf_init_br2 00:11:48.954 11:42:18 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@155 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:11:48.954 11:42:18 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@156 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:11:48.954 11:42:18 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@157 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:11:48.954 11:42:18 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@158 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:11:48.954 11:42:18 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@159 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:11:48.954 11:42:18 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@160 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:11:48.954 11:42:18 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@162 -- # ip link set nvmf_init_br nomaster 00:11:48.954 Cannot 
find device "nvmf_init_br" 00:11:48.954 11:42:18 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@162 -- # true 00:11:48.954 11:42:18 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@163 -- # ip link set nvmf_init_br2 nomaster 00:11:48.954 Cannot find device "nvmf_init_br2" 00:11:48.954 11:42:18 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@163 -- # true 00:11:48.954 11:42:18 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@164 -- # ip link set nvmf_tgt_br nomaster 00:11:48.954 Cannot find device "nvmf_tgt_br" 00:11:48.954 11:42:18 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@164 -- # true 00:11:48.954 11:42:18 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@165 -- # ip link set nvmf_tgt_br2 nomaster 00:11:48.954 Cannot find device "nvmf_tgt_br2" 00:11:48.954 11:42:18 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@165 -- # true 00:11:48.954 11:42:18 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@166 -- # ip link set nvmf_init_br down 00:11:48.954 Cannot find device "nvmf_init_br" 00:11:48.954 11:42:18 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@166 -- # true 00:11:48.954 11:42:18 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@167 -- # ip link set nvmf_init_br2 down 00:11:48.954 Cannot find device "nvmf_init_br2" 00:11:48.954 11:42:18 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@167 -- # true 00:11:48.954 11:42:18 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@168 -- # ip link set nvmf_tgt_br down 00:11:48.954 Cannot find device "nvmf_tgt_br" 00:11:48.954 11:42:18 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@168 -- # true 00:11:48.954 11:42:18 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@169 -- # ip link set nvmf_tgt_br2 down 00:11:48.954 Cannot find device "nvmf_tgt_br2" 00:11:48.954 11:42:18 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@169 -- # true 00:11:48.954 11:42:18 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@170 -- # ip link delete nvmf_br type bridge 00:11:48.954 Cannot find device "nvmf_br" 00:11:48.954 11:42:18 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@170 -- # true 00:11:48.954 11:42:18 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@171 -- # ip link delete nvmf_init_if 00:11:48.954 Cannot find device "nvmf_init_if" 00:11:48.954 11:42:18 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@171 -- # true 00:11:48.954 11:42:18 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@172 -- # ip link delete nvmf_init_if2 00:11:48.954 Cannot find device "nvmf_init_if2" 00:11:48.954 11:42:18 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@172 -- # true 00:11:48.954 11:42:18 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@173 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:11:48.954 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:11:48.954 11:42:18 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@173 -- # true 00:11:48.954 11:42:18 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@174 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:11:48.954 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:11:48.954 11:42:18 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@174 -- # true 00:11:48.954 11:42:18 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@177 -- # ip netns add nvmf_tgt_ns_spdk 00:11:48.954 11:42:18 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@180 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 
00:11:48.954 11:42:18 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@181 -- # ip link add nvmf_init_if2 type veth peer name nvmf_init_br2 00:11:48.954 11:42:18 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@182 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:11:48.954 11:42:18 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@183 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:11:48.954 11:42:19 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:11:48.954 11:42:19 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@187 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:11:48.954 11:42:19 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@190 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:11:48.954 11:42:19 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@191 -- # ip addr add 10.0.0.2/24 dev nvmf_init_if2 00:11:48.954 11:42:19 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@192 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if 00:11:48.954 11:42:19 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@193 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2 00:11:48.954 11:42:19 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@196 -- # ip link set nvmf_init_if up 00:11:48.954 11:42:19 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@197 -- # ip link set nvmf_init_if2 up 00:11:48.954 11:42:19 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@198 -- # ip link set nvmf_init_br up 00:11:48.954 11:42:19 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@199 -- # ip link set nvmf_init_br2 up 00:11:48.954 11:42:19 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@200 -- # ip link set nvmf_tgt_br up 00:11:48.954 11:42:19 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@201 -- # ip link set nvmf_tgt_br2 up 00:11:49.213 11:42:19 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@202 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:11:49.213 11:42:19 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@203 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:11:49.213 11:42:19 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@204 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:11:49.213 11:42:19 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@207 -- # ip link add nvmf_br type bridge 00:11:49.213 11:42:19 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@208 -- # ip link set nvmf_br up 00:11:49.213 11:42:19 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@211 -- # ip link set nvmf_init_br master nvmf_br 00:11:49.213 11:42:19 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@212 -- # ip link set nvmf_init_br2 master nvmf_br 00:11:49.213 11:42:19 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@213 -- # ip link set nvmf_tgt_br master nvmf_br 00:11:49.213 11:42:19 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@214 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:11:49.213 11:42:19 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@217 -- # ipts -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:11:49.213 11:42:19 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT' 00:11:49.213 11:42:19 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@218 
-- # ipts -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT 00:11:49.213 11:42:19 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT' 00:11:49.213 11:42:19 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@219 -- # ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:11:49.213 11:42:19 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@790 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT' 00:11:49.213 11:42:19 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@222 -- # ping -c 1 10.0.0.3 00:11:49.213 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:11:49.213 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.103 ms 00:11:49.213 00:11:49.213 --- 10.0.0.3 ping statistics --- 00:11:49.213 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:11:49.213 rtt min/avg/max/mdev = 0.103/0.103/0.103/0.000 ms 00:11:49.213 11:42:19 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@223 -- # ping -c 1 10.0.0.4 00:11:49.213 PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data. 00:11:49.213 64 bytes from 10.0.0.4: icmp_seq=1 ttl=64 time=0.054 ms 00:11:49.213 00:11:49.213 --- 10.0.0.4 ping statistics --- 00:11:49.213 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:11:49.213 rtt min/avg/max/mdev = 0.054/0.054/0.054/0.000 ms 00:11:49.213 11:42:19 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@224 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:11:49.213 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:11:49.213 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.043 ms 00:11:49.213 00:11:49.213 --- 10.0.0.1 ping statistics --- 00:11:49.213 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:11:49.213 rtt min/avg/max/mdev = 0.043/0.043/0.043/0.000 ms 00:11:49.213 11:42:19 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@225 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.2 00:11:49.213 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:11:49.213 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.065 ms 00:11:49.213 00:11:49.213 --- 10.0.0.2 ping statistics --- 00:11:49.213 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:11:49.213 rtt min/avg/max/mdev = 0.065/0.065/0.065/0.000 ms 00:11:49.213 11:42:19 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@227 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:11:49.213 11:42:19 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@461 -- # return 0 00:11:49.213 11:42:19 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:11:49.213 11:42:19 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:11:49.213 11:42:19 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:11:49.213 11:42:19 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:11:49.213 11:42:19 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:11:49.213 11:42:19 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:11:49.213 11:42:19 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:11:49.213 11:42:19 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@15 -- # nvmfappstart -m 0xF 00:11:49.213 11:42:19 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:11:49.213 11:42:19 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@726 -- # xtrace_disable 00:11:49.213 11:42:19 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:11:49.213 11:42:19 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@509 -- # nvmfpid=79957 00:11:49.213 11:42:19 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@508 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:11:49.213 11:42:19 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@510 -- # waitforlisten 79957 00:11:49.213 11:42:19 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@835 -- # '[' -z 79957 ']' 00:11:49.213 11:42:19 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:11:49.213 11:42:19 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@840 -- # local max_retries=100 00:11:49.213 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:11:49.213 11:42:19 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:11:49.213 11:42:19 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@844 -- # xtrace_disable 00:11:49.213 11:42:19 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:11:49.213 [2024-11-28 11:42:19.301449] Starting SPDK v25.01-pre git sha1 35cd3e84d / DPDK 24.11.0-rc4 initialization... 00:11:49.213 [2024-11-28 11:42:19.301589] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:11:49.472 [2024-11-28 11:42:19.434408] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.11.0-rc4 is used. There is no support for it in SPDK. Enabled only for validation. 
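The nvmf_veth_init sequence traced above builds a small veth/bridge topology with the target side isolated in the nvmf_tgt_ns_spdk namespace, after which nvmfappstart launches nvmf_tgt inside that namespace. A condensed sketch reconstructed from the traced commands (only the first initiator/target interface pair is shown; the harness creates a second pair for 10.0.0.2/10.0.0.4, paths are relative to the SPDK repo, and error handling is omitted):

  ip netns add nvmf_tgt_ns_spdk
  ip link add nvmf_init_if type veth peer name nvmf_init_br          # initiator-side veth pair
  ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br            # target-side veth pair
  ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk                     # move the target end into the namespace
  ip addr add 10.0.0.1/24 dev nvmf_init_if                           # initiator address
  ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if   # target address
  ip link set nvmf_init_if up; ip link set nvmf_init_br up; ip link set nvmf_tgt_br up
  ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
  ip link add nvmf_br type bridge; ip link set nvmf_br up
  ip link set nvmf_init_br master nvmf_br                            # bridge the two sides together
  ip link set nvmf_tgt_br master nvmf_br
  iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT  # open the NVMe/TCP port
  ping -c 1 10.0.0.3                                                 # initiator -> target reachability check
  ip netns exec nvmf_tgt_ns_spdk ./build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF &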
00:11:49.472 [2024-11-28 11:42:19.464518] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:11:49.472 [2024-11-28 11:42:19.523023] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:11:49.472 [2024-11-28 11:42:19.523099] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:11:49.472 [2024-11-28 11:42:19.523114] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:11:49.472 [2024-11-28 11:42:19.523125] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:11:49.472 [2024-11-28 11:42:19.523134] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:11:49.472 [2024-11-28 11:42:19.524413] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:11:49.472 [2024-11-28 11:42:19.524502] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:11:49.472 [2024-11-28 11:42:19.524575] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:11:49.472 [2024-11-28 11:42:19.524583] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:11:49.472 [2024-11-28 11:42:19.584389] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:11:49.730 11:42:19 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:11:49.730 11:42:19 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@868 -- # return 0 00:11:49.730 11:42:19 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:11:49.730 11:42:19 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@732 -- # xtrace_disable 00:11:49.730 11:42:19 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:11:49.730 11:42:19 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:11:49.731 11:42:19 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@17 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:11:49.731 11:42:19 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:49.731 11:42:19 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:11:49.731 [2024-11-28 11:42:19.693226] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:11:49.731 11:42:19 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:49.731 11:42:19 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@20 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:11:49.731 11:42:19 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:49.731 11:42:19 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:11:49.731 Malloc0 00:11:49.731 11:42:19 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:49.731 11:42:19 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@21 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:11:49.731 11:42:19 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:49.731 11:42:19 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:11:49.731 11:42:19 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@591 -- # [[ 0 == 
0 ]] 00:11:49.731 11:42:19 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@22 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:11:49.731 11:42:19 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:49.731 11:42:19 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:11:49.731 11:42:19 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:49.731 11:42:19 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@23 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 00:11:49.731 11:42:19 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:49.731 11:42:19 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:11:49.731 [2024-11-28 11:42:19.763384] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:11:49.731 11:42:19 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:49.731 11:42:19 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@25 -- # echo 'test case1: single bdev can'\''t be used in multiple subsystems' 00:11:49.731 test case1: single bdev can't be used in multiple subsystems 00:11:49.731 11:42:19 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@26 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 -a -s SPDK2 00:11:49.731 11:42:19 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:49.731 11:42:19 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:11:49.731 11:42:19 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:49.731 11:42:19 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.3 -s 4420 00:11:49.731 11:42:19 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:49.731 11:42:19 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:11:49.731 11:42:19 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:49.731 11:42:19 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@28 -- # nmic_status=0 00:11:49.731 11:42:19 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@29 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 Malloc0 00:11:49.731 11:42:19 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:49.731 11:42:19 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:11:49.731 [2024-11-28 11:42:19.791176] bdev.c:8515:bdev_open: *ERROR*: bdev Malloc0 already claimed: type exclusive_write by module NVMe-oF Target 00:11:49.731 [2024-11-28 11:42:19.791227] subsystem.c:2156:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode2: bdev Malloc0 cannot be opened, error=-1 00:11:49.731 [2024-11-28 11:42:19.791242] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:49.731 request: 00:11:49.731 { 00:11:49.731 "nqn": "nqn.2016-06.io.spdk:cnode2", 00:11:49.731 "namespace": { 00:11:49.731 "bdev_name": "Malloc0", 00:11:49.731 "no_auto_visible": false, 00:11:49.731 "hide_metadata": false 00:11:49.731 }, 00:11:49.731 "method": "nvmf_subsystem_add_ns", 00:11:49.731 "req_id": 1 00:11:49.731 } 00:11:49.731 Got 
JSON-RPC error response 00:11:49.731 response: 00:11:49.731 { 00:11:49.731 "code": -32602, 00:11:49.731 "message": "Invalid parameters" 00:11:49.731 } 00:11:49.731 11:42:19 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:11:49.731 11:42:19 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@29 -- # nmic_status=1 00:11:49.731 11:42:19 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@31 -- # '[' 1 -eq 0 ']' 00:11:49.731 Adding namespace failed - expected result. 00:11:49.731 11:42:19 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@36 -- # echo ' Adding namespace failed - expected result.' 00:11:49.731 test case2: host connect to nvmf target in multiple paths 00:11:49.731 11:42:19 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@39 -- # echo 'test case2: host connect to nvmf target in multiple paths' 00:11:49.731 11:42:19 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@40 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4421 00:11:49.731 11:42:19 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:49.731 11:42:19 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:11:49.731 [2024-11-28 11:42:19.803336] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4421 *** 00:11:49.731 11:42:19 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:49.731 11:42:19 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@41 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:f820f793-c892-4aa4-a8a4-5ed3fda41d6c --hostid=f820f793-c892-4aa4-a8a4-5ed3fda41d6c -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.3 -s 4420 00:11:49.990 11:42:19 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@42 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:f820f793-c892-4aa4-a8a4-5ed3fda41d6c --hostid=f820f793-c892-4aa4-a8a4-5ed3fda41d6c -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.3 -s 4421 00:11:49.990 11:42:20 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@44 -- # waitforserial SPDKISFASTANDAWESOME 00:11:49.990 11:42:20 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1202 -- # local i=0 00:11:49.990 11:42:20 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0 00:11:49.990 11:42:20 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1204 -- # [[ -n '' ]] 00:11:49.990 11:42:20 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1209 -- # sleep 2 00:11:52.562 11:42:22 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 00:11:52.562 11:42:22 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL 00:11:52.562 11:42:22 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1211 -- # grep -c SPDKISFASTANDAWESOME 00:11:52.562 11:42:22 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1211 -- # nvme_devices=1 00:11:52.562 11:42:22 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter )) 00:11:52.562 11:42:22 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1212 -- # return 0 00:11:52.562 11:42:22 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t write -r 1 -v 00:11:52.562 [global] 
00:11:52.562 thread=1 00:11:52.562 invalidate=1 00:11:52.562 rw=write 00:11:52.562 time_based=1 00:11:52.562 runtime=1 00:11:52.562 ioengine=libaio 00:11:52.562 direct=1 00:11:52.562 bs=4096 00:11:52.562 iodepth=1 00:11:52.562 norandommap=0 00:11:52.562 numjobs=1 00:11:52.562 00:11:52.562 verify_dump=1 00:11:52.562 verify_backlog=512 00:11:52.562 verify_state_save=0 00:11:52.562 do_verify=1 00:11:52.562 verify=crc32c-intel 00:11:52.562 [job0] 00:11:52.562 filename=/dev/nvme0n1 00:11:52.562 Could not set queue depth (nvme0n1) 00:11:52.562 job0: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:11:52.562 fio-3.35 00:11:52.562 Starting 1 thread 00:11:53.499 00:11:53.499 job0: (groupid=0, jobs=1): err= 0: pid=80036: Thu Nov 28 11:42:23 2024 00:11:53.499 read: IOPS=2557, BW=9.99MiB/s (10.5MB/s)(10.0MiB/1001msec) 00:11:53.499 slat (nsec): min=11519, max=58863, avg=16497.95, stdev=5296.93 00:11:53.499 clat (usec): min=150, max=780, avg=210.90, stdev=27.59 00:11:53.499 lat (usec): min=165, max=794, avg=227.39, stdev=28.22 00:11:53.499 clat percentiles (usec): 00:11:53.499 | 1.00th=[ 161], 5.00th=[ 172], 10.00th=[ 180], 20.00th=[ 190], 00:11:53.499 | 30.00th=[ 198], 40.00th=[ 204], 50.00th=[ 210], 60.00th=[ 217], 00:11:53.499 | 70.00th=[ 223], 80.00th=[ 231], 90.00th=[ 243], 95.00th=[ 253], 00:11:53.499 | 99.00th=[ 273], 99.50th=[ 289], 99.90th=[ 367], 99.95th=[ 441], 00:11:53.499 | 99.99th=[ 783] 00:11:53.499 write: IOPS=2701, BW=10.6MiB/s (11.1MB/s)(10.6MiB/1001msec); 0 zone resets 00:11:53.499 slat (usec): min=16, max=147, avg=22.38, stdev= 6.32 00:11:53.499 clat (usec): min=87, max=851, avg=128.69, stdev=31.55 00:11:53.499 lat (usec): min=106, max=872, avg=151.07, stdev=33.18 00:11:53.499 clat percentiles (usec): 00:11:53.499 | 1.00th=[ 97], 5.00th=[ 102], 10.00th=[ 105], 20.00th=[ 112], 00:11:53.499 | 30.00th=[ 117], 40.00th=[ 122], 50.00th=[ 126], 60.00th=[ 131], 00:11:53.499 | 70.00th=[ 137], 80.00th=[ 141], 90.00th=[ 151], 95.00th=[ 157], 00:11:53.499 | 99.00th=[ 215], 99.50th=[ 318], 99.90th=[ 429], 99.95th=[ 783], 00:11:53.499 | 99.99th=[ 848] 00:11:53.499 bw ( KiB/s): min=12288, max=12288, per=100.00%, avg=12288.00, stdev= 0.00, samples=1 00:11:53.499 iops : min= 3072, max= 3072, avg=3072.00, stdev= 0.00, samples=1 00:11:53.499 lat (usec) : 100=1.71%, 250=94.95%, 500=3.29%, 1000=0.06% 00:11:53.499 cpu : usr=2.40%, sys=7.60%, ctx=5264, majf=0, minf=5 00:11:53.499 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:11:53.499 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:11:53.499 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:11:53.499 issued rwts: total=2560,2704,0,0 short=0,0,0,0 dropped=0,0,0,0 00:11:53.499 latency : target=0, window=0, percentile=100.00%, depth=1 00:11:53.499 00:11:53.499 Run status group 0 (all jobs): 00:11:53.499 READ: bw=9.99MiB/s (10.5MB/s), 9.99MiB/s-9.99MiB/s (10.5MB/s-10.5MB/s), io=10.0MiB (10.5MB), run=1001-1001msec 00:11:53.499 WRITE: bw=10.6MiB/s (11.1MB/s), 10.6MiB/s-10.6MiB/s (11.1MB/s-11.1MB/s), io=10.6MiB (11.1MB), run=1001-1001msec 00:11:53.499 00:11:53.499 Disk stats (read/write): 00:11:53.499 nvme0n1: ios=2274/2560, merge=0/0, ticks=503/349, in_queue=852, util=91.48% 00:11:53.499 11:42:23 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@48 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:11:53.499 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 2 controller(s) 00:11:53.499 11:42:23 
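Stripped of the rpc_cmd/xtrace plumbing, the nmic test body traced above amounts to roughly the following sequence (a sketch reconstructed from the log; rpc.py is assumed to talk to its default /var/tmp/spdk.sock, and paths are relative to the SPDK repo). Case 1 expects the second nvmf_subsystem_add_ns to fail, since Malloc0 is already claimed exclusively by cnode1; case 2 adds a second listener so the host connects to the same subsystem over two paths before running the short verified fio write:

  rpc=./scripts/rpc.py
  $rpc nvmf_create_transport -t tcp -o -u 8192
  $rpc bdev_malloc_create 64 512 -b Malloc0
  $rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME
  $rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
  $rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420
  # case 1: the same bdev cannot be added to a second subsystem
  $rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 -a -s SPDK2
  $rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.3 -s 4420
  $rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 Malloc0 && exit 1   # expected: "Invalid parameters" (-32602)
  # case 2: one subsystem, two listeners, two host connections
  $rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4421
  nvme connect --hostnqn="$NVME_HOSTNQN" --hostid="$NVME_HOSTID" -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.3 -s 4420
  nvme connect --hostnqn="$NVME_HOSTNQN" --hostid="$NVME_HOSTID" -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.3 -s 4421
  lsblk -l -o NAME,SERIAL | grep -c SPDKISFASTANDAWESOME             # wait until the namespace shows up
  ./scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t write -r 1 -v        # 4k, qd=1, 1s time-based write with crc32c verify
  nvme disconnect -n nqn.2016-06.io.spdk:cnode1                      # tears down both controllers at once

The add_ns failure in case 1 is the point of the test: a malloc bdev is claimed with an exclusive_write claim by the first subsystem, so the second subsystem's open is rejected and the RPC returns the JSON-RPC error shown above.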
nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@49 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:11:53.499 11:42:23 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1223 -- # local i=0 00:11:53.499 11:42:23 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1224 -- # lsblk -o NAME,SERIAL 00:11:53.499 11:42:23 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1224 -- # grep -q -w SPDKISFASTANDAWESOME 00:11:53.499 11:42:23 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1231 -- # lsblk -l -o NAME,SERIAL 00:11:53.499 11:42:23 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1231 -- # grep -q -w SPDKISFASTANDAWESOME 00:11:53.499 11:42:23 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1235 -- # return 0 00:11:53.499 11:42:23 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@51 -- # trap - SIGINT SIGTERM EXIT 00:11:53.499 11:42:23 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@53 -- # nvmftestfini 00:11:53.499 11:42:23 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@516 -- # nvmfcleanup 00:11:53.499 11:42:23 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@121 -- # sync 00:11:53.499 11:42:23 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:11:53.499 11:42:23 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@124 -- # set +e 00:11:53.499 11:42:23 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@125 -- # for i in {1..20} 00:11:53.499 11:42:23 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:11:53.499 rmmod nvme_tcp 00:11:53.499 rmmod nvme_fabrics 00:11:53.499 rmmod nvme_keyring 00:11:53.499 11:42:23 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:11:53.499 11:42:23 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@128 -- # set -e 00:11:53.499 11:42:23 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@129 -- # return 0 00:11:53.499 11:42:23 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@517 -- # '[' -n 79957 ']' 00:11:53.499 11:42:23 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@518 -- # killprocess 79957 00:11:53.499 11:42:23 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@954 -- # '[' -z 79957 ']' 00:11:53.500 11:42:23 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@958 -- # kill -0 79957 00:11:53.500 11:42:23 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@959 -- # uname 00:11:53.758 11:42:23 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:11:53.758 11:42:23 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 79957 00:11:53.758 11:42:23 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:11:53.758 11:42:23 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:11:53.758 killing process with pid 79957 00:11:53.758 11:42:23 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@972 -- # echo 'killing process with pid 79957' 00:11:53.758 11:42:23 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@973 -- # kill 79957 00:11:53.758 11:42:23 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@978 -- # wait 79957 00:11:53.758 11:42:23 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:11:53.758 11:42:23 
nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:11:53.758 11:42:23 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:11:53.758 11:42:23 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@297 -- # iptr 00:11:53.758 11:42:23 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@791 -- # iptables-save 00:11:53.758 11:42:23 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:11:53.758 11:42:23 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@791 -- # iptables-restore 00:11:54.017 11:42:23 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@298 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:11:54.018 11:42:23 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@299 -- # nvmf_veth_fini 00:11:54.018 11:42:23 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@233 -- # ip link set nvmf_init_br nomaster 00:11:54.018 11:42:23 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@234 -- # ip link set nvmf_init_br2 nomaster 00:11:54.018 11:42:23 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@235 -- # ip link set nvmf_tgt_br nomaster 00:11:54.018 11:42:23 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@236 -- # ip link set nvmf_tgt_br2 nomaster 00:11:54.018 11:42:23 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@237 -- # ip link set nvmf_init_br down 00:11:54.018 11:42:23 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@238 -- # ip link set nvmf_init_br2 down 00:11:54.018 11:42:23 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@239 -- # ip link set nvmf_tgt_br down 00:11:54.018 11:42:23 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@240 -- # ip link set nvmf_tgt_br2 down 00:11:54.018 11:42:23 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@241 -- # ip link delete nvmf_br type bridge 00:11:54.018 11:42:24 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@242 -- # ip link delete nvmf_init_if 00:11:54.018 11:42:24 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@243 -- # ip link delete nvmf_init_if2 00:11:54.018 11:42:24 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@244 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:11:54.018 11:42:24 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@245 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:11:54.018 11:42:24 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@246 -- # remove_spdk_ns 00:11:54.018 11:42:24 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:11:54.018 11:42:24 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:11:54.018 11:42:24 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:11:54.278 11:42:24 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@300 -- # return 0 00:11:54.278 00:11:54.278 real 0m5.508s 00:11:54.278 user 0m16.439s 00:11:54.278 sys 0m2.000s 00:11:54.278 11:42:24 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1130 -- # xtrace_disable 00:11:54.278 ************************************ 00:11:54.278 END TEST nvmf_nmic 00:11:54.278 11:42:24 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:11:54.278 ************************************ 00:11:54.278 11:42:24 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@34 -- # run_test nvmf_fio_target /home/vagrant/spdk_repo/spdk/test/nvmf/target/fio.sh 
--transport=tcp 00:11:54.278 11:42:24 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:11:54.278 11:42:24 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1111 -- # xtrace_disable 00:11:54.278 11:42:24 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:11:54.278 ************************************ 00:11:54.278 START TEST nvmf_fio_target 00:11:54.278 ************************************ 00:11:54.278 11:42:24 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/fio.sh --transport=tcp 00:11:54.278 * Looking for test storage... 00:11:54.278 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:11:54.278 11:42:24 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:11:54.278 11:42:24 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1693 -- # lcov --version 00:11:54.278 11:42:24 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:11:54.278 11:42:24 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:11:54.278 11:42:24 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:11:54.278 11:42:24 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@333 -- # local ver1 ver1_l 00:11:54.278 11:42:24 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@334 -- # local ver2 ver2_l 00:11:54.278 11:42:24 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@336 -- # IFS=.-: 00:11:54.278 11:42:24 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@336 -- # read -ra ver1 00:11:54.278 11:42:24 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@337 -- # IFS=.-: 00:11:54.278 11:42:24 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@337 -- # read -ra ver2 00:11:54.278 11:42:24 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@338 -- # local 'op=<' 00:11:54.278 11:42:24 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@340 -- # ver1_l=2 00:11:54.278 11:42:24 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@341 -- # ver2_l=1 00:11:54.278 11:42:24 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:11:54.278 11:42:24 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@344 -- # case "$op" in 00:11:54.278 11:42:24 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@345 -- # : 1 00:11:54.278 11:42:24 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@364 -- # (( v = 0 )) 00:11:54.278 11:42:24 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:11:54.278 11:42:24 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@365 -- # decimal 1 00:11:54.278 11:42:24 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@353 -- # local d=1 00:11:54.278 11:42:24 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:11:54.278 11:42:24 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@355 -- # echo 1 00:11:54.278 11:42:24 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@365 -- # ver1[v]=1 00:11:54.278 11:42:24 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@366 -- # decimal 2 00:11:54.278 11:42:24 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@353 -- # local d=2 00:11:54.278 11:42:24 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:11:54.278 11:42:24 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@355 -- # echo 2 00:11:54.278 11:42:24 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@366 -- # ver2[v]=2 00:11:54.278 11:42:24 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:11:54.278 11:42:24 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:11:54.278 11:42:24 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@368 -- # return 0 00:11:54.278 11:42:24 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:11:54.278 11:42:24 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:11:54.278 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:54.278 --rc genhtml_branch_coverage=1 00:11:54.278 --rc genhtml_function_coverage=1 00:11:54.278 --rc genhtml_legend=1 00:11:54.278 --rc geninfo_all_blocks=1 00:11:54.278 --rc geninfo_unexecuted_blocks=1 00:11:54.278 00:11:54.278 ' 00:11:54.278 11:42:24 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:11:54.278 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:54.278 --rc genhtml_branch_coverage=1 00:11:54.278 --rc genhtml_function_coverage=1 00:11:54.278 --rc genhtml_legend=1 00:11:54.278 --rc geninfo_all_blocks=1 00:11:54.278 --rc geninfo_unexecuted_blocks=1 00:11:54.278 00:11:54.278 ' 00:11:54.278 11:42:24 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:11:54.278 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:54.278 --rc genhtml_branch_coverage=1 00:11:54.278 --rc genhtml_function_coverage=1 00:11:54.278 --rc genhtml_legend=1 00:11:54.278 --rc geninfo_all_blocks=1 00:11:54.278 --rc geninfo_unexecuted_blocks=1 00:11:54.278 00:11:54.278 ' 00:11:54.278 11:42:24 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:11:54.278 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:54.278 --rc genhtml_branch_coverage=1 00:11:54.278 --rc genhtml_function_coverage=1 00:11:54.278 --rc genhtml_legend=1 00:11:54.278 --rc geninfo_all_blocks=1 00:11:54.278 --rc geninfo_unexecuted_blocks=1 00:11:54.278 00:11:54.278 ' 00:11:54.278 11:42:24 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:11:54.278 11:42:24 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@7 -- # uname -s 00:11:54.278 
11:42:24 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:11:54.278 11:42:24 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:11:54.278 11:42:24 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:11:54.278 11:42:24 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:11:54.278 11:42:24 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:11:54.278 11:42:24 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:11:54.278 11:42:24 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:11:54.278 11:42:24 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:11:54.278 11:42:24 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:11:54.278 11:42:24 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:11:54.538 11:42:24 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:f820f793-c892-4aa4-a8a4-5ed3fda41d6c 00:11:54.538 11:42:24 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@18 -- # NVME_HOSTID=f820f793-c892-4aa4-a8a4-5ed3fda41d6c 00:11:54.538 11:42:24 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:11:54.538 11:42:24 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:11:54.538 11:42:24 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:11:54.538 11:42:24 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:11:54.538 11:42:24 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:11:54.538 11:42:24 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@15 -- # shopt -s extglob 00:11:54.538 11:42:24 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:11:54.538 11:42:24 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:11:54.538 11:42:24 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:11:54.538 11:42:24 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:54.538 11:42:24 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:54.539 11:42:24 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:54.539 11:42:24 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- paths/export.sh@5 -- # export PATH 00:11:54.539 11:42:24 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:54.539 11:42:24 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@51 -- # : 0 00:11:54.539 11:42:24 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:11:54.539 11:42:24 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:11:54.539 11:42:24 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:11:54.539 11:42:24 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:11:54.539 11:42:24 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:11:54.539 11:42:24 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:11:54.539 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:11:54.539 11:42:24 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:11:54.539 11:42:24 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:11:54.539 11:42:24 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@55 -- # have_pci_nics=0 00:11:54.539 11:42:24 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@11 -- # MALLOC_BDEV_SIZE=64 00:11:54.539 11:42:24 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:11:54.539 11:42:24 
nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@14 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:11:54.539 11:42:24 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@16 -- # nvmftestinit 00:11:54.539 11:42:24 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:11:54.539 11:42:24 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:11:54.539 11:42:24 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@476 -- # prepare_net_devs 00:11:54.539 11:42:24 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@438 -- # local -g is_hw=no 00:11:54.539 11:42:24 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@440 -- # remove_spdk_ns 00:11:54.539 11:42:24 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:11:54.539 11:42:24 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:11:54.539 11:42:24 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:11:54.539 11:42:24 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@442 -- # [[ virt != virt ]] 00:11:54.539 11:42:24 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@444 -- # [[ no == yes ]] 00:11:54.539 11:42:24 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@451 -- # [[ virt == phy ]] 00:11:54.539 11:42:24 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@454 -- # [[ virt == phy-fallback ]] 00:11:54.539 11:42:24 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@459 -- # [[ tcp == tcp ]] 00:11:54.539 11:42:24 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@460 -- # nvmf_veth_init 00:11:54.539 11:42:24 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@145 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:11:54.539 11:42:24 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@146 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.2 00:11:54.539 11:42:24 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@147 -- # NVMF_FIRST_TARGET_IP=10.0.0.3 00:11:54.539 11:42:24 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@148 -- # NVMF_SECOND_TARGET_IP=10.0.0.4 00:11:54.539 11:42:24 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@149 -- # NVMF_INITIATOR_IP=10.0.0.1 00:11:54.539 11:42:24 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@150 -- # NVMF_BRIDGE=nvmf_br 00:11:54.539 11:42:24 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@151 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:11:54.539 11:42:24 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@152 -- # NVMF_INITIATOR_INTERFACE2=nvmf_init_if2 00:11:54.539 11:42:24 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@153 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:11:54.539 11:42:24 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@154 -- # NVMF_INITIATOR_BRIDGE2=nvmf_init_br2 00:11:54.539 11:42:24 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@155 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:11:54.539 11:42:24 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@156 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:11:54.539 11:42:24 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@157 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:11:54.539 11:42:24 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@158 -- # 
NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:11:54.539 11:42:24 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@159 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:11:54.539 11:42:24 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@160 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:11:54.539 11:42:24 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@162 -- # ip link set nvmf_init_br nomaster 00:11:54.539 Cannot find device "nvmf_init_br" 00:11:54.539 11:42:24 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@162 -- # true 00:11:54.539 11:42:24 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@163 -- # ip link set nvmf_init_br2 nomaster 00:11:54.539 Cannot find device "nvmf_init_br2" 00:11:54.539 11:42:24 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@163 -- # true 00:11:54.539 11:42:24 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@164 -- # ip link set nvmf_tgt_br nomaster 00:11:54.539 Cannot find device "nvmf_tgt_br" 00:11:54.539 11:42:24 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@164 -- # true 00:11:54.539 11:42:24 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@165 -- # ip link set nvmf_tgt_br2 nomaster 00:11:54.539 Cannot find device "nvmf_tgt_br2" 00:11:54.539 11:42:24 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@165 -- # true 00:11:54.539 11:42:24 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@166 -- # ip link set nvmf_init_br down 00:11:54.539 Cannot find device "nvmf_init_br" 00:11:54.539 11:42:24 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@166 -- # true 00:11:54.539 11:42:24 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@167 -- # ip link set nvmf_init_br2 down 00:11:54.539 Cannot find device "nvmf_init_br2" 00:11:54.539 11:42:24 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@167 -- # true 00:11:54.539 11:42:24 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@168 -- # ip link set nvmf_tgt_br down 00:11:54.539 Cannot find device "nvmf_tgt_br" 00:11:54.539 11:42:24 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@168 -- # true 00:11:54.539 11:42:24 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@169 -- # ip link set nvmf_tgt_br2 down 00:11:54.539 Cannot find device "nvmf_tgt_br2" 00:11:54.539 11:42:24 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@169 -- # true 00:11:54.539 11:42:24 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@170 -- # ip link delete nvmf_br type bridge 00:11:54.539 Cannot find device "nvmf_br" 00:11:54.539 11:42:24 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@170 -- # true 00:11:54.539 11:42:24 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@171 -- # ip link delete nvmf_init_if 00:11:54.539 Cannot find device "nvmf_init_if" 00:11:54.539 11:42:24 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@171 -- # true 00:11:54.539 11:42:24 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@172 -- # ip link delete nvmf_init_if2 00:11:54.539 Cannot find device "nvmf_init_if2" 00:11:54.539 11:42:24 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@172 -- # true 00:11:54.539 11:42:24 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@173 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:11:54.539 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:11:54.539 11:42:24 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@173 -- # true 00:11:54.539 
11:42:24 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@174 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:11:54.539 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:11:54.539 11:42:24 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@174 -- # true 00:11:54.539 11:42:24 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@177 -- # ip netns add nvmf_tgt_ns_spdk 00:11:54.539 11:42:24 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@180 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:11:54.539 11:42:24 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@181 -- # ip link add nvmf_init_if2 type veth peer name nvmf_init_br2 00:11:54.539 11:42:24 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@182 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:11:54.539 11:42:24 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@183 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:11:54.539 11:42:24 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:11:54.539 11:42:24 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@187 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:11:54.799 11:42:24 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@190 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:11:54.799 11:42:24 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@191 -- # ip addr add 10.0.0.2/24 dev nvmf_init_if2 00:11:54.799 11:42:24 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@192 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if 00:11:54.799 11:42:24 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@193 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2 00:11:54.799 11:42:24 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@196 -- # ip link set nvmf_init_if up 00:11:54.799 11:42:24 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@197 -- # ip link set nvmf_init_if2 up 00:11:54.799 11:42:24 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@198 -- # ip link set nvmf_init_br up 00:11:54.800 11:42:24 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@199 -- # ip link set nvmf_init_br2 up 00:11:54.800 11:42:24 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@200 -- # ip link set nvmf_tgt_br up 00:11:54.800 11:42:24 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@201 -- # ip link set nvmf_tgt_br2 up 00:11:54.800 11:42:24 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@202 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:11:54.800 11:42:24 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@203 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:11:54.800 11:42:24 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@204 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:11:54.800 11:42:24 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@207 -- # ip link add nvmf_br type bridge 00:11:54.800 11:42:24 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@208 -- # ip link set nvmf_br up 00:11:54.800 11:42:24 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@211 -- # ip link set nvmf_init_br master nvmf_br 00:11:54.800 11:42:24 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@212 -- # ip link set nvmf_init_br2 master 
nvmf_br 00:11:54.800 11:42:24 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@213 -- # ip link set nvmf_tgt_br master nvmf_br 00:11:54.800 11:42:24 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@214 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:11:54.800 11:42:24 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@217 -- # ipts -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:11:54.800 11:42:24 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT' 00:11:54.800 11:42:24 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@218 -- # ipts -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT 00:11:54.800 11:42:24 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT' 00:11:54.800 11:42:24 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@219 -- # ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:11:54.800 11:42:24 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@790 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT' 00:11:54.800 11:42:24 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@222 -- # ping -c 1 10.0.0.3 00:11:54.800 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:11:54.800 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.086 ms 00:11:54.800 00:11:54.800 --- 10.0.0.3 ping statistics --- 00:11:54.800 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:11:54.800 rtt min/avg/max/mdev = 0.086/0.086/0.086/0.000 ms 00:11:54.800 11:42:24 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@223 -- # ping -c 1 10.0.0.4 00:11:54.800 PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data. 00:11:54.800 64 bytes from 10.0.0.4: icmp_seq=1 ttl=64 time=0.062 ms 00:11:54.800 00:11:54.800 --- 10.0.0.4 ping statistics --- 00:11:54.800 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:11:54.800 rtt min/avg/max/mdev = 0.062/0.062/0.062/0.000 ms 00:11:54.800 11:42:24 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@224 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:11:54.800 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:11:54.800 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.021 ms 00:11:54.800 00:11:54.800 --- 10.0.0.1 ping statistics --- 00:11:54.800 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:11:54.800 rtt min/avg/max/mdev = 0.021/0.021/0.021/0.000 ms 00:11:54.800 11:42:24 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@225 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.2 00:11:54.800 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:11:54.800 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.038 ms 00:11:54.800 00:11:54.800 --- 10.0.0.2 ping statistics --- 00:11:54.800 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:11:54.800 rtt min/avg/max/mdev = 0.038/0.038/0.038/0.000 ms 00:11:54.800 11:42:24 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@227 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:11:54.800 11:42:24 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@461 -- # return 0 00:11:54.800 11:42:24 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:11:54.800 11:42:24 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:11:54.800 11:42:24 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:11:54.800 11:42:24 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:11:54.800 11:42:24 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:11:54.800 11:42:24 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:11:54.800 11:42:24 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:11:54.800 11:42:24 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@17 -- # nvmfappstart -m 0xF 00:11:54.800 11:42:24 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:11:54.800 11:42:24 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@726 -- # xtrace_disable 00:11:54.800 11:42:24 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:11:54.800 11:42:24 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@509 -- # nvmfpid=80268 00:11:54.800 11:42:24 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@508 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:11:54.800 11:42:24 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@510 -- # waitforlisten 80268 00:11:54.800 11:42:24 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@835 -- # '[' -z 80268 ']' 00:11:54.800 11:42:24 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:11:54.800 11:42:24 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@840 -- # local max_retries=100 00:11:54.800 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:11:54.800 11:42:24 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:11:54.800 11:42:24 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@844 -- # xtrace_disable 00:11:54.800 11:42:24 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:11:54.800 [2024-11-28 11:42:24.910599] Starting SPDK v25.01-pre git sha1 35cd3e84d / DPDK 24.11.0-rc4 initialization... 
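[Editor's note] With networking verified, the target is launched inside the namespace and everything else is driven over its JSON-RPC socket. Condensed from the xtrace records that follow, the bring-up and provisioning sequence amounts to the sketch below. The socket wait loop is a simplified stand-in for the harness's waitforlisten helper, the run's generated --hostnqn/--hostid values are omitted from the connect call, and the namespace-add order is compressed, so treat the block as illustrative rather than the literal test code.

  # Sketch only: start nvmf_tgt in the namespace and provision it over RPC.
  ip netns exec nvmf_tgt_ns_spdk \
      /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF &

  # Simplified stand-in for waitforlisten: wait for the RPC socket to appear.
  until [ -S /var/tmp/spdk.sock ]; do sleep 0.1; done

  rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py

  # TCP transport plus the backing bdevs used as namespaces (Malloc0..Malloc6).
  $rpc nvmf_create_transport -t tcp -o -u 8192
  for _ in $(seq 7); do $rpc bdev_malloc_create 64 512; done
  $rpc bdev_raid_create -n raid0   -z 64 -r 0      -b 'Malloc2 Malloc3'
  $rpc bdev_raid_create -n concat0 -z 64 -r concat -b 'Malloc4 Malloc5 Malloc6'

  # One subsystem with four namespaces, listening on the namespaced address.
  $rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME
  for bdev in Malloc0 Malloc1 raid0 concat0; do
      $rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 "$bdev"
  done
  $rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420

  # Initiator side: connect and wait until all four namespaces show up as block devices.
  nvme connect -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.3 -s 4420
  until [ "$(lsblk -l -o NAME,SERIAL | grep -c SPDKISFASTANDAWESOME)" -eq 4 ]; do sleep 1; done

The resulting /dev/nvme0n1..n4 devices are what the fio-wrapper jobs later in this log exercise.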
00:11:54.800 [2024-11-28 11:42:24.911288] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:11:55.063 [2024-11-28 11:42:25.039823] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.11.0-rc4 is used. There is no support for it in SPDK. Enabled only for validation. 00:11:55.063 [2024-11-28 11:42:25.066613] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:11:55.063 [2024-11-28 11:42:25.113695] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:11:55.063 [2024-11-28 11:42:25.113777] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:11:55.063 [2024-11-28 11:42:25.113803] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:11:55.063 [2024-11-28 11:42:25.113811] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:11:55.063 [2024-11-28 11:42:25.113817] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:11:55.063 [2024-11-28 11:42:25.115046] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:11:55.063 [2024-11-28 11:42:25.115170] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:11:55.063 [2024-11-28 11:42:25.115243] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:11:55.063 [2024-11-28 11:42:25.115244] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:11:55.064 [2024-11-28 11:42:25.171549] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:11:55.323 11:42:25 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:11:55.323 11:42:25 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@868 -- # return 0 00:11:55.323 11:42:25 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:11:55.323 11:42:25 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@732 -- # xtrace_disable 00:11:55.323 11:42:25 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:11:55.323 11:42:25 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:11:55.323 11:42:25 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@19 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:11:55.581 [2024-11-28 11:42:25.520515] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:11:55.581 11:42:25 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@21 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:11:55.839 11:42:25 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@21 -- # malloc_bdevs='Malloc0 ' 00:11:55.839 11:42:25 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@22 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:11:56.406 11:42:26 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@22 -- # malloc_bdevs+=Malloc1 00:11:56.406 11:42:26 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@24 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:11:56.665 11:42:26 
nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@24 -- # raid_malloc_bdevs='Malloc2 ' 00:11:56.665 11:42:26 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@25 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:11:56.923 11:42:26 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@25 -- # raid_malloc_bdevs+=Malloc3 00:11:56.923 11:42:26 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@26 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_raid_create -n raid0 -z 64 -r 0 -b 'Malloc2 Malloc3' 00:11:57.193 11:42:27 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@29 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:11:57.798 11:42:27 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@29 -- # concat_malloc_bdevs='Malloc4 ' 00:11:57.798 11:42:27 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@30 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:11:58.056 11:42:28 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@30 -- # concat_malloc_bdevs+='Malloc5 ' 00:11:58.056 11:42:28 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:11:58.315 11:42:28 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@31 -- # concat_malloc_bdevs+=Malloc6 00:11:58.315 11:42:28 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@32 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_raid_create -n concat0 -r concat -z 64 -b 'Malloc4 Malloc5 Malloc6' 00:11:58.573 11:42:28 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:11:58.831 11:42:28 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@35 -- # for malloc_bdev in $malloc_bdevs 00:11:58.831 11:42:28 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@36 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:11:59.090 11:42:29 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@35 -- # for malloc_bdev in $malloc_bdevs 00:11:59.090 11:42:29 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@36 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:11:59.348 11:42:29 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@38 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 00:11:59.606 [2024-11-28 11:42:29.674124] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:11:59.606 11:42:29 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@41 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 raid0 00:11:59.865 11:42:29 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@44 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 concat0 00:12:00.123 11:42:30 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@46 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:f820f793-c892-4aa4-a8a4-5ed3fda41d6c --hostid=f820f793-c892-4aa4-a8a4-5ed3fda41d6c -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.3 -s 4420 00:12:00.382 11:42:30 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@48 -- # waitforserial 
SPDKISFASTANDAWESOME 4 00:12:00.382 11:42:30 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1202 -- # local i=0 00:12:00.382 11:42:30 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0 00:12:00.382 11:42:30 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1204 -- # [[ -n 4 ]] 00:12:00.382 11:42:30 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1205 -- # nvme_device_counter=4 00:12:00.382 11:42:30 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1209 -- # sleep 2 00:12:02.284 11:42:32 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 00:12:02.284 11:42:32 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL 00:12:02.284 11:42:32 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1211 -- # grep -c SPDKISFASTANDAWESOME 00:12:02.284 11:42:32 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1211 -- # nvme_devices=4 00:12:02.284 11:42:32 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter )) 00:12:02.284 11:42:32 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1212 -- # return 0 00:12:02.284 11:42:32 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t write -r 1 -v 00:12:02.284 [global] 00:12:02.284 thread=1 00:12:02.284 invalidate=1 00:12:02.284 rw=write 00:12:02.284 time_based=1 00:12:02.284 runtime=1 00:12:02.284 ioengine=libaio 00:12:02.284 direct=1 00:12:02.284 bs=4096 00:12:02.284 iodepth=1 00:12:02.285 norandommap=0 00:12:02.285 numjobs=1 00:12:02.285 00:12:02.285 verify_dump=1 00:12:02.285 verify_backlog=512 00:12:02.285 verify_state_save=0 00:12:02.285 do_verify=1 00:12:02.285 verify=crc32c-intel 00:12:02.285 [job0] 00:12:02.285 filename=/dev/nvme0n1 00:12:02.285 [job1] 00:12:02.285 filename=/dev/nvme0n2 00:12:02.285 [job2] 00:12:02.285 filename=/dev/nvme0n3 00:12:02.285 [job3] 00:12:02.285 filename=/dev/nvme0n4 00:12:02.543 Could not set queue depth (nvme0n1) 00:12:02.543 Could not set queue depth (nvme0n2) 00:12:02.543 Could not set queue depth (nvme0n3) 00:12:02.543 Could not set queue depth (nvme0n4) 00:12:02.543 job0: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:12:02.543 job1: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:12:02.543 job2: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:12:02.543 job3: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:12:02.543 fio-3.35 00:12:02.543 Starting 4 threads 00:12:03.918 00:12:03.918 job0: (groupid=0, jobs=1): err= 0: pid=80456: Thu Nov 28 11:42:33 2024 00:12:03.918 read: IOPS=1230, BW=4923KiB/s (5041kB/s)(4928KiB/1001msec) 00:12:03.918 slat (usec): min=11, max=100, avg=24.39, stdev= 7.97 00:12:03.918 clat (usec): min=219, max=2154, avg=405.02, stdev=115.57 00:12:03.918 lat (usec): min=241, max=2173, avg=429.41, stdev=119.38 00:12:03.918 clat percentiles (usec): 00:12:03.918 | 1.00th=[ 285], 5.00th=[ 314], 10.00th=[ 326], 20.00th=[ 338], 00:12:03.918 | 30.00th=[ 351], 40.00th=[ 363], 50.00th=[ 371], 60.00th=[ 383], 00:12:03.918 | 70.00th=[ 396], 
80.00th=[ 429], 90.00th=[ 635], 95.00th=[ 676], 00:12:03.918 | 99.00th=[ 725], 99.50th=[ 734], 99.90th=[ 848], 99.95th=[ 2147], 00:12:03.918 | 99.99th=[ 2147] 00:12:03.918 write: IOPS=1534, BW=6138KiB/s (6285kB/s)(6144KiB/1001msec); 0 zone resets 00:12:03.918 slat (usec): min=20, max=100, avg=35.13, stdev= 7.59 00:12:03.918 clat (usec): min=119, max=500, avg=265.54, stdev=63.62 00:12:03.918 lat (usec): min=154, max=594, avg=300.66, stdev=65.40 00:12:03.918 clat percentiles (usec): 00:12:03.918 | 1.00th=[ 151], 5.00th=[ 165], 10.00th=[ 176], 20.00th=[ 202], 00:12:03.918 | 30.00th=[ 237], 40.00th=[ 255], 50.00th=[ 269], 60.00th=[ 281], 00:12:03.918 | 70.00th=[ 293], 80.00th=[ 314], 90.00th=[ 359], 95.00th=[ 375], 00:12:03.918 | 99.00th=[ 416], 99.50th=[ 437], 99.90th=[ 494], 99.95th=[ 502], 00:12:03.918 | 99.99th=[ 502] 00:12:03.918 bw ( KiB/s): min= 7656, max= 7656, per=31.61%, avg=7656.00, stdev= 0.00, samples=1 00:12:03.918 iops : min= 1914, max= 1914, avg=1914.00, stdev= 0.00, samples=1 00:12:03.918 lat (usec) : 250=20.85%, 500=73.59%, 750=5.42%, 1000=0.11% 00:12:03.918 lat (msec) : 4=0.04% 00:12:03.918 cpu : usr=2.00%, sys=6.90%, ctx=2769, majf=0, minf=5 00:12:03.918 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:12:03.918 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:12:03.918 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:12:03.918 issued rwts: total=1232,1536,0,0 short=0,0,0,0 dropped=0,0,0,0 00:12:03.918 latency : target=0, window=0, percentile=100.00%, depth=1 00:12:03.918 job1: (groupid=0, jobs=1): err= 0: pid=80457: Thu Nov 28 11:42:33 2024 00:12:03.918 read: IOPS=1128, BW=4515KiB/s (4624kB/s)(4520KiB/1001msec) 00:12:03.918 slat (nsec): min=18236, max=81017, avg=31347.71, stdev=10841.66 00:12:03.918 clat (usec): min=242, max=1050, avg=427.95, stdev=99.11 00:12:03.918 lat (usec): min=267, max=1102, avg=459.30, stdev=105.03 00:12:03.918 clat percentiles (usec): 00:12:03.918 | 1.00th=[ 306], 5.00th=[ 334], 10.00th=[ 347], 20.00th=[ 359], 00:12:03.918 | 30.00th=[ 371], 40.00th=[ 383], 50.00th=[ 396], 60.00th=[ 412], 00:12:03.918 | 70.00th=[ 429], 80.00th=[ 465], 90.00th=[ 619], 95.00th=[ 652], 00:12:03.918 | 99.00th=[ 709], 99.50th=[ 742], 99.90th=[ 832], 99.95th=[ 1057], 00:12:03.918 | 99.99th=[ 1057] 00:12:03.918 write: IOPS=1534, BW=6138KiB/s (6285kB/s)(6144KiB/1001msec); 0 zone resets 00:12:03.918 slat (nsec): min=26047, max=90803, avg=37098.99, stdev=8113.88 00:12:03.918 clat (usec): min=135, max=952, avg=269.88, stdev=72.26 00:12:03.919 lat (usec): min=172, max=986, avg=306.98, stdev=74.20 00:12:03.919 clat percentiles (usec): 00:12:03.919 | 1.00th=[ 163], 5.00th=[ 178], 10.00th=[ 188], 20.00th=[ 204], 00:12:03.919 | 30.00th=[ 227], 40.00th=[ 247], 50.00th=[ 262], 60.00th=[ 277], 00:12:03.919 | 70.00th=[ 293], 80.00th=[ 326], 90.00th=[ 375], 95.00th=[ 396], 00:12:03.919 | 99.00th=[ 445], 99.50th=[ 502], 99.90th=[ 758], 99.95th=[ 955], 00:12:03.919 | 99.99th=[ 955] 00:12:03.919 bw ( KiB/s): min= 6864, max= 6864, per=28.34%, avg=6864.00, stdev= 0.00, samples=1 00:12:03.919 iops : min= 1716, max= 1716, avg=1716.00, stdev= 0.00, samples=1 00:12:03.919 lat (usec) : 250=24.57%, 500=68.08%, 750=7.16%, 1000=0.15% 00:12:03.919 lat (msec) : 2=0.04% 00:12:03.919 cpu : usr=1.80%, sys=7.60%, ctx=2666, majf=0, minf=15 00:12:03.919 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:12:03.919 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:12:03.919 
complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:12:03.919 issued rwts: total=1130,1536,0,0 short=0,0,0,0 dropped=0,0,0,0 00:12:03.919 latency : target=0, window=0, percentile=100.00%, depth=1 00:12:03.919 job2: (groupid=0, jobs=1): err= 0: pid=80458: Thu Nov 28 11:42:33 2024 00:12:03.919 read: IOPS=1168, BW=4675KiB/s (4788kB/s)(4680KiB/1001msec) 00:12:03.919 slat (nsec): min=14276, max=55748, avg=23842.50, stdev=6045.33 00:12:03.919 clat (usec): min=205, max=652, avg=378.09, stdev=60.98 00:12:03.919 lat (usec): min=221, max=684, avg=401.93, stdev=62.49 00:12:03.919 clat percentiles (usec): 00:12:03.919 | 1.00th=[ 253], 5.00th=[ 310], 10.00th=[ 326], 20.00th=[ 338], 00:12:03.919 | 30.00th=[ 347], 40.00th=[ 359], 50.00th=[ 367], 60.00th=[ 375], 00:12:03.919 | 70.00th=[ 388], 80.00th=[ 404], 90.00th=[ 453], 95.00th=[ 506], 00:12:03.919 | 99.00th=[ 594], 99.50th=[ 611], 99.90th=[ 652], 99.95th=[ 652], 00:12:03.919 | 99.99th=[ 652] 00:12:03.919 write: IOPS=1534, BW=6138KiB/s (6285kB/s)(6144KiB/1001msec); 0 zone resets 00:12:03.919 slat (usec): min=21, max=106, avg=37.85, stdev=10.13 00:12:03.919 clat (usec): min=135, max=986, avg=301.57, stdev=91.41 00:12:03.919 lat (usec): min=161, max=1052, avg=339.42, stdev=97.76 00:12:03.919 clat percentiles (usec): 00:12:03.919 | 1.00th=[ 161], 5.00th=[ 178], 10.00th=[ 190], 20.00th=[ 227], 00:12:03.919 | 30.00th=[ 258], 40.00th=[ 273], 50.00th=[ 285], 60.00th=[ 302], 00:12:03.919 | 70.00th=[ 326], 80.00th=[ 371], 90.00th=[ 449], 95.00th=[ 474], 00:12:03.919 | 99.00th=[ 537], 99.50th=[ 553], 99.90th=[ 734], 99.95th=[ 988], 00:12:03.919 | 99.99th=[ 988] 00:12:03.919 bw ( KiB/s): min= 6176, max= 6176, per=25.50%, avg=6176.00, stdev= 0.00, samples=1 00:12:03.919 iops : min= 1544, max= 1544, avg=1544.00, stdev= 0.00, samples=1 00:12:03.919 lat (usec) : 250=15.48%, 500=80.56%, 750=3.92%, 1000=0.04% 00:12:03.919 cpu : usr=2.40%, sys=6.40%, ctx=2716, majf=0, minf=7 00:12:03.919 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:12:03.919 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:12:03.919 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:12:03.919 issued rwts: total=1170,1536,0,0 short=0,0,0,0 dropped=0,0,0,0 00:12:03.919 latency : target=0, window=0, percentile=100.00%, depth=1 00:12:03.919 job3: (groupid=0, jobs=1): err= 0: pid=80459: Thu Nov 28 11:42:33 2024 00:12:03.919 read: IOPS=1022, BW=4092KiB/s (4190kB/s)(4096KiB/1001msec) 00:12:03.919 slat (nsec): min=18778, max=88291, avg=31515.90, stdev=11238.18 00:12:03.919 clat (usec): min=265, max=1229, avg=419.93, stdev=78.34 00:12:03.919 lat (usec): min=297, max=1317, avg=451.44, stdev=84.69 00:12:03.919 clat percentiles (usec): 00:12:03.919 | 1.00th=[ 306], 5.00th=[ 338], 10.00th=[ 351], 20.00th=[ 367], 00:12:03.919 | 30.00th=[ 375], 40.00th=[ 388], 50.00th=[ 400], 60.00th=[ 416], 00:12:03.919 | 70.00th=[ 433], 80.00th=[ 469], 90.00th=[ 523], 95.00th=[ 562], 00:12:03.919 | 99.00th=[ 685], 99.50th=[ 709], 99.90th=[ 758], 99.95th=[ 1237], 00:12:03.919 | 99.99th=[ 1237] 00:12:03.919 write: IOPS=1452, BW=5810KiB/s (5950kB/s)(5816KiB/1001msec); 0 zone resets 00:12:03.919 slat (usec): min=23, max=260, avg=40.98, stdev=13.24 00:12:03.919 clat (usec): min=127, max=1442, avg=322.60, stdev=116.69 00:12:03.919 lat (usec): min=157, max=1511, avg=363.58, stdev=124.55 00:12:03.919 clat percentiles (usec): 00:12:03.919 | 1.00th=[ 151], 5.00th=[ 169], 10.00th=[ 180], 20.00th=[ 206], 00:12:03.919 | 30.00th=[ 255], 
40.00th=[ 285], 50.00th=[ 306], 60.00th=[ 338], 00:12:03.919 | 70.00th=[ 379], 80.00th=[ 441], 90.00th=[ 486], 95.00th=[ 510], 00:12:03.919 | 99.00th=[ 570], 99.50th=[ 586], 99.90th=[ 1172], 99.95th=[ 1450], 00:12:03.919 | 99.99th=[ 1450] 00:12:03.919 bw ( KiB/s): min= 5160, max= 5160, per=21.30%, avg=5160.00, stdev= 0.00, samples=1 00:12:03.919 iops : min= 1290, max= 1290, avg=1290.00, stdev= 0.00, samples=1 00:12:03.919 lat (usec) : 250=16.99%, 500=73.28%, 750=9.56%, 1000=0.04% 00:12:03.919 lat (msec) : 2=0.12% 00:12:03.919 cpu : usr=2.50%, sys=6.80%, ctx=2478, majf=0, minf=9 00:12:03.919 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:12:03.919 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:12:03.919 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:12:03.919 issued rwts: total=1024,1454,0,0 short=0,0,0,0 dropped=0,0,0,0 00:12:03.919 latency : target=0, window=0, percentile=100.00%, depth=1 00:12:03.919 00:12:03.919 Run status group 0 (all jobs): 00:12:03.919 READ: bw=17.8MiB/s (18.6MB/s), 4092KiB/s-4923KiB/s (4190kB/s-5041kB/s), io=17.8MiB (18.7MB), run=1001-1001msec 00:12:03.919 WRITE: bw=23.7MiB/s (24.8MB/s), 5810KiB/s-6138KiB/s (5950kB/s-6285kB/s), io=23.7MiB (24.8MB), run=1001-1001msec 00:12:03.919 00:12:03.919 Disk stats (read/write): 00:12:03.919 nvme0n1: ios=1074/1412, merge=0/0, ticks=422/395, in_queue=817, util=88.28% 00:12:03.919 nvme0n2: ios=1059/1257, merge=0/0, ticks=451/355, in_queue=806, util=88.21% 00:12:03.919 nvme0n3: ios=1024/1248, merge=0/0, ticks=392/418, in_queue=810, util=89.11% 00:12:03.919 nvme0n4: ios=1018/1024, merge=0/0, ticks=441/363, in_queue=804, util=89.76% 00:12:03.919 11:42:33 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@51 -- # /home/vagrant/spdk_repo/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t randwrite -r 1 -v 00:12:03.919 [global] 00:12:03.919 thread=1 00:12:03.919 invalidate=1 00:12:03.919 rw=randwrite 00:12:03.919 time_based=1 00:12:03.919 runtime=1 00:12:03.919 ioengine=libaio 00:12:03.919 direct=1 00:12:03.919 bs=4096 00:12:03.919 iodepth=1 00:12:03.919 norandommap=0 00:12:03.919 numjobs=1 00:12:03.919 00:12:03.919 verify_dump=1 00:12:03.919 verify_backlog=512 00:12:03.919 verify_state_save=0 00:12:03.919 do_verify=1 00:12:03.919 verify=crc32c-intel 00:12:03.919 [job0] 00:12:03.919 filename=/dev/nvme0n1 00:12:03.919 [job1] 00:12:03.919 filename=/dev/nvme0n2 00:12:03.919 [job2] 00:12:03.919 filename=/dev/nvme0n3 00:12:03.919 [job3] 00:12:03.919 filename=/dev/nvme0n4 00:12:03.919 Could not set queue depth (nvme0n1) 00:12:03.919 Could not set queue depth (nvme0n2) 00:12:03.919 Could not set queue depth (nvme0n3) 00:12:03.919 Could not set queue depth (nvme0n4) 00:12:03.919 job0: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:12:03.919 job1: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:12:03.919 job2: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:12:03.919 job3: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:12:03.919 fio-3.35 00:12:03.919 Starting 4 threads 00:12:05.302 00:12:05.302 job0: (groupid=0, jobs=1): err= 0: pid=80512: Thu Nov 28 11:42:35 2024 00:12:05.302 read: IOPS=1260, BW=5043KiB/s (5164kB/s)(5048KiB/1001msec) 00:12:05.302 slat (nsec): min=16389, max=68920, avg=26299.61, stdev=7769.02 00:12:05.302 clat 
(usec): min=226, max=1174, avg=425.95, stdev=124.78 00:12:05.302 lat (usec): min=249, max=1193, avg=452.25, stdev=127.83 00:12:05.302 clat percentiles (usec): 00:12:05.302 | 1.00th=[ 269], 5.00th=[ 302], 10.00th=[ 318], 20.00th=[ 338], 00:12:05.302 | 30.00th=[ 351], 40.00th=[ 367], 50.00th=[ 383], 60.00th=[ 400], 00:12:05.302 | 70.00th=[ 433], 80.00th=[ 498], 90.00th=[ 660], 95.00th=[ 693], 00:12:05.302 | 99.00th=[ 742], 99.50th=[ 758], 99.90th=[ 1090], 99.95th=[ 1172], 00:12:05.302 | 99.99th=[ 1172] 00:12:05.302 write: IOPS=1534, BW=6138KiB/s (6285kB/s)(6144KiB/1001msec); 0 zone resets 00:12:05.302 slat (usec): min=21, max=166, avg=33.33, stdev= 7.25 00:12:05.302 clat (usec): min=131, max=439, avg=240.26, stdev=54.36 00:12:05.302 lat (usec): min=162, max=507, avg=273.58, stdev=55.39 00:12:05.302 clat percentiles (usec): 00:12:05.302 | 1.00th=[ 151], 5.00th=[ 165], 10.00th=[ 178], 20.00th=[ 192], 00:12:05.302 | 30.00th=[ 206], 40.00th=[ 221], 50.00th=[ 235], 60.00th=[ 249], 00:12:05.302 | 70.00th=[ 265], 80.00th=[ 285], 90.00th=[ 310], 95.00th=[ 351], 00:12:05.302 | 99.00th=[ 392], 99.50th=[ 404], 99.90th=[ 429], 99.95th=[ 441], 00:12:05.302 | 99.99th=[ 441] 00:12:05.302 bw ( KiB/s): min= 8192, max= 8192, per=31.99%, avg=8192.00, stdev= 0.00, samples=1 00:12:05.302 iops : min= 2048, max= 2048, avg=2048.00, stdev= 0.00, samples=1 00:12:05.302 lat (usec) : 250=33.70%, 500=57.40%, 750=8.58%, 1000=0.25% 00:12:05.302 lat (msec) : 2=0.07% 00:12:05.302 cpu : usr=1.60%, sys=7.20%, ctx=2798, majf=0, minf=9 00:12:05.302 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:12:05.302 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:12:05.302 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:12:05.302 issued rwts: total=1262,1536,0,0 short=0,0,0,0 dropped=0,0,0,0 00:12:05.302 latency : target=0, window=0, percentile=100.00%, depth=1 00:12:05.302 job1: (groupid=0, jobs=1): err= 0: pid=80513: Thu Nov 28 11:42:35 2024 00:12:05.302 read: IOPS=1534, BW=6138KiB/s (6285kB/s)(6144KiB/1001msec) 00:12:05.302 slat (nsec): min=10976, max=49231, avg=16054.73, stdev=4967.05 00:12:05.302 clat (usec): min=142, max=1846, avg=324.34, stdev=88.79 00:12:05.302 lat (usec): min=154, max=1860, avg=340.39, stdev=90.43 00:12:05.302 clat percentiles (usec): 00:12:05.302 | 1.00th=[ 188], 5.00th=[ 212], 10.00th=[ 225], 20.00th=[ 247], 00:12:05.302 | 30.00th=[ 277], 40.00th=[ 310], 50.00th=[ 330], 60.00th=[ 343], 00:12:05.302 | 70.00th=[ 363], 80.00th=[ 379], 90.00th=[ 404], 95.00th=[ 465], 00:12:05.302 | 99.00th=[ 519], 99.50th=[ 529], 99.90th=[ 1582], 99.95th=[ 1844], 00:12:05.302 | 99.99th=[ 1844] 00:12:05.302 write: IOPS=1808, BW=7233KiB/s (7406kB/s)(7240KiB/1001msec); 0 zone resets 00:12:05.302 slat (usec): min=11, max=938, avg=23.32, stdev=22.64 00:12:05.302 clat (usec): min=110, max=975, avg=236.31, stdev=72.93 00:12:05.302 lat (usec): min=128, max=1243, avg=259.63, stdev=78.63 00:12:05.302 clat percentiles (usec): 00:12:05.302 | 1.00th=[ 124], 5.00th=[ 139], 10.00th=[ 147], 20.00th=[ 167], 00:12:05.302 | 30.00th=[ 182], 40.00th=[ 208], 50.00th=[ 237], 60.00th=[ 258], 00:12:05.302 | 70.00th=[ 277], 80.00th=[ 297], 90.00th=[ 326], 95.00th=[ 351], 00:12:05.302 | 99.00th=[ 424], 99.50th=[ 445], 99.90th=[ 725], 99.95th=[ 979], 00:12:05.302 | 99.99th=[ 979] 00:12:05.302 bw ( KiB/s): min= 8192, max= 8192, per=31.99%, avg=8192.00, stdev= 0.00, samples=1 00:12:05.302 iops : min= 2048, max= 2048, avg=2048.00, stdev= 0.00, samples=1 00:12:05.302 lat (usec) : 
250=40.17%, 500=59.00%, 750=0.75%, 1000=0.03% 00:12:05.302 lat (msec) : 2=0.06% 00:12:05.302 cpu : usr=1.90%, sys=5.30%, ctx=3347, majf=0, minf=11 00:12:05.302 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:12:05.302 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:12:05.302 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:12:05.302 issued rwts: total=1536,1810,0,0 short=0,0,0,0 dropped=0,0,0,0 00:12:05.302 latency : target=0, window=0, percentile=100.00%, depth=1 00:12:05.302 job2: (groupid=0, jobs=1): err= 0: pid=80514: Thu Nov 28 11:42:35 2024 00:12:05.302 read: IOPS=1022, BW=4092KiB/s (4190kB/s)(4096KiB/1001msec) 00:12:05.302 slat (nsec): min=16200, max=68759, avg=27640.22, stdev=7156.80 00:12:05.302 clat (usec): min=167, max=1412, avg=446.33, stdev=100.96 00:12:05.302 lat (usec): min=202, max=1434, avg=473.97, stdev=103.45 00:12:05.302 clat percentiles (usec): 00:12:05.302 | 1.00th=[ 285], 5.00th=[ 334], 10.00th=[ 347], 20.00th=[ 367], 00:12:05.302 | 30.00th=[ 383], 40.00th=[ 400], 50.00th=[ 420], 60.00th=[ 457], 00:12:05.302 | 70.00th=[ 494], 80.00th=[ 523], 90.00th=[ 570], 95.00th=[ 635], 00:12:05.302 | 99.00th=[ 701], 99.50th=[ 758], 99.90th=[ 1123], 99.95th=[ 1418], 00:12:05.302 | 99.99th=[ 1418] 00:12:05.302 write: IOPS=1519, BW=6078KiB/s (6224kB/s)(6084KiB/1001msec); 0 zone resets 00:12:05.302 slat (nsec): min=22124, max=94866, avg=37533.99, stdev=11327.52 00:12:05.302 clat (usec): min=140, max=1016, avg=295.15, stdev=104.90 00:12:05.302 lat (usec): min=166, max=1057, avg=332.68, stdev=112.71 00:12:05.302 clat percentiles (usec): 00:12:05.302 | 1.00th=[ 157], 5.00th=[ 172], 10.00th=[ 182], 20.00th=[ 194], 00:12:05.302 | 30.00th=[ 219], 40.00th=[ 249], 50.00th=[ 277], 60.00th=[ 302], 00:12:05.302 | 70.00th=[ 338], 80.00th=[ 383], 90.00th=[ 457], 95.00th=[ 498], 00:12:05.302 | 99.00th=[ 545], 99.50th=[ 562], 99.90th=[ 881], 99.95th=[ 1020], 00:12:05.302 | 99.99th=[ 1020] 00:12:05.302 bw ( KiB/s): min= 4784, max= 4784, per=18.68%, avg=4784.00, stdev= 0.00, samples=1 00:12:05.302 iops : min= 1196, max= 1196, avg=1196.00, stdev= 0.00, samples=1 00:12:05.302 lat (usec) : 250=24.01%, 500=61.77%, 750=13.91%, 1000=0.20% 00:12:05.302 lat (msec) : 2=0.12% 00:12:05.302 cpu : usr=1.80%, sys=6.90%, ctx=2555, majf=0, minf=15 00:12:05.302 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:12:05.302 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:12:05.302 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:12:05.302 issued rwts: total=1024,1521,0,0 short=0,0,0,0 dropped=0,0,0,0 00:12:05.302 latency : target=0, window=0, percentile=100.00%, depth=1 00:12:05.302 job3: (groupid=0, jobs=1): err= 0: pid=80515: Thu Nov 28 11:42:35 2024 00:12:05.302 read: IOPS=1534, BW=6138KiB/s (6285kB/s)(6144KiB/1001msec) 00:12:05.302 slat (nsec): min=8434, max=51948, avg=17137.47, stdev=5015.44 00:12:05.302 clat (usec): min=184, max=6260, avg=349.56, stdev=267.11 00:12:05.302 lat (usec): min=202, max=6273, avg=366.70, stdev=267.49 00:12:05.302 clat percentiles (usec): 00:12:05.302 | 1.00th=[ 204], 5.00th=[ 225], 10.00th=[ 239], 20.00th=[ 262], 00:12:05.302 | 30.00th=[ 293], 40.00th=[ 318], 50.00th=[ 334], 60.00th=[ 351], 00:12:05.302 | 70.00th=[ 367], 80.00th=[ 388], 90.00th=[ 433], 95.00th=[ 474], 00:12:05.302 | 99.00th=[ 529], 99.50th=[ 1483], 99.90th=[ 5080], 99.95th=[ 6259], 00:12:05.302 | 99.99th=[ 6259] 00:12:05.302 write: IOPS=1539, BW=6158KiB/s 
(6306kB/s)(6164KiB/1001msec); 0 zone resets 00:12:05.302 slat (nsec): min=11155, max=88844, avg=24689.16, stdev=6060.32 00:12:05.302 clat (usec): min=128, max=1174, avg=254.21, stdev=72.70 00:12:05.302 lat (usec): min=150, max=1231, avg=278.90, stdev=73.24 00:12:05.302 clat percentiles (usec): 00:12:05.302 | 1.00th=[ 143], 5.00th=[ 161], 10.00th=[ 174], 20.00th=[ 196], 00:12:05.302 | 30.00th=[ 212], 40.00th=[ 233], 50.00th=[ 251], 60.00th=[ 269], 00:12:05.302 | 70.00th=[ 285], 80.00th=[ 306], 90.00th=[ 330], 95.00th=[ 355], 00:12:05.302 | 99.00th=[ 445], 99.50th=[ 469], 99.90th=[ 1090], 99.95th=[ 1172], 00:12:05.302 | 99.99th=[ 1172] 00:12:05.302 bw ( KiB/s): min= 8192, max= 8192, per=31.99%, avg=8192.00, stdev= 0.00, samples=1 00:12:05.302 iops : min= 2048, max= 2048, avg=2048.00, stdev= 0.00, samples=1 00:12:05.302 lat (usec) : 250=32.08%, 500=66.66%, 750=0.88%, 1000=0.06% 00:12:05.302 lat (msec) : 2=0.13%, 4=0.10%, 10=0.10% 00:12:05.302 cpu : usr=1.50%, sys=5.60%, ctx=3079, majf=0, minf=13 00:12:05.302 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:12:05.302 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:12:05.302 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:12:05.302 issued rwts: total=1536,1541,0,0 short=0,0,0,0 dropped=0,0,0,0 00:12:05.302 latency : target=0, window=0, percentile=100.00%, depth=1 00:12:05.302 00:12:05.302 Run status group 0 (all jobs): 00:12:05.302 READ: bw=20.9MiB/s (21.9MB/s), 4092KiB/s-6138KiB/s (4190kB/s-6285kB/s), io=20.9MiB (21.9MB), run=1001-1001msec 00:12:05.302 WRITE: bw=25.0MiB/s (26.2MB/s), 6078KiB/s-7233KiB/s (6224kB/s-7406kB/s), io=25.0MiB (26.2MB), run=1001-1001msec 00:12:05.302 00:12:05.302 Disk stats (read/write): 00:12:05.302 nvme0n1: ios=1081/1536, merge=0/0, ticks=446/395, in_queue=841, util=89.18% 00:12:05.302 nvme0n2: ios=1466/1536, merge=0/0, ticks=477/343, in_queue=820, util=88.70% 00:12:05.302 nvme0n3: ios=1039/1024, merge=0/0, ticks=486/340, in_queue=826, util=89.41% 00:12:05.302 nvme0n4: ios=1209/1536, merge=0/0, ticks=389/382, in_queue=771, util=88.83% 00:12:05.302 11:42:35 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@52 -- # /home/vagrant/spdk_repo/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 128 -t write -r 1 -v 00:12:05.302 [global] 00:12:05.302 thread=1 00:12:05.302 invalidate=1 00:12:05.302 rw=write 00:12:05.302 time_based=1 00:12:05.302 runtime=1 00:12:05.302 ioengine=libaio 00:12:05.302 direct=1 00:12:05.302 bs=4096 00:12:05.302 iodepth=128 00:12:05.302 norandommap=0 00:12:05.302 numjobs=1 00:12:05.302 00:12:05.302 verify_dump=1 00:12:05.302 verify_backlog=512 00:12:05.302 verify_state_save=0 00:12:05.302 do_verify=1 00:12:05.302 verify=crc32c-intel 00:12:05.302 [job0] 00:12:05.302 filename=/dev/nvme0n1 00:12:05.302 [job1] 00:12:05.302 filename=/dev/nvme0n2 00:12:05.302 [job2] 00:12:05.302 filename=/dev/nvme0n3 00:12:05.302 [job3] 00:12:05.302 filename=/dev/nvme0n4 00:12:05.302 Could not set queue depth (nvme0n1) 00:12:05.302 Could not set queue depth (nvme0n2) 00:12:05.302 Could not set queue depth (nvme0n3) 00:12:05.302 Could not set queue depth (nvme0n4) 00:12:05.303 job0: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:12:05.303 job1: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:12:05.303 job2: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:12:05.303 job3: (g=0): 
rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:12:05.303 fio-3.35 00:12:05.303 Starting 4 threads 00:12:06.677 00:12:06.677 job0: (groupid=0, jobs=1): err= 0: pid=80570: Thu Nov 28 11:42:36 2024 00:12:06.677 read: IOPS=3251, BW=12.7MiB/s (13.3MB/s)(12.8MiB/1004msec) 00:12:06.677 slat (usec): min=3, max=9513, avg=156.88, stdev=824.76 00:12:06.677 clat (usec): min=1329, max=38192, avg=20081.83, stdev=5615.49 00:12:06.677 lat (usec): min=4575, max=38211, avg=20238.72, stdev=5599.33 00:12:06.677 clat percentiles (usec): 00:12:06.677 | 1.00th=[ 5276], 5.00th=[13960], 10.00th=[15664], 20.00th=[16057], 00:12:06.677 | 30.00th=[16188], 40.00th=[16581], 50.00th=[18220], 60.00th=[21365], 00:12:06.677 | 70.00th=[23200], 80.00th=[23725], 90.00th=[26346], 95.00th=[32113], 00:12:06.677 | 99.00th=[35390], 99.50th=[38011], 99.90th=[38011], 99.95th=[38011], 00:12:06.677 | 99.99th=[38011] 00:12:06.677 write: IOPS=3569, BW=13.9MiB/s (14.6MB/s)(14.0MiB/1004msec); 0 zone resets 00:12:06.677 slat (usec): min=12, max=8263, avg=128.13, stdev=610.87 00:12:06.677 clat (usec): min=10229, max=34796, avg=17001.08, stdev=4101.29 00:12:06.677 lat (usec): min=12332, max=34836, avg=17129.21, stdev=4077.33 00:12:06.677 clat percentiles (usec): 00:12:06.677 | 1.00th=[11207], 5.00th=[12780], 10.00th=[12911], 20.00th=[13173], 00:12:06.677 | 30.00th=[13698], 40.00th=[15664], 50.00th=[16909], 60.00th=[17171], 00:12:06.677 | 70.00th=[17695], 80.00th=[19530], 90.00th=[22414], 95.00th=[24249], 00:12:06.677 | 99.00th=[30540], 99.50th=[34866], 99.90th=[34866], 99.95th=[34866], 00:12:06.677 | 99.99th=[34866] 00:12:06.677 bw ( KiB/s): min=13320, max=15352, per=29.76%, avg=14336.00, stdev=1436.84, samples=2 00:12:06.677 iops : min= 3330, max= 3838, avg=3584.00, stdev=359.21, samples=2 00:12:06.677 lat (msec) : 2=0.01%, 10=0.93%, 20=68.56%, 50=30.49% 00:12:06.677 cpu : usr=4.49%, sys=9.37%, ctx=216, majf=0, minf=7 00:12:06.677 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.5%, >=64=99.1% 00:12:06.677 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:12:06.677 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:12:06.677 issued rwts: total=3265,3584,0,0 short=0,0,0,0 dropped=0,0,0,0 00:12:06.677 latency : target=0, window=0, percentile=100.00%, depth=128 00:12:06.677 job1: (groupid=0, jobs=1): err= 0: pid=80571: Thu Nov 28 11:42:36 2024 00:12:06.677 read: IOPS=2549, BW=9.96MiB/s (10.4MB/s)(10.0MiB/1004msec) 00:12:06.677 slat (usec): min=6, max=5738, avg=165.05, stdev=695.43 00:12:06.677 clat (usec): min=13812, max=37919, avg=21256.59, stdev=3993.75 00:12:06.677 lat (usec): min=13834, max=38640, avg=21421.65, stdev=4052.50 00:12:06.677 clat percentiles (usec): 00:12:06.677 | 1.00th=[14353], 5.00th=[17171], 10.00th=[17433], 20.00th=[17695], 00:12:06.677 | 30.00th=[17957], 40.00th=[18482], 50.00th=[20055], 60.00th=[22414], 00:12:06.677 | 70.00th=[23987], 80.00th=[25560], 90.00th=[26084], 95.00th=[27132], 00:12:06.677 | 99.00th=[30278], 99.50th=[34866], 99.90th=[38011], 99.95th=[38011], 00:12:06.677 | 99.99th=[38011] 00:12:06.677 write: IOPS=2806, BW=11.0MiB/s (11.5MB/s)(11.0MiB/1004msec); 0 zone resets 00:12:06.677 slat (usec): min=11, max=7705, avg=195.75, stdev=728.65 00:12:06.677 clat (usec): min=3172, max=54999, avg=25525.94, stdev=11918.48 00:12:06.677 lat (usec): min=5778, max=55032, avg=25721.68, stdev=11995.05 00:12:06.677 clat percentiles (usec): 00:12:06.677 | 1.00th=[11076], 5.00th=[12256], 10.00th=[13829], 
20.00th=[15008], 00:12:06.677 | 30.00th=[16909], 40.00th=[17695], 50.00th=[19792], 60.00th=[27395], 00:12:06.677 | 70.00th=[32375], 80.00th=[36963], 90.00th=[45351], 95.00th=[48497], 00:12:06.677 | 99.00th=[52691], 99.50th=[53216], 99.90th=[54789], 99.95th=[54789], 00:12:06.677 | 99.99th=[54789] 00:12:06.677 bw ( KiB/s): min= 9240, max=12288, per=22.34%, avg=10764.00, stdev=2155.26, samples=2 00:12:06.677 iops : min= 2310, max= 3072, avg=2691.00, stdev=538.82, samples=2 00:12:06.677 lat (msec) : 4=0.02%, 10=0.43%, 20=49.63%, 50=48.01%, 100=1.92% 00:12:06.677 cpu : usr=3.19%, sys=8.77%, ctx=332, majf=0, minf=6 00:12:06.677 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.3%, 32=0.6%, >=64=98.8% 00:12:06.677 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:12:06.677 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:12:06.677 issued rwts: total=2560,2818,0,0 short=0,0,0,0 dropped=0,0,0,0 00:12:06.677 latency : target=0, window=0, percentile=100.00%, depth=128 00:12:06.677 job2: (groupid=0, jobs=1): err= 0: pid=80572: Thu Nov 28 11:42:36 2024 00:12:06.677 read: IOPS=2554, BW=9.98MiB/s (10.5MB/s)(10.0MiB/1002msec) 00:12:06.677 slat (usec): min=6, max=6035, avg=181.21, stdev=917.94 00:12:06.677 clat (usec): min=17906, max=25801, avg=23951.88, stdev=1142.81 00:12:06.677 lat (usec): min=22754, max=25825, avg=24133.09, stdev=683.38 00:12:06.677 clat percentiles (usec): 00:12:06.677 | 1.00th=[18482], 5.00th=[23200], 10.00th=[23200], 20.00th=[23462], 00:12:06.677 | 30.00th=[23462], 40.00th=[23725], 50.00th=[23987], 60.00th=[24249], 00:12:06.677 | 70.00th=[24511], 80.00th=[24773], 90.00th=[25035], 95.00th=[25297], 00:12:06.677 | 99.00th=[25560], 99.50th=[25822], 99.90th=[25822], 99.95th=[25822], 00:12:06.677 | 99.99th=[25822] 00:12:06.677 write: IOPS=2827, BW=11.0MiB/s (11.6MB/s)(11.1MiB/1002msec); 0 zone resets 00:12:06.677 slat (usec): min=17, max=8086, avg=179.94, stdev=861.01 00:12:06.677 clat (usec): min=1908, max=26520, avg=22754.82, stdev=2783.70 00:12:06.677 lat (usec): min=1932, max=26544, avg=22934.76, stdev=2658.54 00:12:06.677 clat percentiles (usec): 00:12:06.677 | 1.00th=[ 7439], 5.00th=[18744], 10.00th=[22414], 20.00th=[22676], 00:12:06.677 | 30.00th=[22676], 40.00th=[23200], 50.00th=[23462], 60.00th=[23462], 00:12:06.677 | 70.00th=[23725], 80.00th=[23725], 90.00th=[24249], 95.00th=[24249], 00:12:06.677 | 99.00th=[26346], 99.50th=[26346], 99.90th=[26608], 99.95th=[26608], 00:12:06.677 | 99.99th=[26608] 00:12:06.677 bw ( KiB/s): min= 9360, max=12288, per=22.47%, avg=10824.00, stdev=2070.41, samples=2 00:12:06.677 iops : min= 2340, max= 3072, avg=2706.00, stdev=517.60, samples=2 00:12:06.677 lat (msec) : 2=0.07%, 4=0.24%, 10=0.59%, 20=4.19%, 50=94.90% 00:12:06.677 cpu : usr=3.30%, sys=8.59%, ctx=169, majf=0, minf=9 00:12:06.677 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.3%, 32=0.6%, >=64=98.8% 00:12:06.677 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:12:06.677 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:12:06.677 issued rwts: total=2560,2833,0,0 short=0,0,0,0 dropped=0,0,0,0 00:12:06.677 latency : target=0, window=0, percentile=100.00%, depth=128 00:12:06.677 job3: (groupid=0, jobs=1): err= 0: pid=80573: Thu Nov 28 11:42:36 2024 00:12:06.677 read: IOPS=2552, BW=9.97MiB/s (10.5MB/s)(10.0MiB/1003msec) 00:12:06.677 slat (usec): min=6, max=8148, avg=186.97, stdev=956.58 00:12:06.677 clat (usec): min=15477, max=27077, avg=23545.19, stdev=1671.75 00:12:06.677 lat (usec): 
min=20217, max=27094, avg=23732.16, stdev=1423.89 00:12:06.677 clat percentiles (usec): 00:12:06.677 | 1.00th=[18220], 5.00th=[20317], 10.00th=[21365], 20.00th=[22152], 00:12:06.677 | 30.00th=[23200], 40.00th=[23462], 50.00th=[23725], 60.00th=[24249], 00:12:06.677 | 70.00th=[24511], 80.00th=[24511], 90.00th=[25297], 95.00th=[26346], 00:12:06.677 | 99.00th=[27132], 99.50th=[27132], 99.90th=[27132], 99.95th=[27132], 00:12:06.677 | 99.99th=[27132] 00:12:06.677 write: IOPS=2848, BW=11.1MiB/s (11.7MB/s)(11.2MiB/1003msec); 0 zone resets 00:12:06.677 slat (usec): min=12, max=5615, avg=173.38, stdev=832.31 00:12:06.677 clat (usec): min=2126, max=27548, avg=23136.16, stdev=2922.30 00:12:06.677 lat (usec): min=2150, max=27592, avg=23309.54, stdev=2793.73 00:12:06.677 clat percentiles (usec): 00:12:06.677 | 1.00th=[ 7635], 5.00th=[18744], 10.00th=[21103], 20.00th=[22414], 00:12:06.677 | 30.00th=[22676], 40.00th=[23200], 50.00th=[23462], 60.00th=[23725], 00:12:06.678 | 70.00th=[23987], 80.00th=[24249], 90.00th=[26346], 95.00th=[26870], 00:12:06.678 | 99.00th=[27395], 99.50th=[27395], 99.90th=[27657], 99.95th=[27657], 00:12:06.678 | 99.99th=[27657] 00:12:06.678 bw ( KiB/s): min= 9552, max=12288, per=22.67%, avg=10920.00, stdev=1934.64, samples=2 00:12:06.678 iops : min= 2388, max= 3072, avg=2730.00, stdev=483.66, samples=2 00:12:06.678 lat (msec) : 4=0.17%, 10=0.59%, 20=4.36%, 50=94.89% 00:12:06.678 cpu : usr=2.50%, sys=9.18%, ctx=171, majf=0, minf=8 00:12:06.678 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.3%, 32=0.6%, >=64=98.8% 00:12:06.678 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:12:06.678 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:12:06.678 issued rwts: total=2560,2857,0,0 short=0,0,0,0 dropped=0,0,0,0 00:12:06.678 latency : target=0, window=0, percentile=100.00%, depth=128 00:12:06.678 00:12:06.678 Run status group 0 (all jobs): 00:12:06.678 READ: bw=42.6MiB/s (44.7MB/s), 9.96MiB/s-12.7MiB/s (10.4MB/s-13.3MB/s), io=42.8MiB (44.8MB), run=1002-1004msec 00:12:06.678 WRITE: bw=47.0MiB/s (49.3MB/s), 11.0MiB/s-13.9MiB/s (11.5MB/s-14.6MB/s), io=47.2MiB (49.5MB), run=1002-1004msec 00:12:06.678 00:12:06.678 Disk stats (read/write): 00:12:06.678 nvme0n1: ios=2801/3072, merge=0/0, ticks=13565/11485, in_queue=25050, util=89.56% 00:12:06.678 nvme0n2: ios=2163/2560, merge=0/0, ticks=15230/19216, in_queue=34446, util=88.75% 00:12:06.678 nvme0n3: ios=2101/2560, merge=0/0, ticks=11734/13654, in_queue=25388, util=89.83% 00:12:06.678 nvme0n4: ios=2112/2560, merge=0/0, ticks=12377/13205, in_queue=25582, util=89.88% 00:12:06.678 11:42:36 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@53 -- # /home/vagrant/spdk_repo/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 128 -t randwrite -r 1 -v 00:12:06.678 [global] 00:12:06.678 thread=1 00:12:06.678 invalidate=1 00:12:06.678 rw=randwrite 00:12:06.678 time_based=1 00:12:06.678 runtime=1 00:12:06.678 ioengine=libaio 00:12:06.678 direct=1 00:12:06.678 bs=4096 00:12:06.678 iodepth=128 00:12:06.678 norandommap=0 00:12:06.678 numjobs=1 00:12:06.678 00:12:06.678 verify_dump=1 00:12:06.678 verify_backlog=512 00:12:06.678 verify_state_save=0 00:12:06.678 do_verify=1 00:12:06.678 verify=crc32c-intel 00:12:06.678 [job0] 00:12:06.678 filename=/dev/nvme0n1 00:12:06.678 [job1] 00:12:06.678 filename=/dev/nvme0n2 00:12:06.678 [job2] 00:12:06.678 filename=/dev/nvme0n3 00:12:06.678 [job3] 00:12:06.678 filename=/dev/nvme0n4 00:12:06.678 Could not set queue depth (nvme0n1) 00:12:06.678 Could not set queue depth 
(nvme0n2) 00:12:06.678 Could not set queue depth (nvme0n3) 00:12:06.678 Could not set queue depth (nvme0n4) 00:12:06.678 job0: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:12:06.678 job1: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:12:06.678 job2: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:12:06.678 job3: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:12:06.678 fio-3.35 00:12:06.678 Starting 4 threads 00:12:08.053 00:12:08.053 job0: (groupid=0, jobs=1): err= 0: pid=80637: Thu Nov 28 11:42:37 2024 00:12:08.053 read: IOPS=3566, BW=13.9MiB/s (14.6MB/s)(14.0MiB/1005msec) 00:12:08.053 slat (usec): min=6, max=8565, avg=133.06, stdev=656.74 00:12:08.053 clat (usec): min=10094, max=26525, avg=17051.17, stdev=2128.55 00:12:08.053 lat (usec): min=10989, max=26586, avg=17184.23, stdev=2148.75 00:12:08.053 clat percentiles (usec): 00:12:08.053 | 1.00th=[11994], 5.00th=[13566], 10.00th=[14746], 20.00th=[15270], 00:12:08.053 | 30.00th=[15926], 40.00th=[16450], 50.00th=[17171], 60.00th=[17433], 00:12:08.053 | 70.00th=[17957], 80.00th=[18482], 90.00th=[19530], 95.00th=[20841], 00:12:08.053 | 99.00th=[23462], 99.50th=[24511], 99.90th=[25560], 99.95th=[26346], 00:12:08.054 | 99.99th=[26608] 00:12:08.054 write: IOPS=3918, BW=15.3MiB/s (16.0MB/s)(15.4MiB/1005msec); 0 zone resets 00:12:08.054 slat (usec): min=10, max=8824, avg=124.87, stdev=682.19 00:12:08.054 clat (usec): min=693, max=27294, avg=16686.95, stdev=2424.93 00:12:08.054 lat (usec): min=7536, max=27355, avg=16811.82, stdev=2497.00 00:12:08.054 clat percentiles (usec): 00:12:08.054 | 1.00th=[ 8356], 5.00th=[12911], 10.00th=[14091], 20.00th=[15008], 00:12:08.054 | 30.00th=[15795], 40.00th=[16319], 50.00th=[16712], 60.00th=[17171], 00:12:08.054 | 70.00th=[17695], 80.00th=[18482], 90.00th=[19268], 95.00th=[20055], 00:12:08.054 | 99.00th=[22938], 99.50th=[24773], 99.90th=[25822], 99.95th=[27132], 00:12:08.054 | 99.99th=[27395] 00:12:08.054 bw ( KiB/s): min=14480, max=16032, per=28.22%, avg=15256.00, stdev=1097.43, samples=2 00:12:08.054 iops : min= 3620, max= 4008, avg=3814.00, stdev=274.36, samples=2 00:12:08.054 lat (usec) : 750=0.01% 00:12:08.054 lat (msec) : 10=1.02%, 20=92.78%, 50=6.18% 00:12:08.054 cpu : usr=4.08%, sys=11.55%, ctx=318, majf=0, minf=6 00:12:08.054 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.4%, >=64=99.2% 00:12:08.054 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:12:08.054 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:12:08.054 issued rwts: total=3584,3938,0,0 short=0,0,0,0 dropped=0,0,0,0 00:12:08.054 latency : target=0, window=0, percentile=100.00%, depth=128 00:12:08.054 job1: (groupid=0, jobs=1): err= 0: pid=80638: Thu Nov 28 11:42:37 2024 00:12:08.054 read: IOPS=3045, BW=11.9MiB/s (12.5MB/s)(12.0MiB/1011msec) 00:12:08.054 slat (usec): min=6, max=16437, avg=144.43, stdev=986.20 00:12:08.054 clat (usec): min=7751, max=64250, avg=19602.04, stdev=7705.40 00:12:08.054 lat (usec): min=9304, max=64287, avg=19746.47, stdev=7757.30 00:12:08.054 clat percentiles (usec): 00:12:08.054 | 1.00th=[10552], 5.00th=[13304], 10.00th=[15926], 20.00th=[16581], 00:12:08.054 | 30.00th=[16909], 40.00th=[17433], 50.00th=[17957], 60.00th=[18220], 00:12:08.054 | 70.00th=[18744], 80.00th=[19268], 90.00th=[21365], 95.00th=[38536], 
00:12:08.054 | 99.00th=[58459], 99.50th=[58459], 99.90th=[58983], 99.95th=[62129], 00:12:08.054 | 99.99th=[64226] 00:12:08.054 write: IOPS=3545, BW=13.8MiB/s (14.5MB/s)(14.0MiB/1011msec); 0 zone resets 00:12:08.054 slat (usec): min=8, max=13794, avg=147.71, stdev=855.38 00:12:08.054 clat (usec): min=8176, max=45376, avg=18905.22, stdev=6533.67 00:12:08.054 lat (usec): min=11114, max=45402, avg=19052.93, stdev=6538.74 00:12:08.054 clat percentiles (usec): 00:12:08.054 | 1.00th=[11076], 5.00th=[13698], 10.00th=[14353], 20.00th=[15139], 00:12:08.054 | 30.00th=[15926], 40.00th=[16319], 50.00th=[16909], 60.00th=[17433], 00:12:08.054 | 70.00th=[18220], 80.00th=[19530], 90.00th=[30540], 95.00th=[36963], 00:12:08.054 | 99.00th=[40633], 99.50th=[41157], 99.90th=[45351], 99.95th=[45351], 00:12:08.054 | 99.99th=[45351] 00:12:08.054 bw ( KiB/s): min=11328, max=16408, per=25.65%, avg=13868.00, stdev=3592.10, samples=2 00:12:08.054 iops : min= 2832, max= 4102, avg=3467.00, stdev=898.03, samples=2 00:12:08.054 lat (msec) : 10=0.48%, 20=84.21%, 50=14.48%, 100=0.83% 00:12:08.054 cpu : usr=3.47%, sys=9.60%, ctx=237, majf=0, minf=3 00:12:08.054 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.5%, >=64=99.1% 00:12:08.054 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:12:08.054 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:12:08.054 issued rwts: total=3079,3584,0,0 short=0,0,0,0 dropped=0,0,0,0 00:12:08.054 latency : target=0, window=0, percentile=100.00%, depth=128 00:12:08.054 job2: (groupid=0, jobs=1): err= 0: pid=80639: Thu Nov 28 11:42:37 2024 00:12:08.054 read: IOPS=2486, BW=9947KiB/s (10.2MB/s)(9.82MiB/1011msec) 00:12:08.054 slat (usec): min=8, max=13395, avg=201.06, stdev=1220.15 00:12:08.054 clat (usec): min=10669, max=48015, avg=27009.14, stdev=6714.75 00:12:08.054 lat (usec): min=10681, max=51471, avg=27210.20, stdev=6756.87 00:12:08.054 clat percentiles (usec): 00:12:08.054 | 1.00th=[14877], 5.00th=[21890], 10.00th=[23200], 20.00th=[23987], 00:12:08.054 | 30.00th=[24249], 40.00th=[24511], 50.00th=[24773], 60.00th=[25035], 00:12:08.054 | 70.00th=[25560], 80.00th=[26870], 90.00th=[40109], 95.00th=[42730], 00:12:08.054 | 99.00th=[46400], 99.50th=[47973], 99.90th=[47973], 99.95th=[47973], 00:12:08.054 | 99.99th=[47973] 00:12:08.054 write: IOPS=2532, BW=9.89MiB/s (10.4MB/s)(10.0MiB/1011msec); 0 zone resets 00:12:08.054 slat (usec): min=6, max=18884, avg=184.04, stdev=1210.32 00:12:08.054 clat (usec): min=11974, max=38346, avg=23521.62, stdev=3491.27 00:12:08.054 lat (usec): min=13366, max=38388, avg=23705.66, stdev=3368.52 00:12:08.054 clat percentiles (usec): 00:12:08.054 | 1.00th=[13829], 5.00th=[19006], 10.00th=[19792], 20.00th=[21627], 00:12:08.054 | 30.00th=[21890], 40.00th=[22676], 50.00th=[23200], 60.00th=[23462], 00:12:08.054 | 70.00th=[24249], 80.00th=[24773], 90.00th=[28967], 95.00th=[30540], 00:12:08.054 | 99.00th=[32375], 99.50th=[32375], 99.90th=[38011], 99.95th=[38011], 00:12:08.054 | 99.99th=[38536] 00:12:08.054 bw ( KiB/s): min= 8192, max=12288, per=18.94%, avg=10240.00, stdev=2896.31, samples=2 00:12:08.054 iops : min= 2048, max= 3072, avg=2560.00, stdev=724.08, samples=2 00:12:08.054 lat (msec) : 20=7.06%, 50=92.94% 00:12:08.054 cpu : usr=2.28%, sys=8.32%, ctx=183, majf=0, minf=5 00:12:08.054 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.3%, 32=0.6%, >=64=98.8% 00:12:08.054 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:12:08.054 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 
64=0.0%, >=64=0.1% 00:12:08.054 issued rwts: total=2514,2560,0,0 short=0,0,0,0 dropped=0,0,0,0 00:12:08.054 latency : target=0, window=0, percentile=100.00%, depth=128 00:12:08.054 job3: (groupid=0, jobs=1): err= 0: pid=80640: Thu Nov 28 11:42:37 2024 00:12:08.054 read: IOPS=3056, BW=11.9MiB/s (12.5MB/s)(12.0MiB/1005msec) 00:12:08.054 slat (usec): min=9, max=11399, avg=144.77, stdev=970.68 00:12:08.054 clat (usec): min=9793, max=33076, avg=20019.20, stdev=2788.38 00:12:08.054 lat (usec): min=9808, max=39721, avg=20163.98, stdev=2829.38 00:12:08.054 clat percentiles (usec): 00:12:08.054 | 1.00th=[11863], 5.00th=[16057], 10.00th=[16909], 20.00th=[18482], 00:12:08.054 | 30.00th=[19006], 40.00th=[19530], 50.00th=[20055], 60.00th=[20579], 00:12:08.054 | 70.00th=[21103], 80.00th=[22152], 90.00th=[22676], 95.00th=[23200], 00:12:08.054 | 99.00th=[30802], 99.50th=[31851], 99.90th=[32900], 99.95th=[32900], 00:12:08.054 | 99.99th=[33162] 00:12:08.054 write: IOPS=3565, BW=13.9MiB/s (14.6MB/s)(14.0MiB/1005msec); 0 zone resets 00:12:08.054 slat (usec): min=10, max=17382, avg=147.89, stdev=968.31 00:12:08.054 clat (usec): min=640, max=30162, avg=18380.45, stdev=2698.75 00:12:08.054 lat (usec): min=7993, max=30189, avg=18528.34, stdev=2562.64 00:12:08.054 clat percentiles (usec): 00:12:08.054 | 1.00th=[ 8979], 5.00th=[14877], 10.00th=[16450], 20.00th=[17171], 00:12:08.054 | 30.00th=[17433], 40.00th=[17957], 50.00th=[18482], 60.00th=[19006], 00:12:08.054 | 70.00th=[19268], 80.00th=[19792], 90.00th=[20317], 95.00th=[20841], 00:12:08.054 | 99.00th=[30016], 99.50th=[30016], 99.90th=[30016], 99.95th=[30278], 00:12:08.054 | 99.99th=[30278] 00:12:08.054 bw ( KiB/s): min=12641, max=15024, per=25.58%, avg=13832.50, stdev=1685.04, samples=2 00:12:08.054 iops : min= 3160, max= 3756, avg=3458.00, stdev=421.44, samples=2 00:12:08.054 lat (usec) : 750=0.02% 00:12:08.054 lat (msec) : 10=1.10%, 20=67.41%, 50=31.48% 00:12:08.054 cpu : usr=3.88%, sys=9.56%, ctx=161, majf=0, minf=7 00:12:08.054 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.5%, >=64=99.1% 00:12:08.054 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:12:08.054 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:12:08.054 issued rwts: total=3072,3583,0,0 short=0,0,0,0 dropped=0,0,0,0 00:12:08.054 latency : target=0, window=0, percentile=100.00%, depth=128 00:12:08.054 00:12:08.054 Run status group 0 (all jobs): 00:12:08.054 READ: bw=47.3MiB/s (49.6MB/s), 9947KiB/s-13.9MiB/s (10.2MB/s-14.6MB/s), io=47.8MiB (50.2MB), run=1005-1011msec 00:12:08.054 WRITE: bw=52.8MiB/s (55.4MB/s), 9.89MiB/s-15.3MiB/s (10.4MB/s-16.0MB/s), io=53.4MiB (56.0MB), run=1005-1011msec 00:12:08.054 00:12:08.054 Disk stats (read/write): 00:12:08.054 nvme0n1: ios=3117/3122, merge=0/0, ticks=25906/23438, in_queue=49344, util=87.68% 00:12:08.054 nvme0n2: ios=3019/3072, merge=0/0, ticks=50419/47460, in_queue=97879, util=86.32% 00:12:08.054 nvme0n3: ios=2048/2492, merge=0/0, ticks=48169/52050, in_queue=100219, util=88.54% 00:12:08.054 nvme0n4: ios=2560/2880, merge=0/0, ticks=50409/51395, in_queue=101804, util=89.46% 00:12:08.054 11:42:37 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@55 -- # sync 00:12:08.054 11:42:37 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@59 -- # fio_pid=80653 00:12:08.054 11:42:37 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@58 -- # /home/vagrant/spdk_repo/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t read -r 10 00:12:08.054 11:42:37 
nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@61 -- # sleep 3 00:12:08.054 [global] 00:12:08.054 thread=1 00:12:08.054 invalidate=1 00:12:08.054 rw=read 00:12:08.054 time_based=1 00:12:08.054 runtime=10 00:12:08.054 ioengine=libaio 00:12:08.054 direct=1 00:12:08.054 bs=4096 00:12:08.054 iodepth=1 00:12:08.054 norandommap=1 00:12:08.054 numjobs=1 00:12:08.054 00:12:08.054 [job0] 00:12:08.054 filename=/dev/nvme0n1 00:12:08.054 [job1] 00:12:08.054 filename=/dev/nvme0n2 00:12:08.054 [job2] 00:12:08.054 filename=/dev/nvme0n3 00:12:08.054 [job3] 00:12:08.054 filename=/dev/nvme0n4 00:12:08.054 Could not set queue depth (nvme0n1) 00:12:08.054 Could not set queue depth (nvme0n2) 00:12:08.054 Could not set queue depth (nvme0n3) 00:12:08.054 Could not set queue depth (nvme0n4) 00:12:08.054 job0: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:12:08.054 job1: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:12:08.054 job2: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:12:08.054 job3: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:12:08.054 fio-3.35 00:12:08.054 Starting 4 threads 00:12:11.341 11:42:40 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_raid_delete concat0 00:12:11.341 fio: pid=80696, err=95/file:io_u.c:1889, func=io_u error, error=Operation not supported 00:12:11.341 fio: io_u error on file /dev/nvme0n4: Operation not supported: read offset=33947648, buflen=4096 00:12:11.341 11:42:41 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_raid_delete raid0 00:12:11.341 fio: pid=80695, err=95/file:io_u.c:1889, func=io_u error, error=Operation not supported 00:12:11.341 fio: io_u error on file /dev/nvme0n3: Operation not supported: read offset=36495360, buflen=4096 00:12:11.341 11:42:41 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:12:11.341 11:42:41 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@66 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_delete Malloc0 00:12:11.600 fio: pid=80693, err=95/file:io_u.c:1889, func=io_u error, error=Operation not supported 00:12:11.600 fio: io_u error on file /dev/nvme0n1: Operation not supported: read offset=48123904, buflen=4096 00:12:11.859 11:42:41 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:12:11.859 11:42:41 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@66 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_delete Malloc1 00:12:11.859 fio: pid=80694, err=95/file:io_u.c:1889, func=io_u error, error=Operation not supported 00:12:11.859 fio: io_u error on file /dev/nvme0n2: Operation not supported: read offset=60059648, buflen=4096 00:12:12.118 00:12:12.118 job0: (groupid=0, jobs=1): err=95 (file:io_u.c:1889, func=io_u error, error=Operation not supported): pid=80693: Thu Nov 28 11:42:42 2024 00:12:12.118 read: IOPS=3419, BW=13.4MiB/s (14.0MB/s)(45.9MiB/3436msec) 00:12:12.118 slat (usec): min=10, max=12452, avg=18.43, stdev=177.15 00:12:12.118 clat (usec): min=3, max=7993, avg=272.41, stdev=100.05 00:12:12.118 lat (usec): min=151, max=12867, avg=290.84, stdev=205.36 
00:12:12.118 clat percentiles (usec): 00:12:12.118 | 1.00th=[ 204], 5.00th=[ 223], 10.00th=[ 231], 20.00th=[ 241], 00:12:12.118 | 30.00th=[ 247], 40.00th=[ 255], 50.00th=[ 262], 60.00th=[ 269], 00:12:12.118 | 70.00th=[ 281], 80.00th=[ 293], 90.00th=[ 326], 95.00th=[ 351], 00:12:12.118 | 99.00th=[ 408], 99.50th=[ 453], 99.90th=[ 947], 99.95th=[ 2114], 00:12:12.118 | 99.99th=[ 3359] 00:12:12.118 bw ( KiB/s): min=12972, max=14608, per=29.79%, avg=13978.00, stdev=581.05, samples=6 00:12:12.118 iops : min= 3243, max= 3652, avg=3494.50, stdev=145.26, samples=6 00:12:12.118 lat (usec) : 4=0.01%, 250=33.30%, 500=66.36%, 750=0.19%, 1000=0.06% 00:12:12.118 lat (msec) : 2=0.03%, 4=0.04%, 10=0.01% 00:12:12.118 cpu : usr=0.99%, sys=4.66%, ctx=11758, majf=0, minf=1 00:12:12.118 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:12:12.118 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:12:12.118 complete : 0=0.1%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:12:12.118 issued rwts: total=11750,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:12:12.118 latency : target=0, window=0, percentile=100.00%, depth=1 00:12:12.118 job1: (groupid=0, jobs=1): err=95 (file:io_u.c:1889, func=io_u error, error=Operation not supported): pid=80694: Thu Nov 28 11:42:42 2024 00:12:12.118 read: IOPS=3944, BW=15.4MiB/s (16.2MB/s)(57.3MiB/3718msec) 00:12:12.118 slat (usec): min=7, max=15816, avg=18.35, stdev=257.94 00:12:12.118 clat (usec): min=134, max=3562, avg=233.69, stdev=75.98 00:12:12.118 lat (usec): min=149, max=16337, avg=252.04, stdev=271.81 00:12:12.118 clat percentiles (usec): 00:12:12.118 | 1.00th=[ 153], 5.00th=[ 167], 10.00th=[ 176], 20.00th=[ 190], 00:12:12.118 | 30.00th=[ 200], 40.00th=[ 212], 50.00th=[ 229], 60.00th=[ 241], 00:12:12.118 | 70.00th=[ 253], 80.00th=[ 265], 90.00th=[ 285], 95.00th=[ 322], 00:12:12.118 | 99.00th=[ 404], 99.50th=[ 469], 99.90th=[ 947], 99.95th=[ 1844], 00:12:12.118 | 99.99th=[ 2737] 00:12:12.118 bw ( KiB/s): min=12599, max=17216, per=33.73%, avg=15823.57, stdev=1676.41, samples=7 00:12:12.118 iops : min= 3149, max= 4304, avg=3955.71, stdev=419.41, samples=7 00:12:12.118 lat (usec) : 250=67.42%, 500=32.13%, 750=0.29%, 1000=0.07% 00:12:12.118 lat (msec) : 2=0.03%, 4=0.04% 00:12:12.118 cpu : usr=1.18%, sys=4.71%, ctx=14676, majf=0, minf=2 00:12:12.118 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:12:12.118 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:12:12.118 complete : 0=0.1%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:12:12.118 issued rwts: total=14664,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:12:12.118 latency : target=0, window=0, percentile=100.00%, depth=1 00:12:12.118 job2: (groupid=0, jobs=1): err=95 (file:io_u.c:1889, func=io_u error, error=Operation not supported): pid=80695: Thu Nov 28 11:42:42 2024 00:12:12.119 read: IOPS=2796, BW=10.9MiB/s (11.5MB/s)(34.8MiB/3187msec) 00:12:12.119 slat (usec): min=10, max=11675, avg=22.96, stdev=149.57 00:12:12.119 clat (usec): min=181, max=3461, avg=332.40, stdev=91.95 00:12:12.119 lat (usec): min=195, max=12058, avg=355.36, stdev=176.05 00:12:12.119 clat percentiles (usec): 00:12:12.119 | 1.00th=[ 249], 5.00th=[ 269], 10.00th=[ 277], 20.00th=[ 289], 00:12:12.119 | 30.00th=[ 302], 40.00th=[ 310], 50.00th=[ 326], 60.00th=[ 338], 00:12:12.119 | 70.00th=[ 351], 80.00th=[ 367], 90.00th=[ 388], 95.00th=[ 408], 00:12:12.119 | 99.00th=[ 494], 99.50th=[ 668], 99.90th=[ 2008], 99.95th=[ 2409], 00:12:12.119 | 99.99th=[ 
3458] 00:12:12.119 bw ( KiB/s): min=10980, max=11488, per=24.01%, avg=11263.33, stdev=237.73, samples=6 00:12:12.119 iops : min= 2745, max= 2872, avg=2815.83, stdev=59.43, samples=6 00:12:12.119 lat (usec) : 250=1.13%, 500=97.91%, 750=0.61%, 1000=0.17% 00:12:12.119 lat (msec) : 2=0.07%, 4=0.10% 00:12:12.119 cpu : usr=1.29%, sys=5.02%, ctx=8915, majf=0, minf=2 00:12:12.119 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:12:12.119 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:12:12.119 complete : 0=0.1%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:12:12.119 issued rwts: total=8911,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:12:12.119 latency : target=0, window=0, percentile=100.00%, depth=1 00:12:12.119 job3: (groupid=0, jobs=1): err=95 (file:io_u.c:1889, func=io_u error, error=Operation not supported): pid=80696: Thu Nov 28 11:42:42 2024 00:12:12.119 read: IOPS=2836, BW=11.1MiB/s (11.6MB/s)(32.4MiB/2922msec) 00:12:12.119 slat (nsec): min=12016, max=75331, avg=19323.61, stdev=5015.78 00:12:12.119 clat (usec): min=167, max=1743, avg=330.68, stdev=58.40 00:12:12.119 lat (usec): min=183, max=1760, avg=350.01, stdev=59.55 00:12:12.119 clat percentiles (usec): 00:12:12.119 | 1.00th=[ 253], 5.00th=[ 273], 10.00th=[ 281], 20.00th=[ 289], 00:12:12.119 | 30.00th=[ 302], 40.00th=[ 310], 50.00th=[ 322], 60.00th=[ 334], 00:12:12.119 | 70.00th=[ 351], 80.00th=[ 367], 90.00th=[ 388], 95.00th=[ 408], 00:12:12.119 | 99.00th=[ 510], 99.50th=[ 603], 99.90th=[ 930], 99.95th=[ 1012], 00:12:12.119 | 99.99th=[ 1745] 00:12:12.119 bw ( KiB/s): min=11000, max=11608, per=24.23%, avg=11366.40, stdev=243.27, samples=5 00:12:12.119 iops : min= 2750, max= 2902, avg=2841.60, stdev=60.82, samples=5 00:12:12.119 lat (usec) : 250=0.88%, 500=98.00%, 750=0.89%, 1000=0.16% 00:12:12.119 lat (msec) : 2=0.06% 00:12:12.119 cpu : usr=1.16%, sys=4.96%, ctx=8306, majf=0, minf=2 00:12:12.119 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:12:12.119 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:12:12.119 complete : 0=0.1%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:12:12.119 issued rwts: total=8289,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:12:12.119 latency : target=0, window=0, percentile=100.00%, depth=1 00:12:12.119 00:12:12.119 Run status group 0 (all jobs): 00:12:12.119 READ: bw=45.8MiB/s (48.0MB/s), 10.9MiB/s-15.4MiB/s (11.5MB/s-16.2MB/s), io=170MiB (179MB), run=2922-3718msec 00:12:12.119 00:12:12.119 Disk stats (read/write): 00:12:12.119 nvme0n1: ios=11514/0, merge=0/0, ticks=3148/0, in_queue=3148, util=95.11% 00:12:12.119 nvme0n2: ios=14190/0, merge=0/0, ticks=3355/0, in_queue=3355, util=94.91% 00:12:12.119 nvme0n3: ios=8723/0, merge=0/0, ticks=2907/0, in_queue=2907, util=96.18% 00:12:12.119 nvme0n4: ios=8133/0, merge=0/0, ticks=2704/0, in_queue=2704, util=96.79% 00:12:12.119 11:42:42 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:12:12.119 11:42:42 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@66 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_delete Malloc2 00:12:12.377 11:42:42 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:12:12.377 11:42:42 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@66 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_delete Malloc3 
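The hotplug pass recorded above follows a simple shape: fio-wrapper starts a 10-second read workload against the exported namespaces, the script deletes the backing bdevs out from under it over RPC, and fio is expected to fail each file with "Operation not supported". A condensed sketch of that flow, using only commands and names visible in the trace (the real target/fio.sh also removes the raid0/concat0 bdevs, iterates over $malloc_bdevs rather than a fixed list, and keeps deleting Malloc4-Malloc6 after fio has already reported):

    # start the 10s read workload in the background and give it a head start
    /home/vagrant/spdk_repo/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t read -r 10 &
    fio_pid=$!
    sleep 3
    # pull the backing bdevs away while fio is still reading
    for bdev in Malloc0 Malloc1 Malloc2 Malloc3; do
        /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_delete "$bdev"
    done
    # fio exiting non-zero is the expected outcome here
    wait "$fio_pid" || echo 'nvmf hotplug test: fio failed as expected'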
00:12:12.637 11:42:42 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:12:12.637 11:42:42 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@66 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_delete Malloc4 00:12:12.895 11:42:42 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:12:12.895 11:42:42 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@66 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_delete Malloc5 00:12:13.153 11:42:43 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:12:13.153 11:42:43 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@66 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_delete Malloc6 00:12:13.721 11:42:43 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@69 -- # fio_status=0 00:12:13.721 11:42:43 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@70 -- # wait 80653 00:12:13.721 11:42:43 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@70 -- # fio_status=4 00:12:13.721 11:42:43 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@72 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:12:13.721 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:13.721 11:42:43 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@73 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:12:13.721 11:42:43 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1223 -- # local i=0 00:12:13.721 11:42:43 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1224 -- # grep -q -w SPDKISFASTANDAWESOME 00:12:13.721 11:42:43 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1224 -- # lsblk -o NAME,SERIAL 00:12:13.721 11:42:43 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1231 -- # lsblk -l -o NAME,SERIAL 00:12:13.721 11:42:43 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1231 -- # grep -q -w SPDKISFASTANDAWESOME 00:12:13.721 nvmf hotplug test: fio failed as expected 00:12:13.721 11:42:43 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1235 -- # return 0 00:12:13.721 11:42:43 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@75 -- # '[' 4 -eq 0 ']' 00:12:13.721 11:42:43 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@80 -- # echo 'nvmf hotplug test: fio failed as expected' 00:12:13.721 11:42:43 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@83 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:12:13.981 11:42:43 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@85 -- # rm -f ./local-job0-0-verify.state 00:12:13.981 11:42:43 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@86 -- # rm -f ./local-job1-1-verify.state 00:12:13.981 11:42:43 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@87 -- # rm -f ./local-job2-2-verify.state 00:12:13.981 11:42:43 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@89 -- # trap - SIGINT SIGTERM EXIT 00:12:13.981 11:42:43 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@91 -- # nvmftestfini 00:12:13.981 11:42:43 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@516 -- # nvmfcleanup 00:12:13.981 11:42:43 
nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@121 -- # sync 00:12:13.981 11:42:43 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:12:13.981 11:42:43 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@124 -- # set +e 00:12:13.981 11:42:43 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@125 -- # for i in {1..20} 00:12:13.981 11:42:43 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:12:13.981 rmmod nvme_tcp 00:12:13.981 rmmod nvme_fabrics 00:12:13.981 rmmod nvme_keyring 00:12:13.981 11:42:43 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:12:13.981 11:42:44 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@128 -- # set -e 00:12:13.981 11:42:44 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@129 -- # return 0 00:12:13.981 11:42:44 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@517 -- # '[' -n 80268 ']' 00:12:13.981 11:42:44 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@518 -- # killprocess 80268 00:12:13.981 11:42:44 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@954 -- # '[' -z 80268 ']' 00:12:13.981 11:42:44 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@958 -- # kill -0 80268 00:12:13.981 11:42:44 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@959 -- # uname 00:12:13.981 11:42:44 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:12:13.981 11:42:44 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 80268 00:12:13.981 killing process with pid 80268 00:12:13.981 11:42:44 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:12:13.981 11:42:44 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:12:13.981 11:42:44 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@972 -- # echo 'killing process with pid 80268' 00:12:13.981 11:42:44 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@973 -- # kill 80268 00:12:13.981 11:42:44 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@978 -- # wait 80268 00:12:14.240 11:42:44 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:12:14.240 11:42:44 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:12:14.240 11:42:44 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:12:14.240 11:42:44 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@297 -- # iptr 00:12:14.240 11:42:44 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@791 -- # iptables-save 00:12:14.240 11:42:44 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:12:14.240 11:42:44 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@791 -- # iptables-restore 00:12:14.240 11:42:44 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@298 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:12:14.240 11:42:44 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@299 -- # nvmf_veth_fini 00:12:14.240 11:42:44 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@233 -- # ip link set nvmf_init_br nomaster 00:12:14.240 11:42:44 
nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@234 -- # ip link set nvmf_init_br2 nomaster 00:12:14.240 11:42:44 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@235 -- # ip link set nvmf_tgt_br nomaster 00:12:14.240 11:42:44 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@236 -- # ip link set nvmf_tgt_br2 nomaster 00:12:14.240 11:42:44 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@237 -- # ip link set nvmf_init_br down 00:12:14.240 11:42:44 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@238 -- # ip link set nvmf_init_br2 down 00:12:14.240 11:42:44 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@239 -- # ip link set nvmf_tgt_br down 00:12:14.240 11:42:44 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@240 -- # ip link set nvmf_tgt_br2 down 00:12:14.240 11:42:44 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@241 -- # ip link delete nvmf_br type bridge 00:12:14.543 11:42:44 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@242 -- # ip link delete nvmf_init_if 00:12:14.543 11:42:44 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@243 -- # ip link delete nvmf_init_if2 00:12:14.543 11:42:44 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@244 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:12:14.543 11:42:44 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@245 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:12:14.543 11:42:44 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@246 -- # remove_spdk_ns 00:12:14.543 11:42:44 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:12:14.543 11:42:44 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:12:14.543 11:42:44 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:12:14.543 11:42:44 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@300 -- # return 0 00:12:14.543 00:12:14.543 real 0m20.300s 00:12:14.543 user 1m17.725s 00:12:14.543 sys 0m9.147s 00:12:14.543 ************************************ 00:12:14.543 END TEST nvmf_fio_target 00:12:14.543 ************************************ 00:12:14.543 11:42:44 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1130 -- # xtrace_disable 00:12:14.543 11:42:44 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:12:14.543 11:42:44 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@35 -- # run_test nvmf_bdevio /home/vagrant/spdk_repo/spdk/test/nvmf/target/bdevio.sh --transport=tcp 00:12:14.543 11:42:44 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:12:14.543 11:42:44 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1111 -- # xtrace_disable 00:12:14.543 11:42:44 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:12:14.543 ************************************ 00:12:14.543 START TEST nvmf_bdevio 00:12:14.543 ************************************ 00:12:14.543 11:42:44 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/bdevio.sh --transport=tcp 00:12:14.543 * Looking for test storage... 
00:12:14.543 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:12:14.543 11:42:44 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:12:14.801 11:42:44 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1693 -- # lcov --version 00:12:14.801 11:42:44 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:12:14.801 11:42:44 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:12:14.801 11:42:44 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:12:14.801 11:42:44 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@333 -- # local ver1 ver1_l 00:12:14.801 11:42:44 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@334 -- # local ver2 ver2_l 00:12:14.801 11:42:44 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@336 -- # IFS=.-: 00:12:14.801 11:42:44 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@336 -- # read -ra ver1 00:12:14.801 11:42:44 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@337 -- # IFS=.-: 00:12:14.801 11:42:44 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@337 -- # read -ra ver2 00:12:14.801 11:42:44 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@338 -- # local 'op=<' 00:12:14.801 11:42:44 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@340 -- # ver1_l=2 00:12:14.801 11:42:44 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@341 -- # ver2_l=1 00:12:14.801 11:42:44 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:12:14.801 11:42:44 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@344 -- # case "$op" in 00:12:14.801 11:42:44 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@345 -- # : 1 00:12:14.801 11:42:44 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@364 -- # (( v = 0 )) 00:12:14.801 11:42:44 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:12:14.801 11:42:44 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@365 -- # decimal 1 00:12:14.801 11:42:44 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@353 -- # local d=1 00:12:14.802 11:42:44 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:12:14.802 11:42:44 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@355 -- # echo 1 00:12:14.802 11:42:44 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@365 -- # ver1[v]=1 00:12:14.802 11:42:44 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@366 -- # decimal 2 00:12:14.802 11:42:44 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@353 -- # local d=2 00:12:14.802 11:42:44 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:12:14.802 11:42:44 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@355 -- # echo 2 00:12:14.802 11:42:44 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@366 -- # ver2[v]=2 00:12:14.802 11:42:44 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:12:14.802 11:42:44 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:12:14.802 11:42:44 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@368 -- # return 0 00:12:14.802 11:42:44 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:12:14.802 11:42:44 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:12:14.802 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:14.802 --rc genhtml_branch_coverage=1 00:12:14.802 --rc genhtml_function_coverage=1 00:12:14.802 --rc genhtml_legend=1 00:12:14.802 --rc geninfo_all_blocks=1 00:12:14.802 --rc geninfo_unexecuted_blocks=1 00:12:14.802 00:12:14.802 ' 00:12:14.802 11:42:44 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:12:14.802 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:14.802 --rc genhtml_branch_coverage=1 00:12:14.802 --rc genhtml_function_coverage=1 00:12:14.802 --rc genhtml_legend=1 00:12:14.802 --rc geninfo_all_blocks=1 00:12:14.802 --rc geninfo_unexecuted_blocks=1 00:12:14.802 00:12:14.802 ' 00:12:14.802 11:42:44 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:12:14.802 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:14.802 --rc genhtml_branch_coverage=1 00:12:14.802 --rc genhtml_function_coverage=1 00:12:14.802 --rc genhtml_legend=1 00:12:14.802 --rc geninfo_all_blocks=1 00:12:14.802 --rc geninfo_unexecuted_blocks=1 00:12:14.802 00:12:14.802 ' 00:12:14.802 11:42:44 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:12:14.802 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:14.802 --rc genhtml_branch_coverage=1 00:12:14.802 --rc genhtml_function_coverage=1 00:12:14.802 --rc genhtml_legend=1 00:12:14.802 --rc geninfo_all_blocks=1 00:12:14.802 --rc geninfo_unexecuted_blocks=1 00:12:14.802 00:12:14.802 ' 00:12:14.802 11:42:44 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:12:14.802 11:42:44 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@7 -- # uname -s 00:12:14.802 11:42:44 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@7 -- # [[ Linux 
== FreeBSD ]] 00:12:14.802 11:42:44 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:12:14.802 11:42:44 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:12:14.802 11:42:44 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:12:14.802 11:42:44 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:12:14.802 11:42:44 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:12:14.802 11:42:44 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:12:14.802 11:42:44 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:12:14.802 11:42:44 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:12:14.802 11:42:44 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:12:14.802 11:42:44 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:f820f793-c892-4aa4-a8a4-5ed3fda41d6c 00:12:14.802 11:42:44 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@18 -- # NVME_HOSTID=f820f793-c892-4aa4-a8a4-5ed3fda41d6c 00:12:14.802 11:42:44 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:12:14.802 11:42:44 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:12:14.802 11:42:44 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:12:14.802 11:42:44 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:12:14.802 11:42:44 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:12:14.802 11:42:44 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@15 -- # shopt -s extglob 00:12:14.802 11:42:44 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:12:14.802 11:42:44 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:12:14.802 11:42:44 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:12:14.802 11:42:44 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:14.802 11:42:44 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:14.802 11:42:44 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:14.802 11:42:44 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- paths/export.sh@5 -- # export PATH 00:12:14.802 11:42:44 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:14.802 11:42:44 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@51 -- # : 0 00:12:14.802 11:42:44 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:12:14.802 11:42:44 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:12:14.802 11:42:44 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:12:14.803 11:42:44 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:12:14.803 11:42:44 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:12:14.803 11:42:44 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:12:14.803 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:12:14.803 11:42:44 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:12:14.803 11:42:44 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:12:14.803 11:42:44 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@55 -- # have_pci_nics=0 00:12:14.803 11:42:44 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@11 -- # MALLOC_BDEV_SIZE=64 00:12:14.803 11:42:44 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:12:14.803 11:42:44 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@14 -- # nvmftestinit 
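nvmftestinit, traced next, drives nvmf_veth_init from nvmf/common.sh: it gives the target its own network namespace and wires initiator and target veth pairs together over a bridge so the initiator addresses (10.0.0.1/10.0.0.2) can reach the target addresses (10.0.0.3/10.0.0.4 inside nvmf_tgt_ns_spdk). A stripped-down sketch of just the first initiator/target pair, using only names and addresses that appear in the trace below (the real helper also sets up the second pair and the iptables ACCEPT rules):

    ip netns add nvmf_tgt_ns_spdk
    ip link add nvmf_init_if type veth peer name nvmf_init_br    # initiator end + its bridge leg
    ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br      # target end + its bridge leg
    ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk
    ip addr add 10.0.0.1/24 dev nvmf_init_if
    ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if
    ip link set nvmf_init_if up; ip link set nvmf_init_br up
    ip link set nvmf_tgt_br up; ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
    ip link add nvmf_br type bridge && ip link set nvmf_br up
    ip link set nvmf_init_br master nvmf_br
    ip link set nvmf_tgt_br master nvmf_br
    ping -c 1 10.0.0.3                                           # initiator -> target sanity check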
00:12:14.803 11:42:44 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:12:14.803 11:42:44 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:12:14.803 11:42:44 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@476 -- # prepare_net_devs 00:12:14.803 11:42:44 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@438 -- # local -g is_hw=no 00:12:14.803 11:42:44 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@440 -- # remove_spdk_ns 00:12:14.803 11:42:44 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:12:14.803 11:42:44 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:12:14.803 11:42:44 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:12:14.803 11:42:44 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@442 -- # [[ virt != virt ]] 00:12:14.803 11:42:44 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@444 -- # [[ no == yes ]] 00:12:14.803 11:42:44 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@451 -- # [[ virt == phy ]] 00:12:14.803 11:42:44 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@454 -- # [[ virt == phy-fallback ]] 00:12:14.803 11:42:44 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@459 -- # [[ tcp == tcp ]] 00:12:14.803 11:42:44 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@460 -- # nvmf_veth_init 00:12:14.803 11:42:44 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@145 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:12:14.803 11:42:44 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@146 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.2 00:12:14.803 11:42:44 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@147 -- # NVMF_FIRST_TARGET_IP=10.0.0.3 00:12:14.803 11:42:44 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@148 -- # NVMF_SECOND_TARGET_IP=10.0.0.4 00:12:14.803 11:42:44 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@149 -- # NVMF_INITIATOR_IP=10.0.0.1 00:12:14.803 11:42:44 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@150 -- # NVMF_BRIDGE=nvmf_br 00:12:14.803 11:42:44 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@151 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:12:14.803 11:42:44 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@152 -- # NVMF_INITIATOR_INTERFACE2=nvmf_init_if2 00:12:14.803 11:42:44 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@153 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:12:14.803 11:42:44 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@154 -- # NVMF_INITIATOR_BRIDGE2=nvmf_init_br2 00:12:14.803 11:42:44 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@155 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:12:14.803 11:42:44 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@156 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:12:14.803 11:42:44 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@157 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:12:14.803 11:42:44 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@158 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:12:14.803 11:42:44 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@159 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:12:14.803 11:42:44 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@160 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:12:14.803 11:42:44 nvmf_tcp.nvmf_target_core.nvmf_bdevio 
-- nvmf/common.sh@162 -- # ip link set nvmf_init_br nomaster 00:12:14.803 Cannot find device "nvmf_init_br" 00:12:14.803 11:42:44 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@162 -- # true 00:12:14.803 11:42:44 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@163 -- # ip link set nvmf_init_br2 nomaster 00:12:14.803 Cannot find device "nvmf_init_br2" 00:12:14.803 11:42:44 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@163 -- # true 00:12:14.803 11:42:44 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@164 -- # ip link set nvmf_tgt_br nomaster 00:12:14.803 Cannot find device "nvmf_tgt_br" 00:12:14.803 11:42:44 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@164 -- # true 00:12:14.803 11:42:44 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@165 -- # ip link set nvmf_tgt_br2 nomaster 00:12:14.803 Cannot find device "nvmf_tgt_br2" 00:12:14.803 11:42:44 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@165 -- # true 00:12:14.803 11:42:44 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@166 -- # ip link set nvmf_init_br down 00:12:14.803 Cannot find device "nvmf_init_br" 00:12:14.803 11:42:44 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@166 -- # true 00:12:14.803 11:42:44 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@167 -- # ip link set nvmf_init_br2 down 00:12:14.803 Cannot find device "nvmf_init_br2" 00:12:14.803 11:42:44 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@167 -- # true 00:12:14.803 11:42:44 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@168 -- # ip link set nvmf_tgt_br down 00:12:14.803 Cannot find device "nvmf_tgt_br" 00:12:14.803 11:42:44 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@168 -- # true 00:12:14.803 11:42:44 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@169 -- # ip link set nvmf_tgt_br2 down 00:12:14.803 Cannot find device "nvmf_tgt_br2" 00:12:14.803 11:42:44 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@169 -- # true 00:12:14.803 11:42:44 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@170 -- # ip link delete nvmf_br type bridge 00:12:14.803 Cannot find device "nvmf_br" 00:12:14.803 11:42:44 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@170 -- # true 00:12:14.803 11:42:44 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@171 -- # ip link delete nvmf_init_if 00:12:14.803 Cannot find device "nvmf_init_if" 00:12:14.803 11:42:44 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@171 -- # true 00:12:14.803 11:42:44 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@172 -- # ip link delete nvmf_init_if2 00:12:15.065 Cannot find device "nvmf_init_if2" 00:12:15.065 11:42:44 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@172 -- # true 00:12:15.065 11:42:44 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@173 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:12:15.065 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:12:15.065 11:42:44 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@173 -- # true 00:12:15.065 11:42:44 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@174 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:12:15.065 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:12:15.065 11:42:44 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@174 -- # true 00:12:15.065 11:42:44 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@177 -- # ip netns add nvmf_tgt_ns_spdk 00:12:15.065 
11:42:44 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@180 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:12:15.065 11:42:44 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@181 -- # ip link add nvmf_init_if2 type veth peer name nvmf_init_br2 00:12:15.065 11:42:44 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@182 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:12:15.065 11:42:44 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@183 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:12:15.065 11:42:44 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:12:15.065 11:42:44 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@187 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:12:15.065 11:42:45 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@190 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:12:15.065 11:42:45 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@191 -- # ip addr add 10.0.0.2/24 dev nvmf_init_if2 00:12:15.065 11:42:45 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@192 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if 00:12:15.065 11:42:45 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@193 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2 00:12:15.065 11:42:45 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@196 -- # ip link set nvmf_init_if up 00:12:15.065 11:42:45 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@197 -- # ip link set nvmf_init_if2 up 00:12:15.065 11:42:45 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@198 -- # ip link set nvmf_init_br up 00:12:15.065 11:42:45 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@199 -- # ip link set nvmf_init_br2 up 00:12:15.065 11:42:45 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@200 -- # ip link set nvmf_tgt_br up 00:12:15.065 11:42:45 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@201 -- # ip link set nvmf_tgt_br2 up 00:12:15.065 11:42:45 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@202 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:12:15.065 11:42:45 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@203 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:12:15.065 11:42:45 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@204 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:12:15.065 11:42:45 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@207 -- # ip link add nvmf_br type bridge 00:12:15.065 11:42:45 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@208 -- # ip link set nvmf_br up 00:12:15.065 11:42:45 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@211 -- # ip link set nvmf_init_br master nvmf_br 00:12:15.065 11:42:45 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@212 -- # ip link set nvmf_init_br2 master nvmf_br 00:12:15.065 11:42:45 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@213 -- # ip link set nvmf_tgt_br master nvmf_br 00:12:15.065 11:42:45 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@214 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:12:15.065 11:42:45 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@217 -- # ipts -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:12:15.065 11:42:45 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 
4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT' 00:12:15.065 11:42:45 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@218 -- # ipts -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT 00:12:15.066 11:42:45 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT' 00:12:15.066 11:42:45 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@219 -- # ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:12:15.066 11:42:45 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@790 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT' 00:12:15.066 11:42:45 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@222 -- # ping -c 1 10.0.0.3 00:12:15.066 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:12:15.066 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.095 ms 00:12:15.066 00:12:15.066 --- 10.0.0.3 ping statistics --- 00:12:15.066 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:12:15.066 rtt min/avg/max/mdev = 0.095/0.095/0.095/0.000 ms 00:12:15.066 11:42:45 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@223 -- # ping -c 1 10.0.0.4 00:12:15.066 PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data. 00:12:15.066 64 bytes from 10.0.0.4: icmp_seq=1 ttl=64 time=0.075 ms 00:12:15.066 00:12:15.066 --- 10.0.0.4 ping statistics --- 00:12:15.066 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:12:15.066 rtt min/avg/max/mdev = 0.075/0.075/0.075/0.000 ms 00:12:15.066 11:42:45 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@224 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:12:15.066 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:12:15.066 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.077 ms 00:12:15.066 00:12:15.066 --- 10.0.0.1 ping statistics --- 00:12:15.066 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:12:15.066 rtt min/avg/max/mdev = 0.077/0.077/0.077/0.000 ms 00:12:15.066 11:42:45 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@225 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.2 00:12:15.326 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:12:15.326 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.062 ms 00:12:15.326 00:12:15.326 --- 10.0.0.2 ping statistics --- 00:12:15.326 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:12:15.326 rtt min/avg/max/mdev = 0.062/0.062/0.062/0.000 ms 00:12:15.326 11:42:45 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@227 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:12:15.326 11:42:45 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@461 -- # return 0 00:12:15.326 11:42:45 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:12:15.326 11:42:45 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:12:15.327 11:42:45 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:12:15.327 11:42:45 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:12:15.327 11:42:45 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:12:15.327 11:42:45 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:12:15.327 11:42:45 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:12:15.327 11:42:45 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@16 -- # nvmfappstart -m 0x78 00:12:15.327 11:42:45 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:12:15.327 11:42:45 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@726 -- # xtrace_disable 00:12:15.327 11:42:45 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:12:15.327 11:42:45 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@509 -- # nvmfpid=81020 00:12:15.327 11:42:45 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@508 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x78 00:12:15.327 11:42:45 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@510 -- # waitforlisten 81020 00:12:15.327 11:42:45 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@835 -- # '[' -z 81020 ']' 00:12:15.327 11:42:45 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:12:15.327 11:42:45 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@840 -- # local max_retries=100 00:12:15.327 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:12:15.327 11:42:45 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:12:15.327 11:42:45 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@844 -- # xtrace_disable 00:12:15.327 11:42:45 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:12:15.327 [2024-11-28 11:42:45.285465] Starting SPDK v25.01-pre git sha1 35cd3e84d / DPDK 24.11.0-rc4 initialization... 00:12:15.327 [2024-11-28 11:42:45.285554] [ DPDK EAL parameters: nvmf -c 0x78 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:12:15.327 [2024-11-28 11:42:45.416510] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.11.0-rc4 is used. 
There is no support for it in SPDK. Enabled only for validation. 00:12:15.327 [2024-11-28 11:42:45.448145] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:12:15.589 [2024-11-28 11:42:45.519027] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:12:15.589 [2024-11-28 11:42:45.519625] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:12:15.589 [2024-11-28 11:42:45.520145] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:12:15.589 [2024-11-28 11:42:45.520728] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:12:15.589 [2024-11-28 11:42:45.521048] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:12:15.589 [2024-11-28 11:42:45.522936] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 4 00:12:15.589 [2024-11-28 11:42:45.523031] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 5 00:12:15.589 [2024-11-28 11:42:45.523158] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 6 00:12:15.589 [2024-11-28 11:42:45.523166] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:12:15.589 [2024-11-28 11:42:45.601229] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:12:16.548 11:42:46 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:12:16.548 11:42:46 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@868 -- # return 0 00:12:16.548 11:42:46 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:12:16.548 11:42:46 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@732 -- # xtrace_disable 00:12:16.548 11:42:46 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:12:16.548 11:42:46 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:12:16.548 11:42:46 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@18 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:12:16.548 11:42:46 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:16.548 11:42:46 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:12:16.548 [2024-11-28 11:42:46.417669] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:12:16.548 11:42:46 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:16.548 11:42:46 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@19 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:12:16.548 11:42:46 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:16.548 11:42:46 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:12:16.548 Malloc0 00:12:16.548 11:42:46 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:16.548 11:42:46 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@20 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:12:16.548 11:42:46 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:16.548 11:42:46 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 
00:12:16.548 11:42:46 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:16.548 11:42:46 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@21 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:12:16.549 11:42:46 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:16.549 11:42:46 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:12:16.549 11:42:46 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:16.549 11:42:46 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@22 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 00:12:16.549 11:42:46 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:16.549 11:42:46 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:12:16.549 [2024-11-28 11:42:46.497955] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:12:16.549 11:42:46 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:16.549 11:42:46 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@24 -- # /home/vagrant/spdk_repo/spdk/test/bdev/bdevio/bdevio --json /dev/fd/62 00:12:16.549 11:42:46 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@24 -- # gen_nvmf_target_json 00:12:16.549 11:42:46 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@560 -- # config=() 00:12:16.549 11:42:46 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@560 -- # local subsystem config 00:12:16.549 11:42:46 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:12:16.549 11:42:46 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:12:16.549 { 00:12:16.549 "params": { 00:12:16.549 "name": "Nvme$subsystem", 00:12:16.549 "trtype": "$TEST_TRANSPORT", 00:12:16.549 "traddr": "$NVMF_FIRST_TARGET_IP", 00:12:16.549 "adrfam": "ipv4", 00:12:16.549 "trsvcid": "$NVMF_PORT", 00:12:16.549 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:12:16.549 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:12:16.549 "hdgst": ${hdgst:-false}, 00:12:16.549 "ddgst": ${ddgst:-false} 00:12:16.549 }, 00:12:16.549 "method": "bdev_nvme_attach_controller" 00:12:16.549 } 00:12:16.549 EOF 00:12:16.549 )") 00:12:16.549 11:42:46 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@582 -- # cat 00:12:16.549 11:42:46 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@584 -- # jq . 00:12:16.549 11:42:46 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@585 -- # IFS=, 00:12:16.549 11:42:46 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:12:16.549 "params": { 00:12:16.549 "name": "Nvme1", 00:12:16.549 "trtype": "tcp", 00:12:16.549 "traddr": "10.0.0.3", 00:12:16.549 "adrfam": "ipv4", 00:12:16.549 "trsvcid": "4420", 00:12:16.549 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:12:16.549 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:12:16.549 "hdgst": false, 00:12:16.549 "ddgst": false 00:12:16.549 }, 00:12:16.549 "method": "bdev_nvme_attach_controller" 00:12:16.549 }' 00:12:16.549 [2024-11-28 11:42:46.564672] Starting SPDK v25.01-pre git sha1 35cd3e84d / DPDK 24.11.0-rc4 initialization... 
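The JSON printed by gen_nvmf_target_json above is what the bdevio binary reads on --json /dev/fd/62: it attaches a single NVMe-oF controller named Nvme1 over TCP to the listener created a moment earlier (10.0.0.3:4420, subsystem cnode1, host NQN host1, header and data digests disabled). Against a long-running SPDK application the same attach could be expressed as one RPC; this is only a sketch, with the socket path assumed:
# Sketch: the controller attach described by the generated JSON, issued as a direct RPC call.
rpc.py -s /var/tmp/spdk.sock bdev_nvme_attach_controller \
    -b Nvme1 -t tcp -a 10.0.0.3 -f ipv4 -s 4420 \
    -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1
# The resulting block device is Nvme1n1 (controller name + namespace 1), which is the
# "I/O target" bdevio reports below.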
00:12:16.549 [2024-11-28 11:42:46.564779] [ DPDK EAL parameters: bdevio --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid81057 ] 00:12:16.807 [2024-11-28 11:42:46.696625] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.11.0-rc4 is used. There is no support for it in SPDK. Enabled only for validation. 00:12:16.807 [2024-11-28 11:42:46.729223] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:12:16.807 [2024-11-28 11:42:46.803820] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:12:16.807 [2024-11-28 11:42:46.803970] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:12:16.807 [2024-11-28 11:42:46.803982] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:12:16.807 [2024-11-28 11:42:46.892906] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:12:17.066 I/O targets: 00:12:17.066 Nvme1n1: 131072 blocks of 512 bytes (64 MiB) 00:12:17.066 00:12:17.066 00:12:17.066 CUnit - A unit testing framework for C - Version 2.1-3 00:12:17.066 http://cunit.sourceforge.net/ 00:12:17.066 00:12:17.066 00:12:17.066 Suite: bdevio tests on: Nvme1n1 00:12:17.066 Test: blockdev write read block ...passed 00:12:17.066 Test: blockdev write zeroes read block ...passed 00:12:17.066 Test: blockdev write zeroes read no split ...passed 00:12:17.066 Test: blockdev write zeroes read split ...passed 00:12:17.066 Test: blockdev write zeroes read split partial ...passed 00:12:17.066 Test: blockdev reset ...[2024-11-28 11:42:47.061868] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 1] resetting controller 00:12:17.066 [2024-11-28 11:42:47.062005] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x528e80 (9): Bad file descriptor 00:12:17.066 passed 00:12:17.066 Test: blockdev write read 8 blocks ...[2024-11-28 11:42:47.074603] bdev_nvme.c:2282:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller successful. 
00:12:17.066 passed 00:12:17.066 Test: blockdev write read size > 128k ...passed 00:12:17.066 Test: blockdev write read invalid size ...passed 00:12:17.066 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:12:17.066 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:12:17.066 Test: blockdev write read max offset ...passed 00:12:17.066 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:12:17.066 Test: blockdev writev readv 8 blocks ...passed 00:12:17.066 Test: blockdev writev readv 30 x 1block ...passed 00:12:17.066 Test: blockdev writev readv block ...passed 00:12:17.066 Test: blockdev writev readv size > 128k ...passed 00:12:17.066 Test: blockdev writev readv size > 128k in two iovs ...passed 00:12:17.066 Test: blockdev comparev and writev ...[2024-11-28 11:42:47.084428] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:12:17.066 [2024-11-28 11:42:47.084644] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:12:17.066 [2024-11-28 11:42:47.084679] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:12:17.066 [2024-11-28 11:42:47.084694] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:12:17.066 [2024-11-28 11:42:47.085064] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:12:17.066 [2024-11-28 11:42:47.085093] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:1 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:12:17.066 [2024-11-28 11:42:47.085115] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:12:17.066 [2024-11-28 11:42:47.085128] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:0 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:12:17.066 [2024-11-28 11:42:47.085526] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:12:17.066 [2024-11-28 11:42:47.085553] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:0 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:12:17.066 [2024-11-28 11:42:47.085576] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:12:17.066 [2024-11-28 11:42:47.085589] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:1 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:12:17.066 [2024-11-28 11:42:47.086022] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:12:17.066 [2024-11-28 11:42:47.086059] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:1 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:12:17.066 [2024-11-28 11:42:47.086081] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:12:17.066 [2024-11-28 11:42:47.086094] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:0 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 
00:12:17.066 passed 00:12:17.066 Test: blockdev nvme passthru rw ...passed 00:12:17.066 Test: blockdev nvme passthru vendor specific ...[2024-11-28 11:42:47.087067] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:12:17.066 [2024-11-28 11:42:47.087103] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:12:17.066 [2024-11-28 11:42:47.087233] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:12:17.066 [2024-11-28 11:42:47.087455] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:12:17.066 passed 00:12:17.066 Test: blockdev nvme admin passthru ...[2024-11-28 11:42:47.087734] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:12:17.066 [2024-11-28 11:42:47.087764] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:12:17.066 [2024-11-28 11:42:47.087924] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:12:17.066 [2024-11-28 11:42:47.087949] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:12:17.066 passed 00:12:17.066 Test: blockdev copy ...passed 00:12:17.066 00:12:17.066 Run Summary: Type Total Ran Passed Failed Inactive 00:12:17.066 suites 1 1 n/a 0 0 00:12:17.066 tests 23 23 23 0 0 00:12:17.066 asserts 152 152 152 0 n/a 00:12:17.066 00:12:17.066 Elapsed time = 0.151 seconds 00:12:17.325 11:42:47 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@26 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:12:17.325 11:42:47 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:17.325 11:42:47 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:12:17.325 11:42:47 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:17.325 11:42:47 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@28 -- # trap - SIGINT SIGTERM EXIT 00:12:17.325 11:42:47 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@30 -- # nvmftestfini 00:12:17.325 11:42:47 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@516 -- # nvmfcleanup 00:12:17.325 11:42:47 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@121 -- # sync 00:12:17.325 11:42:47 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:12:17.325 11:42:47 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@124 -- # set +e 00:12:17.325 11:42:47 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@125 -- # for i in {1..20} 00:12:17.325 11:42:47 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:12:17.325 rmmod nvme_tcp 00:12:17.325 rmmod nvme_fabrics 00:12:17.325 rmmod nvme_keyring 00:12:17.325 11:42:47 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:12:17.325 11:42:47 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@128 -- # set -e 00:12:17.325 11:42:47 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@129 -- # return 0 00:12:17.325 11:42:47 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- 
nvmf/common.sh@517 -- # '[' -n 81020 ']' 00:12:17.325 11:42:47 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@518 -- # killprocess 81020 00:12:17.325 11:42:47 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@954 -- # '[' -z 81020 ']' 00:12:17.325 11:42:47 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@958 -- # kill -0 81020 00:12:17.325 11:42:47 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@959 -- # uname 00:12:17.325 11:42:47 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:12:17.584 11:42:47 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 81020 00:12:17.584 killing process with pid 81020 00:12:17.584 11:42:47 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@960 -- # process_name=reactor_3 00:12:17.584 11:42:47 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@964 -- # '[' reactor_3 = sudo ']' 00:12:17.584 11:42:47 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@972 -- # echo 'killing process with pid 81020' 00:12:17.584 11:42:47 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@973 -- # kill 81020 00:12:17.584 11:42:47 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@978 -- # wait 81020 00:12:17.843 11:42:47 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:12:17.843 11:42:47 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:12:17.843 11:42:47 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:12:17.843 11:42:47 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@297 -- # iptr 00:12:17.843 11:42:47 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@791 -- # iptables-save 00:12:17.843 11:42:47 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:12:17.843 11:42:47 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@791 -- # iptables-restore 00:12:17.843 11:42:47 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@298 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:12:17.843 11:42:47 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@299 -- # nvmf_veth_fini 00:12:17.843 11:42:47 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@233 -- # ip link set nvmf_init_br nomaster 00:12:17.843 11:42:47 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@234 -- # ip link set nvmf_init_br2 nomaster 00:12:17.843 11:42:47 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@235 -- # ip link set nvmf_tgt_br nomaster 00:12:17.843 11:42:47 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@236 -- # ip link set nvmf_tgt_br2 nomaster 00:12:17.843 11:42:47 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@237 -- # ip link set nvmf_init_br down 00:12:17.843 11:42:47 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@238 -- # ip link set nvmf_init_br2 down 00:12:17.843 11:42:47 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@239 -- # ip link set nvmf_tgt_br down 00:12:17.843 11:42:47 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@240 -- # ip link set nvmf_tgt_br2 down 00:12:17.843 11:42:47 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@241 -- # ip link delete nvmf_br type bridge 00:12:17.843 11:42:47 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@242 -- # ip link delete nvmf_init_if 00:12:17.843 11:42:47 
nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@243 -- # ip link delete nvmf_init_if2 00:12:18.102 11:42:47 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@244 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:12:18.102 11:42:48 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@245 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:12:18.102 11:42:48 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@246 -- # remove_spdk_ns 00:12:18.102 11:42:48 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:12:18.102 11:42:48 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:12:18.102 11:42:48 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:12:18.102 11:42:48 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@300 -- # return 0 00:12:18.102 00:12:18.102 real 0m3.505s 00:12:18.102 user 0m10.709s 00:12:18.102 sys 0m1.089s 00:12:18.102 11:42:48 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1130 -- # xtrace_disable 00:12:18.102 ************************************ 00:12:18.102 11:42:48 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:12:18.102 END TEST nvmf_bdevio 00:12:18.102 ************************************ 00:12:18.102 11:42:48 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@37 -- # trap - SIGINT SIGTERM EXIT 00:12:18.102 ************************************ 00:12:18.102 END TEST nvmf_target_core 00:12:18.102 ************************************ 00:12:18.102 00:12:18.102 real 2m36.943s 00:12:18.102 user 6m55.435s 00:12:18.102 sys 0m52.063s 00:12:18.102 11:42:48 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1130 -- # xtrace_disable 00:12:18.102 11:42:48 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:12:18.102 11:42:48 nvmf_tcp -- nvmf/nvmf.sh@15 -- # run_test nvmf_target_extra /home/vagrant/spdk_repo/spdk/test/nvmf/nvmf_target_extra.sh --transport=tcp 00:12:18.102 11:42:48 nvmf_tcp -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:12:18.102 11:42:48 nvmf_tcp -- common/autotest_common.sh@1111 -- # xtrace_disable 00:12:18.102 11:42:48 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:12:18.102 ************************************ 00:12:18.102 START TEST nvmf_target_extra 00:12:18.102 ************************************ 00:12:18.102 11:42:48 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/nvmf_target_extra.sh --transport=tcp 00:12:18.361 * Looking for test storage... 
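Each stage of the suite is launched through the run_test helper from autotest_common.sh, which is what produces the starred START TEST/END TEST banners and the real/user/sys timings seen above for nvmf_bdevio and nvmf_target_core and again below for nvmf_target_extra and nvmf_auth_target. The real helper is not reproduced in this log; purely as a rough, illustrative stand-in, a wrapper with the same observable behaviour could look like:
# Illustrative sketch only -- not the actual autotest_common.sh implementation.
run_test() {
    local name=$1; shift
    echo "************************************"
    echo "START TEST $name"
    echo "************************************"
    time "$@"                               # bash's time keyword prints the real/user/sys lines
    echo "************************************"
    echo "END TEST $name"
    echo "************************************"
}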
00:12:18.361 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf 00:12:18.361 11:42:48 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:12:18.361 11:42:48 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1693 -- # lcov --version 00:12:18.361 11:42:48 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:12:18.361 11:42:48 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:12:18.361 11:42:48 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:12:18.361 11:42:48 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@333 -- # local ver1 ver1_l 00:12:18.361 11:42:48 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@334 -- # local ver2 ver2_l 00:12:18.362 11:42:48 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@336 -- # IFS=.-: 00:12:18.362 11:42:48 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@336 -- # read -ra ver1 00:12:18.362 11:42:48 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@337 -- # IFS=.-: 00:12:18.362 11:42:48 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@337 -- # read -ra ver2 00:12:18.362 11:42:48 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@338 -- # local 'op=<' 00:12:18.362 11:42:48 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@340 -- # ver1_l=2 00:12:18.362 11:42:48 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@341 -- # ver2_l=1 00:12:18.362 11:42:48 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:12:18.362 11:42:48 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@344 -- # case "$op" in 00:12:18.362 11:42:48 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@345 -- # : 1 00:12:18.362 11:42:48 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@364 -- # (( v = 0 )) 00:12:18.362 11:42:48 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:12:18.362 11:42:48 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@365 -- # decimal 1 00:12:18.362 11:42:48 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@353 -- # local d=1 00:12:18.362 11:42:48 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:12:18.362 11:42:48 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@355 -- # echo 1 00:12:18.362 11:42:48 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@365 -- # ver1[v]=1 00:12:18.362 11:42:48 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@366 -- # decimal 2 00:12:18.362 11:42:48 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@353 -- # local d=2 00:12:18.362 11:42:48 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:12:18.362 11:42:48 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@355 -- # echo 2 00:12:18.362 11:42:48 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@366 -- # ver2[v]=2 00:12:18.362 11:42:48 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:12:18.362 11:42:48 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:12:18.362 11:42:48 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@368 -- # return 0 00:12:18.362 11:42:48 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:12:18.362 11:42:48 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:12:18.362 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:18.362 --rc genhtml_branch_coverage=1 00:12:18.362 --rc genhtml_function_coverage=1 00:12:18.362 --rc genhtml_legend=1 00:12:18.362 --rc geninfo_all_blocks=1 00:12:18.362 --rc geninfo_unexecuted_blocks=1 00:12:18.362 00:12:18.362 ' 00:12:18.362 11:42:48 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:12:18.362 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:18.362 --rc genhtml_branch_coverage=1 00:12:18.362 --rc genhtml_function_coverage=1 00:12:18.362 --rc genhtml_legend=1 00:12:18.362 --rc geninfo_all_blocks=1 00:12:18.362 --rc geninfo_unexecuted_blocks=1 00:12:18.362 00:12:18.362 ' 00:12:18.362 11:42:48 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:12:18.362 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:18.362 --rc genhtml_branch_coverage=1 00:12:18.362 --rc genhtml_function_coverage=1 00:12:18.362 --rc genhtml_legend=1 00:12:18.362 --rc geninfo_all_blocks=1 00:12:18.362 --rc geninfo_unexecuted_blocks=1 00:12:18.362 00:12:18.362 ' 00:12:18.362 11:42:48 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:12:18.362 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:18.362 --rc genhtml_branch_coverage=1 00:12:18.362 --rc genhtml_function_coverage=1 00:12:18.362 --rc genhtml_legend=1 00:12:18.362 --rc geninfo_all_blocks=1 00:12:18.362 --rc geninfo_unexecuted_blocks=1 00:12:18.362 00:12:18.362 ' 00:12:18.362 11:42:48 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:12:18.362 11:42:48 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@7 -- # uname -s 00:12:18.362 11:42:48 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:12:18.362 11:42:48 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:12:18.362 11:42:48 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:12:18.362 11:42:48 
nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:12:18.362 11:42:48 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:12:18.362 11:42:48 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:12:18.362 11:42:48 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:12:18.362 11:42:48 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:12:18.362 11:42:48 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:12:18.362 11:42:48 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:12:18.362 11:42:48 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:f820f793-c892-4aa4-a8a4-5ed3fda41d6c 00:12:18.362 11:42:48 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@18 -- # NVME_HOSTID=f820f793-c892-4aa4-a8a4-5ed3fda41d6c 00:12:18.362 11:42:48 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:12:18.362 11:42:48 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:12:18.362 11:42:48 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:12:18.362 11:42:48 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:12:18.362 11:42:48 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:12:18.362 11:42:48 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@15 -- # shopt -s extglob 00:12:18.362 11:42:48 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:12:18.362 11:42:48 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:12:18.362 11:42:48 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:12:18.362 11:42:48 nvmf_tcp.nvmf_target_extra -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:18.362 11:42:48 nvmf_tcp.nvmf_target_extra -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:18.362 11:42:48 nvmf_tcp.nvmf_target_extra -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:18.362 11:42:48 nvmf_tcp.nvmf_target_extra -- paths/export.sh@5 -- # export PATH 00:12:18.362 11:42:48 nvmf_tcp.nvmf_target_extra -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:18.362 11:42:48 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@51 -- # : 0 00:12:18.362 11:42:48 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:12:18.362 11:42:48 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:12:18.362 11:42:48 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:12:18.362 11:42:48 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:12:18.362 11:42:48 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:12:18.362 11:42:48 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:12:18.362 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:12:18.362 11:42:48 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:12:18.362 11:42:48 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:12:18.362 11:42:48 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@55 -- # have_pci_nics=0 00:12:18.362 11:42:48 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@11 -- # trap 'exit 1' SIGINT SIGTERM EXIT 00:12:18.362 11:42:48 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@13 -- # TEST_ARGS=("$@") 00:12:18.362 11:42:48 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@15 -- # [[ 1 -eq 0 ]] 00:12:18.363 11:42:48 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@37 -- # run_test nvmf_auth_target /home/vagrant/spdk_repo/spdk/test/nvmf/target/auth.sh --transport=tcp 00:12:18.363 11:42:48 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:12:18.363 11:42:48 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:12:18.363 11:42:48 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:12:18.363 ************************************ 00:12:18.363 START TEST nvmf_auth_target 00:12:18.363 ************************************ 00:12:18.363 11:42:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/auth.sh --transport=tcp 00:12:18.623 * Looking for test storage... 
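The message "/home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected", which recurs each time common.sh is sourced in this run, is benign: the traced test '[' '' -eq 1 ']' shows that the flag variable compared on that line is empty in this environment, the test built-in cannot compare an empty string numerically, and the condition simply evaluates false so the script continues. A minimal reproduction and the usual guard (the variable name here is illustrative, not the one used in common.sh):
flag=""                      # empty/unset flag, as in this run
[ "$flag" -eq 1 ]            # -> [: : integer expression expected, non-zero exit status
[ "${flag:-0}" -eq 1 ]       # guarded form: defaults to 0 and compares cleanly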
00:12:18.623 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:12:18.623 11:42:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:12:18.623 11:42:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:12:18.623 11:42:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1693 -- # lcov --version 00:12:18.623 11:42:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:12:18.623 11:42:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:12:18.623 11:42:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@333 -- # local ver1 ver1_l 00:12:18.623 11:42:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@334 -- # local ver2 ver2_l 00:12:18.623 11:42:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@336 -- # IFS=.-: 00:12:18.623 11:42:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@336 -- # read -ra ver1 00:12:18.623 11:42:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@337 -- # IFS=.-: 00:12:18.623 11:42:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@337 -- # read -ra ver2 00:12:18.623 11:42:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@338 -- # local 'op=<' 00:12:18.623 11:42:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@340 -- # ver1_l=2 00:12:18.623 11:42:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@341 -- # ver2_l=1 00:12:18.623 11:42:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:12:18.623 11:42:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@344 -- # case "$op" in 00:12:18.623 11:42:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@345 -- # : 1 00:12:18.623 11:42:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@364 -- # (( v = 0 )) 00:12:18.623 11:42:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:12:18.623 11:42:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@365 -- # decimal 1 00:12:18.623 11:42:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@353 -- # local d=1 00:12:18.623 11:42:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:12:18.623 11:42:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@355 -- # echo 1 00:12:18.623 11:42:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@365 -- # ver1[v]=1 00:12:18.623 11:42:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@366 -- # decimal 2 00:12:18.623 11:42:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@353 -- # local d=2 00:12:18.623 11:42:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:12:18.623 11:42:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@355 -- # echo 2 00:12:18.623 11:42:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@366 -- # ver2[v]=2 00:12:18.623 11:42:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:12:18.623 11:42:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:12:18.623 11:42:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@368 -- # return 0 00:12:18.623 11:42:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:12:18.623 11:42:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:12:18.623 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:18.623 --rc genhtml_branch_coverage=1 00:12:18.623 --rc genhtml_function_coverage=1 00:12:18.623 --rc genhtml_legend=1 00:12:18.623 --rc geninfo_all_blocks=1 00:12:18.623 --rc geninfo_unexecuted_blocks=1 00:12:18.623 00:12:18.623 ' 00:12:18.623 11:42:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:12:18.623 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:18.623 --rc genhtml_branch_coverage=1 00:12:18.623 --rc genhtml_function_coverage=1 00:12:18.623 --rc genhtml_legend=1 00:12:18.623 --rc geninfo_all_blocks=1 00:12:18.623 --rc geninfo_unexecuted_blocks=1 00:12:18.623 00:12:18.623 ' 00:12:18.623 11:42:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:12:18.623 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:18.623 --rc genhtml_branch_coverage=1 00:12:18.623 --rc genhtml_function_coverage=1 00:12:18.623 --rc genhtml_legend=1 00:12:18.623 --rc geninfo_all_blocks=1 00:12:18.623 --rc geninfo_unexecuted_blocks=1 00:12:18.623 00:12:18.623 ' 00:12:18.623 11:42:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:12:18.623 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:18.623 --rc genhtml_branch_coverage=1 00:12:18.623 --rc genhtml_function_coverage=1 00:12:18.623 --rc genhtml_legend=1 00:12:18.623 --rc geninfo_all_blocks=1 00:12:18.623 --rc geninfo_unexecuted_blocks=1 00:12:18.623 00:12:18.623 ' 00:12:18.623 11:42:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:12:18.623 11:42:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
nvmf/common.sh@7 -- # uname -s 00:12:18.623 11:42:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:12:18.623 11:42:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:12:18.623 11:42:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:12:18.623 11:42:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:12:18.623 11:42:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:12:18.623 11:42:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:12:18.623 11:42:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:12:18.623 11:42:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:12:18.623 11:42:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:12:18.623 11:42:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:12:18.623 11:42:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:f820f793-c892-4aa4-a8a4-5ed3fda41d6c 00:12:18.623 11:42:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@18 -- # NVME_HOSTID=f820f793-c892-4aa4-a8a4-5ed3fda41d6c 00:12:18.623 11:42:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:12:18.623 11:42:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:12:18.623 11:42:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:12:18.623 11:42:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:12:18.623 11:42:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:12:18.623 11:42:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@15 -- # shopt -s extglob 00:12:18.623 11:42:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:12:18.623 11:42:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:12:18.623 11:42:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:12:18.623 11:42:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:18.623 11:42:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:18.623 11:42:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:18.623 11:42:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- paths/export.sh@5 -- # export PATH 00:12:18.624 11:42:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:18.624 11:42:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@51 -- # : 0 00:12:18.624 11:42:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:12:18.624 11:42:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:12:18.624 11:42:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:12:18.624 11:42:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:12:18.624 11:42:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:12:18.624 11:42:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:12:18.624 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:12:18.624 11:42:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:12:18.624 11:42:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:12:18.624 11:42:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@55 -- # have_pci_nics=0 00:12:18.624 11:42:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@13 -- # digests=("sha256" "sha384" "sha512") 00:12:18.624 11:42:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@14 -- # dhgroups=("null" 
"ffdhe2048" "ffdhe3072" "ffdhe4096" "ffdhe6144" "ffdhe8192") 00:12:18.624 11:42:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@15 -- # subnqn=nqn.2024-03.io.spdk:cnode0 00:12:18.624 11:42:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@16 -- # hostnqn=nqn.2014-08.org.nvmexpress:uuid:f820f793-c892-4aa4-a8a4-5ed3fda41d6c 00:12:18.624 11:42:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@17 -- # hostsock=/var/tmp/host.sock 00:12:18.624 11:42:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@18 -- # keys=() 00:12:18.624 11:42:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@18 -- # ckeys=() 00:12:18.624 11:42:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@86 -- # nvmftestinit 00:12:18.624 11:42:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:12:18.624 11:42:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:12:18.624 11:42:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@476 -- # prepare_net_devs 00:12:18.624 11:42:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@438 -- # local -g is_hw=no 00:12:18.624 11:42:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@440 -- # remove_spdk_ns 00:12:18.624 11:42:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:12:18.624 11:42:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:12:18.624 11:42:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:12:18.624 11:42:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@442 -- # [[ virt != virt ]] 00:12:18.624 11:42:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@444 -- # [[ no == yes ]] 00:12:18.624 11:42:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@451 -- # [[ virt == phy ]] 00:12:18.624 11:42:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@454 -- # [[ virt == phy-fallback ]] 00:12:18.624 11:42:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@459 -- # [[ tcp == tcp ]] 00:12:18.624 11:42:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@460 -- # nvmf_veth_init 00:12:18.624 11:42:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@145 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:12:18.624 11:42:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@146 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.2 00:12:18.624 11:42:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@147 -- # NVMF_FIRST_TARGET_IP=10.0.0.3 00:12:18.624 11:42:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@148 -- # NVMF_SECOND_TARGET_IP=10.0.0.4 00:12:18.624 11:42:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@149 -- # NVMF_INITIATOR_IP=10.0.0.1 00:12:18.624 11:42:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@150 -- # NVMF_BRIDGE=nvmf_br 00:12:18.624 11:42:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@151 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:12:18.624 11:42:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@152 -- # NVMF_INITIATOR_INTERFACE2=nvmf_init_if2 00:12:18.624 11:42:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@153 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:12:18.624 
11:42:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@154 -- # NVMF_INITIATOR_BRIDGE2=nvmf_init_br2 00:12:18.624 11:42:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@155 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:12:18.624 11:42:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@156 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:12:18.624 11:42:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@157 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:12:18.624 11:42:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@158 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:12:18.624 11:42:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@159 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:12:18.624 11:42:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@160 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:12:18.624 11:42:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@162 -- # ip link set nvmf_init_br nomaster 00:12:18.624 Cannot find device "nvmf_init_br" 00:12:18.624 11:42:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@162 -- # true 00:12:18.624 11:42:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@163 -- # ip link set nvmf_init_br2 nomaster 00:12:18.624 Cannot find device "nvmf_init_br2" 00:12:18.624 11:42:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@163 -- # true 00:12:18.624 11:42:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@164 -- # ip link set nvmf_tgt_br nomaster 00:12:18.624 Cannot find device "nvmf_tgt_br" 00:12:18.624 11:42:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@164 -- # true 00:12:18.624 11:42:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@165 -- # ip link set nvmf_tgt_br2 nomaster 00:12:18.624 Cannot find device "nvmf_tgt_br2" 00:12:18.624 11:42:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@165 -- # true 00:12:18.624 11:42:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@166 -- # ip link set nvmf_init_br down 00:12:18.624 Cannot find device "nvmf_init_br" 00:12:18.624 11:42:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@166 -- # true 00:12:18.624 11:42:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@167 -- # ip link set nvmf_init_br2 down 00:12:18.624 Cannot find device "nvmf_init_br2" 00:12:18.624 11:42:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@167 -- # true 00:12:18.624 11:42:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@168 -- # ip link set nvmf_tgt_br down 00:12:18.624 Cannot find device "nvmf_tgt_br" 00:12:18.624 11:42:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@168 -- # true 00:12:18.624 11:42:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@169 -- # ip link set nvmf_tgt_br2 down 00:12:18.883 Cannot find device "nvmf_tgt_br2" 00:12:18.883 11:42:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@169 -- # true 00:12:18.883 11:42:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@170 -- # ip link delete nvmf_br type bridge 00:12:18.883 Cannot find device "nvmf_br" 00:12:18.883 11:42:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@170 -- # true 00:12:18.883 11:42:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@171 -- # ip link delete nvmf_init_if 00:12:18.883 Cannot find device "nvmf_init_if" 00:12:18.883 11:42:48 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@171 -- # true 00:12:18.883 11:42:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@172 -- # ip link delete nvmf_init_if2 00:12:18.883 Cannot find device "nvmf_init_if2" 00:12:18.883 11:42:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@172 -- # true 00:12:18.883 11:42:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@173 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:12:18.883 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:12:18.883 11:42:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@173 -- # true 00:12:18.883 11:42:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@174 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:12:18.883 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:12:18.883 11:42:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@174 -- # true 00:12:18.883 11:42:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@177 -- # ip netns add nvmf_tgt_ns_spdk 00:12:18.883 11:42:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@180 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:12:18.883 11:42:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@181 -- # ip link add nvmf_init_if2 type veth peer name nvmf_init_br2 00:12:18.883 11:42:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@182 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:12:18.883 11:42:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@183 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:12:18.883 11:42:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:12:18.883 11:42:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@187 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:12:18.883 11:42:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@190 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:12:18.883 11:42:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@191 -- # ip addr add 10.0.0.2/24 dev nvmf_init_if2 00:12:18.883 11:42:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@192 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if 00:12:18.883 11:42:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@193 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2 00:12:18.883 11:42:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@196 -- # ip link set nvmf_init_if up 00:12:18.883 11:42:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@197 -- # ip link set nvmf_init_if2 up 00:12:18.883 11:42:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@198 -- # ip link set nvmf_init_br up 00:12:18.883 11:42:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@199 -- # ip link set nvmf_init_br2 up 00:12:18.883 11:42:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@200 -- # ip link set nvmf_tgt_br up 00:12:18.883 11:42:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@201 -- # ip link set nvmf_tgt_br2 up 00:12:18.883 11:42:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@202 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:12:18.883 11:42:48 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@203 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:12:18.883 11:42:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@204 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:12:18.883 11:42:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@207 -- # ip link add nvmf_br type bridge 00:12:18.883 11:42:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@208 -- # ip link set nvmf_br up 00:12:18.883 11:42:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@211 -- # ip link set nvmf_init_br master nvmf_br 00:12:18.883 11:42:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@212 -- # ip link set nvmf_init_br2 master nvmf_br 00:12:18.883 11:42:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@213 -- # ip link set nvmf_tgt_br master nvmf_br 00:12:19.142 11:42:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@214 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:12:19.142 11:42:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@217 -- # ipts -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:12:19.142 11:42:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT' 00:12:19.142 11:42:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@218 -- # ipts -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT 00:12:19.142 11:42:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT' 00:12:19.142 11:42:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@219 -- # ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:12:19.142 11:42:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@790 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT' 00:12:19.142 11:42:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@222 -- # ping -c 1 10.0.0.3 00:12:19.142 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:12:19.142 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.066 ms 00:12:19.142 00:12:19.142 --- 10.0.0.3 ping statistics --- 00:12:19.142 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:12:19.142 rtt min/avg/max/mdev = 0.066/0.066/0.066/0.000 ms 00:12:19.142 11:42:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@223 -- # ping -c 1 10.0.0.4 00:12:19.142 PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data. 00:12:19.142 64 bytes from 10.0.0.4: icmp_seq=1 ttl=64 time=0.050 ms 00:12:19.142 00:12:19.142 --- 10.0.0.4 ping statistics --- 00:12:19.142 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:12:19.142 rtt min/avg/max/mdev = 0.050/0.050/0.050/0.000 ms 00:12:19.142 11:42:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@224 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:12:19.142 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:12:19.142 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.022 ms 00:12:19.142 00:12:19.142 --- 10.0.0.1 ping statistics --- 00:12:19.142 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:12:19.142 rtt min/avg/max/mdev = 0.022/0.022/0.022/0.000 ms 00:12:19.142 11:42:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@225 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.2 00:12:19.142 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:12:19.142 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.054 ms 00:12:19.142 00:12:19.142 --- 10.0.0.2 ping statistics --- 00:12:19.143 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:12:19.143 rtt min/avg/max/mdev = 0.054/0.054/0.054/0.000 ms 00:12:19.143 11:42:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@227 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:12:19.143 11:42:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@461 -- # return 0 00:12:19.143 11:42:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:12:19.143 11:42:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:12:19.143 11:42:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:12:19.143 11:42:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:12:19.143 11:42:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:12:19.143 11:42:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:12:19.143 11:42:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:12:19.143 11:42:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@87 -- # nvmfappstart -L nvmf_auth 00:12:19.143 11:42:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:12:19.143 11:42:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@726 -- # xtrace_disable 00:12:19.143 11:42:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:19.143 11:42:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@509 -- # nvmfpid=81346 00:12:19.143 11:42:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@508 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -L nvmf_auth 00:12:19.143 11:42:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@510 -- # waitforlisten 81346 00:12:19.143 11:42:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@835 -- # '[' -z 81346 ']' 00:12:19.143 11:42:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:12:19.143 11:42:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@840 -- # local max_retries=100 00:12:19.143 11:42:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
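
For readers reconstructing the network setup traced above: the nvmf/common.sh helpers boil down to one network namespace, two veth pairs, and a bridge, with NVMe/TCP traffic admitted on port 4420. A condensed, hand-written sketch of that topology follows (names and addresses are the ones from the trace; the second veth pair nvmf_init_if2/nvmf_tgt_if2 and the loopback step are omitted for brevity, and this is not the script itself):

ip netns add nvmf_tgt_ns_spdk
ip link add nvmf_init_if type veth peer name nvmf_init_br    # host-side pair
ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br      # target-side pair; one end moves into the netns
ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk
ip addr add 10.0.0.1/24 dev nvmf_init_if
ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if
ip link add nvmf_br type bridge                              # bridge joins the two peer ends
ip link set nvmf_init_br master nvmf_br
ip link set nvmf_tgt_br master nvmf_br
for l in nvmf_init_if nvmf_init_br nvmf_tgt_br nvmf_br; do ip link set "$l" up; done
ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT   # allow TCP/4420 arriving on the initiator-side veth
ping -c 1 10.0.0.3                                           # host reaches the target address inside the netns

With that in place, the target application runs inside nvmf_tgt_ns_spdk and listens on 10.0.0.3:4420, while the host-side initiator stays in the root namespace at 10.0.0.1, which is exactly the pairing the pings above verify.
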
00:12:19.143 11:42:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@844 -- # xtrace_disable 00:12:19.143 11:42:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:20.080 11:42:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:12:20.080 11:42:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@868 -- # return 0 00:12:20.080 11:42:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:12:20.080 11:42:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@732 -- # xtrace_disable 00:12:20.080 11:42:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:20.080 11:42:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:12:20.080 11:42:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@89 -- # hostpid=81378 00:12:20.080 11:42:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@88 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 2 -r /var/tmp/host.sock -L nvme_auth 00:12:20.080 11:42:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@91 -- # trap 'dumplogs; cleanup' SIGINT SIGTERM EXIT 00:12:20.080 11:42:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # gen_dhchap_key null 48 00:12:20.080 11:42:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@751 -- # local digest len file key 00:12:20.080 11:42:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:12:20.080 11:42:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # local -A digests 00:12:20.080 11:42:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # digest=null 00:12:20.080 11:42:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # len=48 00:12:20.080 11:42:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # xxd -p -c0 -l 24 /dev/urandom 00:12:20.080 11:42:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # key=0e396a3e7cad6ccb55be91d4f43edb3c71f6c70cc733067b 00:12:20.081 11:42:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # mktemp -t spdk.key-null.XXX 00:12:20.081 11:42:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-null.fbr 00:12:20.081 11:42:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@757 -- # format_dhchap_key 0e396a3e7cad6ccb55be91d4f43edb3c71f6c70cc733067b 0 00:12:20.081 11:42:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@747 -- # format_key DHHC-1 0e396a3e7cad6ccb55be91d4f43edb3c71f6c70cc733067b 0 00:12:20.081 11:42:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@730 -- # local prefix key digest 00:12:20.081 11:42:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:12:20.081 11:42:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # key=0e396a3e7cad6ccb55be91d4f43edb3c71f6c70cc733067b 00:12:20.081 11:42:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # digest=0 00:12:20.081 11:42:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@733 -- # python - 00:12:20.081 11:42:50 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-null.fbr 00:12:20.081 11:42:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-null.fbr 00:12:20.081 11:42:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # keys[0]=/tmp/spdk.key-null.fbr 00:12:20.081 11:42:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # gen_dhchap_key sha512 64 00:12:20.081 11:42:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@751 -- # local digest len file key 00:12:20.081 11:42:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:12:20.081 11:42:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # local -A digests 00:12:20.081 11:42:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # digest=sha512 00:12:20.081 11:42:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # len=64 00:12:20.081 11:42:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # xxd -p -c0 -l 32 /dev/urandom 00:12:20.081 11:42:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # key=4c40c5de6805edcb7ec08bdb19510f1dfbf7cea0f132e9fa339d7f535793a07e 00:12:20.081 11:42:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # mktemp -t spdk.key-sha512.XXX 00:12:20.340 11:42:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-sha512.lkB 00:12:20.340 11:42:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@757 -- # format_dhchap_key 4c40c5de6805edcb7ec08bdb19510f1dfbf7cea0f132e9fa339d7f535793a07e 3 00:12:20.340 11:42:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@747 -- # format_key DHHC-1 4c40c5de6805edcb7ec08bdb19510f1dfbf7cea0f132e9fa339d7f535793a07e 3 00:12:20.340 11:42:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@730 -- # local prefix key digest 00:12:20.340 11:42:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:12:20.340 11:42:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # key=4c40c5de6805edcb7ec08bdb19510f1dfbf7cea0f132e9fa339d7f535793a07e 00:12:20.340 11:42:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # digest=3 00:12:20.340 11:42:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@733 -- # python - 00:12:20.340 11:42:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-sha512.lkB 00:12:20.340 11:42:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-sha512.lkB 00:12:20.340 11:42:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # ckeys[0]=/tmp/spdk.key-sha512.lkB 00:12:20.340 11:42:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@95 -- # gen_dhchap_key sha256 32 00:12:20.340 11:42:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@751 -- # local digest len file key 00:12:20.340 11:42:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:12:20.340 11:42:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # local -A digests 00:12:20.340 11:42:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # digest=sha256 00:12:20.341 11:42:50 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # len=32 00:12:20.341 11:42:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # xxd -p -c0 -l 16 /dev/urandom 00:12:20.341 11:42:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # key=0c01ec33ba556ff813bc7b33aa11d30f 00:12:20.341 11:42:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # mktemp -t spdk.key-sha256.XXX 00:12:20.341 11:42:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-sha256.tXx 00:12:20.341 11:42:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@757 -- # format_dhchap_key 0c01ec33ba556ff813bc7b33aa11d30f 1 00:12:20.341 11:42:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@747 -- # format_key DHHC-1 0c01ec33ba556ff813bc7b33aa11d30f 1 00:12:20.341 11:42:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@730 -- # local prefix key digest 00:12:20.341 11:42:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:12:20.341 11:42:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # key=0c01ec33ba556ff813bc7b33aa11d30f 00:12:20.341 11:42:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # digest=1 00:12:20.341 11:42:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@733 -- # python - 00:12:20.341 11:42:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-sha256.tXx 00:12:20.341 11:42:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-sha256.tXx 00:12:20.341 11:42:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@95 -- # keys[1]=/tmp/spdk.key-sha256.tXx 00:12:20.341 11:42:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@95 -- # gen_dhchap_key sha384 48 00:12:20.341 11:42:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@751 -- # local digest len file key 00:12:20.341 11:42:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:12:20.341 11:42:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # local -A digests 00:12:20.341 11:42:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # digest=sha384 00:12:20.341 11:42:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # len=48 00:12:20.341 11:42:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # xxd -p -c0 -l 24 /dev/urandom 00:12:20.341 11:42:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # key=003c8ea5e44af2a8f8b1c37c236e0cb287525e5c47fd7036 00:12:20.341 11:42:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # mktemp -t spdk.key-sha384.XXX 00:12:20.341 11:42:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-sha384.wn7 00:12:20.341 11:42:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@757 -- # format_dhchap_key 003c8ea5e44af2a8f8b1c37c236e0cb287525e5c47fd7036 2 00:12:20.341 11:42:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@747 -- # format_key DHHC-1 003c8ea5e44af2a8f8b1c37c236e0cb287525e5c47fd7036 2 00:12:20.341 11:42:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@730 -- # local prefix key digest 00:12:20.341 11:42:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
nvmf/common.sh@732 -- # prefix=DHHC-1 00:12:20.341 11:42:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # key=003c8ea5e44af2a8f8b1c37c236e0cb287525e5c47fd7036 00:12:20.341 11:42:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # digest=2 00:12:20.341 11:42:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@733 -- # python - 00:12:20.341 11:42:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-sha384.wn7 00:12:20.341 11:42:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-sha384.wn7 00:12:20.341 11:42:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@95 -- # ckeys[1]=/tmp/spdk.key-sha384.wn7 00:12:20.341 11:42:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # gen_dhchap_key sha384 48 00:12:20.341 11:42:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@751 -- # local digest len file key 00:12:20.341 11:42:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:12:20.341 11:42:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # local -A digests 00:12:20.341 11:42:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # digest=sha384 00:12:20.341 11:42:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # len=48 00:12:20.341 11:42:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # xxd -p -c0 -l 24 /dev/urandom 00:12:20.341 11:42:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # key=d7402999636e8da89c3b48ea01092063ccedff1824b5b40b 00:12:20.341 11:42:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # mktemp -t spdk.key-sha384.XXX 00:12:20.341 11:42:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-sha384.wJK 00:12:20.341 11:42:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@757 -- # format_dhchap_key d7402999636e8da89c3b48ea01092063ccedff1824b5b40b 2 00:12:20.341 11:42:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@747 -- # format_key DHHC-1 d7402999636e8da89c3b48ea01092063ccedff1824b5b40b 2 00:12:20.341 11:42:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@730 -- # local prefix key digest 00:12:20.341 11:42:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:12:20.341 11:42:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # key=d7402999636e8da89c3b48ea01092063ccedff1824b5b40b 00:12:20.341 11:42:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # digest=2 00:12:20.341 11:42:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@733 -- # python - 00:12:20.600 11:42:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-sha384.wJK 00:12:20.600 11:42:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-sha384.wJK 00:12:20.600 11:42:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # keys[2]=/tmp/spdk.key-sha384.wJK 00:12:20.600 11:42:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # gen_dhchap_key sha256 32 00:12:20.601 11:42:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@751 -- # local digest len file key 00:12:20.601 11:42:50 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:12:20.601 11:42:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # local -A digests 00:12:20.601 11:42:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # digest=sha256 00:12:20.601 11:42:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # len=32 00:12:20.601 11:42:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # xxd -p -c0 -l 16 /dev/urandom 00:12:20.601 11:42:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # key=cd44958c958a8499db7c3b8ea737d10f 00:12:20.601 11:42:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # mktemp -t spdk.key-sha256.XXX 00:12:20.601 11:42:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-sha256.Aw7 00:12:20.601 11:42:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@757 -- # format_dhchap_key cd44958c958a8499db7c3b8ea737d10f 1 00:12:20.601 11:42:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@747 -- # format_key DHHC-1 cd44958c958a8499db7c3b8ea737d10f 1 00:12:20.601 11:42:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@730 -- # local prefix key digest 00:12:20.601 11:42:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:12:20.601 11:42:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # key=cd44958c958a8499db7c3b8ea737d10f 00:12:20.601 11:42:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # digest=1 00:12:20.601 11:42:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@733 -- # python - 00:12:20.601 11:42:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-sha256.Aw7 00:12:20.601 11:42:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-sha256.Aw7 00:12:20.601 11:42:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # ckeys[2]=/tmp/spdk.key-sha256.Aw7 00:12:20.601 11:42:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@97 -- # gen_dhchap_key sha512 64 00:12:20.601 11:42:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@751 -- # local digest len file key 00:12:20.601 11:42:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:12:20.601 11:42:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # local -A digests 00:12:20.601 11:42:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # digest=sha512 00:12:20.601 11:42:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # len=64 00:12:20.601 11:42:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # xxd -p -c0 -l 32 /dev/urandom 00:12:20.601 11:42:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # key=89a2705999163357043670b63753257a286ed6ca2f89e84a517323a9705888a6 00:12:20.601 11:42:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # mktemp -t spdk.key-sha512.XXX 00:12:20.601 11:42:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-sha512.cKe 00:12:20.601 11:42:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@757 -- # format_dhchap_key 
89a2705999163357043670b63753257a286ed6ca2f89e84a517323a9705888a6 3 00:12:20.601 11:42:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@747 -- # format_key DHHC-1 89a2705999163357043670b63753257a286ed6ca2f89e84a517323a9705888a6 3 00:12:20.601 11:42:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@730 -- # local prefix key digest 00:12:20.601 11:42:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:12:20.601 11:42:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # key=89a2705999163357043670b63753257a286ed6ca2f89e84a517323a9705888a6 00:12:20.601 11:42:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # digest=3 00:12:20.601 11:42:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@733 -- # python - 00:12:20.601 11:42:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-sha512.cKe 00:12:20.601 11:42:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-sha512.cKe 00:12:20.601 11:42:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@97 -- # keys[3]=/tmp/spdk.key-sha512.cKe 00:12:20.601 11:42:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@97 -- # ckeys[3]= 00:12:20.601 11:42:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@99 -- # waitforlisten 81346 00:12:20.601 11:42:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@835 -- # '[' -z 81346 ']' 00:12:20.601 11:42:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:12:20.601 11:42:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@840 -- # local max_retries=100 00:12:20.601 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:12:20.601 11:42:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:12:20.601 11:42:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@844 -- # xtrace_disable 00:12:20.601 11:42:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:20.860 Waiting for process to start up and listen on UNIX domain socket /var/tmp/host.sock... 00:12:20.860 11:42:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:12:20.860 11:42:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@868 -- # return 0 00:12:20.860 11:42:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@100 -- # waitforlisten 81378 /var/tmp/host.sock 00:12:20.860 11:42:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@835 -- # '[' -z 81378 ']' 00:12:20.860 11:42:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/host.sock 00:12:20.860 11:42:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@840 -- # local max_retries=100 00:12:20.860 11:42:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/host.sock...' 
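
All of the key files generated above (keys[0..3] and ckeys[0..2]) hold DH-HMAC-CHAP secrets in the DHHC-1 text representation that the nvme connect calls further down consume directly. Judging from the xxd/python steps in the trace, the formatting is roughly the following (a sketch, not the nvmf/common.sh code itself; the hash id 00 marks an untransformed secret, while 01/02/03 correspond to sha256/sha384/sha512 as in the digests map above):

key=$(xxd -p -c0 -l 24 /dev/urandom)   # 48 hex characters; this ASCII string is the secret
python3 - "$key" <<'EOF'
import base64, sys, zlib
secret = sys.argv[1].encode()
crc = zlib.crc32(secret).to_bytes(4, byteorder="little")   # CRC-32 of the secret is appended before base64 encoding
print("DHHC-1:{:02x}:{}:".format(0, base64.b64encode(secret + crc).decode()))
EOF

Decoding the base64 field of, e.g., /tmp/spdk.key-null.fbr's DHHC-1:00:MGUzOTZhM2U3Y2FkNmNjYjU1YmU5MWQ0ZjQzZWRiM2M3MWY2YzcwY2M3MzMwNjdiPeXDPw==: recovers the 0e396a3e... hex string from the trace plus four trailing CRC bytes, which is why the chmod 0600 on these files matters: the plaintext secret is stored as-is. The files are then registered with keyring_file_add_key on both the target (/var/tmp/spdk.sock) and host (/var/tmp/host.sock) RPC servers in the steps that follow.
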
00:12:20.860 11:42:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@844 -- # xtrace_disable 00:12:20.860 11:42:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:21.119 11:42:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:12:21.119 11:42:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@868 -- # return 0 00:12:21.119 11:42:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@101 -- # rpc_cmd 00:12:21.119 11:42:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:21.119 11:42:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:21.119 11:42:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:21.120 11:42:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@108 -- # for i in "${!keys[@]}" 00:12:21.120 11:42:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@109 -- # rpc_cmd keyring_file_add_key key0 /tmp/spdk.key-null.fbr 00:12:21.120 11:42:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:21.120 11:42:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:21.120 11:42:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:21.120 11:42:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@110 -- # hostrpc keyring_file_add_key key0 /tmp/spdk.key-null.fbr 00:12:21.120 11:42:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key key0 /tmp/spdk.key-null.fbr 00:12:21.378 11:42:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@111 -- # [[ -n /tmp/spdk.key-sha512.lkB ]] 00:12:21.378 11:42:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@112 -- # rpc_cmd keyring_file_add_key ckey0 /tmp/spdk.key-sha512.lkB 00:12:21.378 11:42:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:21.378 11:42:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:21.378 11:42:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:21.378 11:42:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@113 -- # hostrpc keyring_file_add_key ckey0 /tmp/spdk.key-sha512.lkB 00:12:21.378 11:42:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key ckey0 /tmp/spdk.key-sha512.lkB 00:12:21.946 11:42:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@108 -- # for i in "${!keys[@]}" 00:12:21.946 11:42:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@109 -- # rpc_cmd keyring_file_add_key key1 /tmp/spdk.key-sha256.tXx 00:12:21.946 11:42:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:21.946 11:42:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:21.946 11:42:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:21.946 11:42:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@110 -- # hostrpc keyring_file_add_key key1 /tmp/spdk.key-sha256.tXx 00:12:21.946 11:42:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key key1 /tmp/spdk.key-sha256.tXx 00:12:22.205 11:42:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@111 -- # [[ -n /tmp/spdk.key-sha384.wn7 ]] 00:12:22.205 11:42:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@112 -- # rpc_cmd keyring_file_add_key ckey1 /tmp/spdk.key-sha384.wn7 00:12:22.205 11:42:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:22.205 11:42:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:22.205 11:42:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:22.205 11:42:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@113 -- # hostrpc keyring_file_add_key ckey1 /tmp/spdk.key-sha384.wn7 00:12:22.205 11:42:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key ckey1 /tmp/spdk.key-sha384.wn7 00:12:22.464 11:42:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@108 -- # for i in "${!keys[@]}" 00:12:22.464 11:42:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@109 -- # rpc_cmd keyring_file_add_key key2 /tmp/spdk.key-sha384.wJK 00:12:22.464 11:42:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:22.464 11:42:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:22.464 11:42:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:22.464 11:42:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@110 -- # hostrpc keyring_file_add_key key2 /tmp/spdk.key-sha384.wJK 00:12:22.464 11:42:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key key2 /tmp/spdk.key-sha384.wJK 00:12:22.723 11:42:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@111 -- # [[ -n /tmp/spdk.key-sha256.Aw7 ]] 00:12:22.723 11:42:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@112 -- # rpc_cmd keyring_file_add_key ckey2 /tmp/spdk.key-sha256.Aw7 00:12:22.723 11:42:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:22.723 11:42:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:22.723 11:42:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:22.723 11:42:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@113 -- # hostrpc keyring_file_add_key ckey2 /tmp/spdk.key-sha256.Aw7 00:12:22.723 11:42:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key ckey2 /tmp/spdk.key-sha256.Aw7 00:12:22.983 11:42:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@108 -- # for i in "${!keys[@]}" 00:12:22.983 11:42:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@109 -- # rpc_cmd keyring_file_add_key key3 /tmp/spdk.key-sha512.cKe 00:12:22.983 11:42:53 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:22.983 11:42:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:22.983 11:42:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:22.983 11:42:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@110 -- # hostrpc keyring_file_add_key key3 /tmp/spdk.key-sha512.cKe 00:12:22.983 11:42:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key key3 /tmp/spdk.key-sha512.cKe 00:12:23.551 11:42:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@111 -- # [[ -n '' ]] 00:12:23.551 11:42:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@118 -- # for digest in "${digests[@]}" 00:12:23.551 11:42:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:12:23.551 11:42:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:12:23.551 11:42:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:12:23.551 11:42:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:12:23.551 11:42:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 null 0 00:12:23.551 11:42:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:12:23.551 11:42:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:12:23.551 11:42:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:12:23.551 11:42:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:12:23.551 11:42:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:12:23.551 11:42:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:f820f793-c892-4aa4-a8a4-5ed3fda41d6c --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:12:23.551 11:42:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:23.551 11:42:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:23.551 11:42:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:23.551 11:42:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:12:23.551 11:42:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:f820f793-c892-4aa4-a8a4-5ed3fda41d6c -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:12:23.551 11:42:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 
10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:f820f793-c892-4aa4-a8a4-5ed3fda41d6c -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:12:24.118 00:12:24.118 11:42:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:12:24.118 11:42:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:12:24.118 11:42:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:12:24.377 11:42:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:12:24.377 11:42:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:12:24.377 11:42:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:24.377 11:42:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:24.377 11:42:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:24.377 11:42:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:12:24.377 { 00:12:24.377 "cntlid": 1, 00:12:24.377 "qid": 0, 00:12:24.377 "state": "enabled", 00:12:24.377 "thread": "nvmf_tgt_poll_group_000", 00:12:24.377 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:f820f793-c892-4aa4-a8a4-5ed3fda41d6c", 00:12:24.377 "listen_address": { 00:12:24.377 "trtype": "TCP", 00:12:24.377 "adrfam": "IPv4", 00:12:24.377 "traddr": "10.0.0.3", 00:12:24.377 "trsvcid": "4420" 00:12:24.377 }, 00:12:24.377 "peer_address": { 00:12:24.377 "trtype": "TCP", 00:12:24.377 "adrfam": "IPv4", 00:12:24.377 "traddr": "10.0.0.1", 00:12:24.377 "trsvcid": "35010" 00:12:24.377 }, 00:12:24.377 "auth": { 00:12:24.377 "state": "completed", 00:12:24.377 "digest": "sha256", 00:12:24.377 "dhgroup": "null" 00:12:24.377 } 00:12:24.377 } 00:12:24.377 ]' 00:12:24.377 11:42:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:12:24.377 11:42:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:12:24.377 11:42:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:12:24.377 11:42:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:12:24.377 11:42:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:12:24.636 11:42:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:12:24.636 11:42:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:12:24.636 11:42:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:12:24.895 11:42:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:MGUzOTZhM2U3Y2FkNmNjYjU1YmU5MWQ0ZjQzZWRiM2M3MWY2YzcwY2M3MzMwNjdiPeXDPw==: --dhchap-ctrl-secret DHHC-1:03:NGM0MGM1ZGU2ODA1ZWRjYjdlYzA4YmRiMTk1MTBmMWRmYmY3Y2VhMGYxMzJlOWZhMzM5ZDdmNTM1NzkzYTA3Ze5j/1s=: 00:12:24.895 11:42:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme 
connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:f820f793-c892-4aa4-a8a4-5ed3fda41d6c --hostid f820f793-c892-4aa4-a8a4-5ed3fda41d6c -l 0 --dhchap-secret DHHC-1:00:MGUzOTZhM2U3Y2FkNmNjYjU1YmU5MWQ0ZjQzZWRiM2M3MWY2YzcwY2M3MzMwNjdiPeXDPw==: --dhchap-ctrl-secret DHHC-1:03:NGM0MGM1ZGU2ODA1ZWRjYjdlYzA4YmRiMTk1MTBmMWRmYmY3Y2VhMGYxMzJlOWZhMzM5ZDdmNTM1NzkzYTA3Ze5j/1s=: 00:12:30.208 11:42:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:12:30.208 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:12:30.208 11:42:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:f820f793-c892-4aa4-a8a4-5ed3fda41d6c 00:12:30.208 11:42:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:30.208 11:42:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:30.208 11:42:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:30.208 11:42:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:12:30.208 11:42:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:12:30.208 11:42:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:12:30.208 11:42:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 null 1 00:12:30.208 11:42:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:12:30.208 11:42:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:12:30.208 11:42:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:12:30.208 11:42:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:12:30.208 11:42:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:12:30.208 11:42:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:f820f793-c892-4aa4-a8a4-5ed3fda41d6c --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:12:30.208 11:42:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:30.208 11:42:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:30.208 11:42:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:30.208 11:42:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:12:30.208 11:42:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:f820f793-c892-4aa4-a8a4-5ed3fda41d6c -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:12:30.208 11:42:59 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:f820f793-c892-4aa4-a8a4-5ed3fda41d6c -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:12:30.208 00:12:30.208 11:42:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:12:30.208 11:42:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:12:30.208 11:42:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:12:30.208 11:43:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:12:30.208 11:43:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:12:30.208 11:43:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:30.208 11:43:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:30.208 11:43:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:30.208 11:43:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:12:30.208 { 00:12:30.208 "cntlid": 3, 00:12:30.208 "qid": 0, 00:12:30.208 "state": "enabled", 00:12:30.208 "thread": "nvmf_tgt_poll_group_000", 00:12:30.208 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:f820f793-c892-4aa4-a8a4-5ed3fda41d6c", 00:12:30.208 "listen_address": { 00:12:30.208 "trtype": "TCP", 00:12:30.208 "adrfam": "IPv4", 00:12:30.208 "traddr": "10.0.0.3", 00:12:30.208 "trsvcid": "4420" 00:12:30.208 }, 00:12:30.208 "peer_address": { 00:12:30.208 "trtype": "TCP", 00:12:30.208 "adrfam": "IPv4", 00:12:30.208 "traddr": "10.0.0.1", 00:12:30.208 "trsvcid": "35050" 00:12:30.208 }, 00:12:30.208 "auth": { 00:12:30.208 "state": "completed", 00:12:30.208 "digest": "sha256", 00:12:30.208 "dhgroup": "null" 00:12:30.208 } 00:12:30.208 } 00:12:30.208 ]' 00:12:30.208 11:43:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:12:30.466 11:43:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:12:30.466 11:43:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:12:30.466 11:43:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:12:30.466 11:43:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:12:30.467 11:43:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:12:30.467 11:43:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:12:30.467 11:43:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:12:30.725 11:43:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:MGMwMWVjMzNiYTU1NmZmODEzYmM3YjMzYWExMWQzMGYhYOlP: --dhchap-ctrl-secret 
DHHC-1:02:MDAzYzhlYTVlNDRhZjJhOGY4YjFjMzdjMjM2ZTBjYjI4NzUyNWU1YzQ3ZmQ3MDM2ej39yA==: 00:12:30.725 11:43:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:f820f793-c892-4aa4-a8a4-5ed3fda41d6c --hostid f820f793-c892-4aa4-a8a4-5ed3fda41d6c -l 0 --dhchap-secret DHHC-1:01:MGMwMWVjMzNiYTU1NmZmODEzYmM3YjMzYWExMWQzMGYhYOlP: --dhchap-ctrl-secret DHHC-1:02:MDAzYzhlYTVlNDRhZjJhOGY4YjFjMzdjMjM2ZTBjYjI4NzUyNWU1YzQ3ZmQ3MDM2ej39yA==: 00:12:31.293 11:43:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:12:31.293 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:12:31.293 11:43:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:f820f793-c892-4aa4-a8a4-5ed3fda41d6c 00:12:31.293 11:43:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:31.293 11:43:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:31.293 11:43:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:31.293 11:43:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:12:31.293 11:43:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:12:31.293 11:43:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:12:31.861 11:43:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 null 2 00:12:31.862 11:43:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:12:31.862 11:43:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:12:31.862 11:43:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:12:31.862 11:43:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:12:31.862 11:43:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:12:31.862 11:43:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:f820f793-c892-4aa4-a8a4-5ed3fda41d6c --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:12:31.862 11:43:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:31.862 11:43:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:31.862 11:43:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:31.862 11:43:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:12:31.862 11:43:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:f820f793-c892-4aa4-a8a4-5ed3fda41d6c 
-n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:12:31.862 11:43:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:f820f793-c892-4aa4-a8a4-5ed3fda41d6c -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:12:32.120 00:12:32.121 11:43:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:12:32.121 11:43:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:12:32.121 11:43:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:12:32.380 11:43:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:12:32.380 11:43:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:12:32.380 11:43:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:32.380 11:43:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:32.380 11:43:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:32.380 11:43:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:12:32.380 { 00:12:32.380 "cntlid": 5, 00:12:32.380 "qid": 0, 00:12:32.380 "state": "enabled", 00:12:32.380 "thread": "nvmf_tgt_poll_group_000", 00:12:32.380 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:f820f793-c892-4aa4-a8a4-5ed3fda41d6c", 00:12:32.380 "listen_address": { 00:12:32.380 "trtype": "TCP", 00:12:32.380 "adrfam": "IPv4", 00:12:32.380 "traddr": "10.0.0.3", 00:12:32.380 "trsvcid": "4420" 00:12:32.380 }, 00:12:32.380 "peer_address": { 00:12:32.380 "trtype": "TCP", 00:12:32.380 "adrfam": "IPv4", 00:12:32.380 "traddr": "10.0.0.1", 00:12:32.380 "trsvcid": "47116" 00:12:32.380 }, 00:12:32.380 "auth": { 00:12:32.380 "state": "completed", 00:12:32.380 "digest": "sha256", 00:12:32.380 "dhgroup": "null" 00:12:32.380 } 00:12:32.380 } 00:12:32.380 ]' 00:12:32.380 11:43:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:12:32.380 11:43:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:12:32.380 11:43:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:12:32.639 11:43:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:12:32.639 11:43:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:12:32.639 11:43:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:12:32.639 11:43:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:12:32.639 11:43:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:12:32.898 11:43:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret 
DHHC-1:02:ZDc0MDI5OTk2MzZlOGRhODljM2I0OGVhMDEwOTIwNjNjY2VkZmYxODI0YjViNDBi899zgg==: --dhchap-ctrl-secret DHHC-1:01:Y2Q0NDk1OGM5NThhODQ5OWRiN2MzYjhlYTczN2QxMGZ97e95: 00:12:32.898 11:43:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:f820f793-c892-4aa4-a8a4-5ed3fda41d6c --hostid f820f793-c892-4aa4-a8a4-5ed3fda41d6c -l 0 --dhchap-secret DHHC-1:02:ZDc0MDI5OTk2MzZlOGRhODljM2I0OGVhMDEwOTIwNjNjY2VkZmYxODI0YjViNDBi899zgg==: --dhchap-ctrl-secret DHHC-1:01:Y2Q0NDk1OGM5NThhODQ5OWRiN2MzYjhlYTczN2QxMGZ97e95: 00:12:33.466 11:43:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:12:33.466 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:12:33.466 11:43:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:f820f793-c892-4aa4-a8a4-5ed3fda41d6c 00:12:33.466 11:43:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:33.466 11:43:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:33.466 11:43:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:33.466 11:43:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:12:33.466 11:43:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:12:33.466 11:43:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:12:34.057 11:43:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 null 3 00:12:34.057 11:43:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:12:34.057 11:43:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:12:34.057 11:43:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:12:34.057 11:43:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:12:34.057 11:43:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:12:34.057 11:43:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:f820f793-c892-4aa4-a8a4-5ed3fda41d6c --dhchap-key key3 00:12:34.057 11:43:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:34.057 11:43:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:34.057 11:43:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:34.057 11:43:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:12:34.057 11:43:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q 
nqn.2014-08.org.nvmexpress:uuid:f820f793-c892-4aa4-a8a4-5ed3fda41d6c -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:12:34.057 11:43:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:f820f793-c892-4aa4-a8a4-5ed3fda41d6c -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:12:34.057 00:12:34.316 11:43:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:12:34.316 11:43:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:12:34.316 11:43:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:12:34.576 11:43:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:12:34.576 11:43:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:12:34.576 11:43:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:34.576 11:43:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:34.576 11:43:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:34.576 11:43:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:12:34.576 { 00:12:34.576 "cntlid": 7, 00:12:34.576 "qid": 0, 00:12:34.576 "state": "enabled", 00:12:34.576 "thread": "nvmf_tgt_poll_group_000", 00:12:34.576 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:f820f793-c892-4aa4-a8a4-5ed3fda41d6c", 00:12:34.576 "listen_address": { 00:12:34.576 "trtype": "TCP", 00:12:34.576 "adrfam": "IPv4", 00:12:34.576 "traddr": "10.0.0.3", 00:12:34.576 "trsvcid": "4420" 00:12:34.576 }, 00:12:34.576 "peer_address": { 00:12:34.576 "trtype": "TCP", 00:12:34.576 "adrfam": "IPv4", 00:12:34.576 "traddr": "10.0.0.1", 00:12:34.576 "trsvcid": "47134" 00:12:34.576 }, 00:12:34.576 "auth": { 00:12:34.576 "state": "completed", 00:12:34.576 "digest": "sha256", 00:12:34.576 "dhgroup": "null" 00:12:34.576 } 00:12:34.576 } 00:12:34.576 ]' 00:12:34.576 11:43:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:12:34.576 11:43:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:12:34.576 11:43:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:12:34.576 11:43:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:12:34.576 11:43:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:12:34.576 11:43:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:12:34.576 11:43:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:12:34.576 11:43:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:12:34.835 11:43:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret 
DHHC-1:03:ODlhMjcwNTk5OTE2MzM1NzA0MzY3MGI2Mzc1MzI1N2EyODZlZDZjYTJmODllODRhNTE3MzIzYTk3MDU4ODhhNgTRqT0=: 00:12:34.835 11:43:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:f820f793-c892-4aa4-a8a4-5ed3fda41d6c --hostid f820f793-c892-4aa4-a8a4-5ed3fda41d6c -l 0 --dhchap-secret DHHC-1:03:ODlhMjcwNTk5OTE2MzM1NzA0MzY3MGI2Mzc1MzI1N2EyODZlZDZjYTJmODllODRhNTE3MzIzYTk3MDU4ODhhNgTRqT0=: 00:12:35.772 11:43:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:12:35.772 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:12:35.772 11:43:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:f820f793-c892-4aa4-a8a4-5ed3fda41d6c 00:12:35.772 11:43:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:35.772 11:43:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:35.772 11:43:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:35.772 11:43:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:12:35.772 11:43:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:12:35.772 11:43:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:12:35.772 11:43:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:12:35.772 11:43:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe2048 0 00:12:35.772 11:43:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:12:35.772 11:43:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:12:35.772 11:43:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:12:35.772 11:43:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:12:35.772 11:43:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:12:35.772 11:43:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:f820f793-c892-4aa4-a8a4-5ed3fda41d6c --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:12:35.772 11:43:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:35.772 11:43:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:35.772 11:43:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:35.772 11:43:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:12:35.772 11:43:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t 
tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:f820f793-c892-4aa4-a8a4-5ed3fda41d6c -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:12:35.772 11:43:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:f820f793-c892-4aa4-a8a4-5ed3fda41d6c -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:12:36.340 00:12:36.340 11:43:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:12:36.340 11:43:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:12:36.340 11:43:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:12:36.340 11:43:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:12:36.340 11:43:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:12:36.340 11:43:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:36.340 11:43:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:36.599 11:43:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:36.599 11:43:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:12:36.599 { 00:12:36.599 "cntlid": 9, 00:12:36.599 "qid": 0, 00:12:36.599 "state": "enabled", 00:12:36.599 "thread": "nvmf_tgt_poll_group_000", 00:12:36.599 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:f820f793-c892-4aa4-a8a4-5ed3fda41d6c", 00:12:36.599 "listen_address": { 00:12:36.599 "trtype": "TCP", 00:12:36.599 "adrfam": "IPv4", 00:12:36.599 "traddr": "10.0.0.3", 00:12:36.599 "trsvcid": "4420" 00:12:36.599 }, 00:12:36.599 "peer_address": { 00:12:36.599 "trtype": "TCP", 00:12:36.599 "adrfam": "IPv4", 00:12:36.599 "traddr": "10.0.0.1", 00:12:36.599 "trsvcid": "47174" 00:12:36.599 }, 00:12:36.599 "auth": { 00:12:36.599 "state": "completed", 00:12:36.599 "digest": "sha256", 00:12:36.599 "dhgroup": "ffdhe2048" 00:12:36.599 } 00:12:36.599 } 00:12:36.599 ]' 00:12:36.599 11:43:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:12:36.599 11:43:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:12:36.599 11:43:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:12:36.599 11:43:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:12:36.599 11:43:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:12:36.599 11:43:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:12:36.599 11:43:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:12:36.599 11:43:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:12:36.857 
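[Editor's note] For readers following the transcript, each connect_authenticate cycle above reduces to a short sequence of SPDK RPCs. The sketch below is reconstructed only from the commands visible in this log; the rpc.py path, listener address (10.0.0.3:4420), bdev name (nvme0) and key names (key0/ckey0) are taken from the output, while the target-side RPC socket and the prior registration of the DH-HMAC-CHAP keys are assumptions, not shown here.

```bash
#!/usr/bin/env bash
# Minimal sketch of one connect_authenticate cycle as exercised in the log above.
# Assumes both SPDK applications are already running and that the DH-HMAC-CHAP
# keys (key0 / ckey0) have been registered on target and host beforehand.
rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
host_sock=/var/tmp/host.sock              # host-side app, as in the "hostrpc" calls
subnqn=nqn.2024-03.io.spdk:cnode0
hostnqn=nqn.2014-08.org.nvmexpress:uuid:f820f793-c892-4aa4-a8a4-5ed3fda41d6c

# 1. Restrict the host-side bdev_nvme module to one digest/DH-group combination.
$rpc -s $host_sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048

# 2. Allow the host on the target subsystem, binding it to a key pair
#    (target-side calls go to the target app's default RPC socket - an assumption here).
$rpc nvmf_subsystem_add_host $subnqn $hostnqn --dhchap-key key0 --dhchap-ctrlr-key ckey0

# 3. Attach a controller from the SPDK host, authenticating with the same keys.
$rpc -s $host_sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 \
    -q $hostnqn -n $subnqn -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0

# 4. Verify the qpair negotiated the expected digest/DH group and completed auth.
$rpc nvmf_subsystem_get_qpairs $subnqn | jq -r '.[0].auth'

# 5. Tear the controller down before the next digest/DH-group combination.
$rpc -s $host_sock bdev_nvme_detach_controller nvme0
```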
11:43:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:MGUzOTZhM2U3Y2FkNmNjYjU1YmU5MWQ0ZjQzZWRiM2M3MWY2YzcwY2M3MzMwNjdiPeXDPw==: --dhchap-ctrl-secret DHHC-1:03:NGM0MGM1ZGU2ODA1ZWRjYjdlYzA4YmRiMTk1MTBmMWRmYmY3Y2VhMGYxMzJlOWZhMzM5ZDdmNTM1NzkzYTA3Ze5j/1s=: 00:12:36.857 11:43:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:f820f793-c892-4aa4-a8a4-5ed3fda41d6c --hostid f820f793-c892-4aa4-a8a4-5ed3fda41d6c -l 0 --dhchap-secret DHHC-1:00:MGUzOTZhM2U3Y2FkNmNjYjU1YmU5MWQ0ZjQzZWRiM2M3MWY2YzcwY2M3MzMwNjdiPeXDPw==: --dhchap-ctrl-secret DHHC-1:03:NGM0MGM1ZGU2ODA1ZWRjYjdlYzA4YmRiMTk1MTBmMWRmYmY3Y2VhMGYxMzJlOWZhMzM5ZDdmNTM1NzkzYTA3Ze5j/1s=: 00:12:37.794 11:43:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:12:37.794 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:12:37.794 11:43:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:f820f793-c892-4aa4-a8a4-5ed3fda41d6c 00:12:37.794 11:43:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:37.794 11:43:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:37.794 11:43:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:37.794 11:43:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:12:37.794 11:43:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:12:37.794 11:43:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:12:38.053 11:43:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe2048 1 00:12:38.053 11:43:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:12:38.053 11:43:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:12:38.053 11:43:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:12:38.053 11:43:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:12:38.053 11:43:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:12:38.053 11:43:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:f820f793-c892-4aa4-a8a4-5ed3fda41d6c --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:12:38.053 11:43:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:38.053 11:43:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:38.053 11:43:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:38.053 11:43:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # 
bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:12:38.053 11:43:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:f820f793-c892-4aa4-a8a4-5ed3fda41d6c -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:12:38.053 11:43:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:f820f793-c892-4aa4-a8a4-5ed3fda41d6c -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:12:38.312 00:12:38.312 11:43:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:12:38.312 11:43:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:12:38.312 11:43:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:12:38.572 11:43:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:12:38.572 11:43:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:12:38.572 11:43:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:38.572 11:43:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:38.572 11:43:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:38.572 11:43:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:12:38.572 { 00:12:38.572 "cntlid": 11, 00:12:38.572 "qid": 0, 00:12:38.572 "state": "enabled", 00:12:38.572 "thread": "nvmf_tgt_poll_group_000", 00:12:38.572 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:f820f793-c892-4aa4-a8a4-5ed3fda41d6c", 00:12:38.572 "listen_address": { 00:12:38.572 "trtype": "TCP", 00:12:38.572 "adrfam": "IPv4", 00:12:38.572 "traddr": "10.0.0.3", 00:12:38.572 "trsvcid": "4420" 00:12:38.572 }, 00:12:38.572 "peer_address": { 00:12:38.572 "trtype": "TCP", 00:12:38.572 "adrfam": "IPv4", 00:12:38.572 "traddr": "10.0.0.1", 00:12:38.572 "trsvcid": "47194" 00:12:38.572 }, 00:12:38.572 "auth": { 00:12:38.572 "state": "completed", 00:12:38.572 "digest": "sha256", 00:12:38.572 "dhgroup": "ffdhe2048" 00:12:38.572 } 00:12:38.572 } 00:12:38.572 ]' 00:12:38.831 11:43:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:12:38.831 11:43:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:12:38.831 11:43:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:12:38.831 11:43:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:12:38.831 11:43:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:12:38.831 11:43:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:12:38.831 11:43:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:12:38.831 
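[Editor's note] The same cycle is then repeated through the kernel initiator, which is what the nvme_connect/disconnect entries in this log correspond to. The sketch below mirrors those entries; the DHHC-1 secrets are placeholders standing in for the in-band secrets that match key0/ckey0 in the test.

```bash
# Minimal sketch of the nvme-cli leg of each cycle, as seen in the log.
subnqn=nqn.2024-03.io.spdk:cnode0
hostnqn=nqn.2014-08.org.nvmexpress:uuid:f820f793-c892-4aa4-a8a4-5ed3fda41d6c
hostid=f820f793-c892-4aa4-a8a4-5ed3fda41d6c

# Connect via nvme-cli, authenticating in-band with DH-HMAC-CHAP
# (bidirectional: --dhchap-secret is the host key, --dhchap-ctrl-secret the controller key).
# The secret strings below are placeholders, not the values used by the test.
nvme connect -t tcp -a 10.0.0.3 -n $subnqn -i 1 -q $hostnqn --hostid $hostid -l 0 \
    --dhchap-secret "DHHC-1:00:<host-secret>:" --dhchap-ctrl-secret "DHHC-1:03:<ctrl-secret>:"

# Drop the connection and revoke the host entry before the next key/DH-group combination.
nvme disconnect -n $subnqn
/home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_host $subnqn $hostnqn
```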
11:43:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:12:39.090 11:43:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:MGMwMWVjMzNiYTU1NmZmODEzYmM3YjMzYWExMWQzMGYhYOlP: --dhchap-ctrl-secret DHHC-1:02:MDAzYzhlYTVlNDRhZjJhOGY4YjFjMzdjMjM2ZTBjYjI4NzUyNWU1YzQ3ZmQ3MDM2ej39yA==: 00:12:39.090 11:43:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:f820f793-c892-4aa4-a8a4-5ed3fda41d6c --hostid f820f793-c892-4aa4-a8a4-5ed3fda41d6c -l 0 --dhchap-secret DHHC-1:01:MGMwMWVjMzNiYTU1NmZmODEzYmM3YjMzYWExMWQzMGYhYOlP: --dhchap-ctrl-secret DHHC-1:02:MDAzYzhlYTVlNDRhZjJhOGY4YjFjMzdjMjM2ZTBjYjI4NzUyNWU1YzQ3ZmQ3MDM2ej39yA==: 00:12:39.658 11:43:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:12:39.917 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:12:39.917 11:43:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:f820f793-c892-4aa4-a8a4-5ed3fda41d6c 00:12:39.917 11:43:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:39.917 11:43:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:39.917 11:43:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:39.917 11:43:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:12:39.917 11:43:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:12:39.917 11:43:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:12:40.176 11:43:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe2048 2 00:12:40.176 11:43:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:12:40.176 11:43:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:12:40.176 11:43:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:12:40.176 11:43:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:12:40.176 11:43:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:12:40.176 11:43:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:f820f793-c892-4aa4-a8a4-5ed3fda41d6c --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:12:40.176 11:43:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:40.176 11:43:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:40.176 11:43:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 
0 == 0 ]] 00:12:40.176 11:43:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:12:40.176 11:43:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:f820f793-c892-4aa4-a8a4-5ed3fda41d6c -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:12:40.176 11:43:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:f820f793-c892-4aa4-a8a4-5ed3fda41d6c -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:12:40.435 00:12:40.435 11:43:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:12:40.435 11:43:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:12:40.435 11:43:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:12:40.694 11:43:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:12:40.694 11:43:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:12:40.694 11:43:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:40.694 11:43:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:40.694 11:43:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:40.694 11:43:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:12:40.694 { 00:12:40.694 "cntlid": 13, 00:12:40.694 "qid": 0, 00:12:40.694 "state": "enabled", 00:12:40.694 "thread": "nvmf_tgt_poll_group_000", 00:12:40.694 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:f820f793-c892-4aa4-a8a4-5ed3fda41d6c", 00:12:40.694 "listen_address": { 00:12:40.694 "trtype": "TCP", 00:12:40.694 "adrfam": "IPv4", 00:12:40.694 "traddr": "10.0.0.3", 00:12:40.694 "trsvcid": "4420" 00:12:40.694 }, 00:12:40.694 "peer_address": { 00:12:40.694 "trtype": "TCP", 00:12:40.694 "adrfam": "IPv4", 00:12:40.694 "traddr": "10.0.0.1", 00:12:40.694 "trsvcid": "47206" 00:12:40.694 }, 00:12:40.694 "auth": { 00:12:40.694 "state": "completed", 00:12:40.694 "digest": "sha256", 00:12:40.694 "dhgroup": "ffdhe2048" 00:12:40.694 } 00:12:40.694 } 00:12:40.694 ]' 00:12:40.694 11:43:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:12:40.694 11:43:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:12:40.695 11:43:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:12:40.695 11:43:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:12:40.695 11:43:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:12:40.954 11:43:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:12:40.954 11:43:10 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:12:40.954 11:43:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:12:41.213 11:43:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:ZDc0MDI5OTk2MzZlOGRhODljM2I0OGVhMDEwOTIwNjNjY2VkZmYxODI0YjViNDBi899zgg==: --dhchap-ctrl-secret DHHC-1:01:Y2Q0NDk1OGM5NThhODQ5OWRiN2MzYjhlYTczN2QxMGZ97e95: 00:12:41.213 11:43:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:f820f793-c892-4aa4-a8a4-5ed3fda41d6c --hostid f820f793-c892-4aa4-a8a4-5ed3fda41d6c -l 0 --dhchap-secret DHHC-1:02:ZDc0MDI5OTk2MzZlOGRhODljM2I0OGVhMDEwOTIwNjNjY2VkZmYxODI0YjViNDBi899zgg==: --dhchap-ctrl-secret DHHC-1:01:Y2Q0NDk1OGM5NThhODQ5OWRiN2MzYjhlYTczN2QxMGZ97e95: 00:12:41.782 11:43:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:12:41.782 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:12:41.782 11:43:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:f820f793-c892-4aa4-a8a4-5ed3fda41d6c 00:12:41.782 11:43:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:41.782 11:43:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:41.782 11:43:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:41.782 11:43:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:12:41.782 11:43:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:12:41.782 11:43:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:12:42.041 11:43:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe2048 3 00:12:42.041 11:43:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:12:42.041 11:43:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:12:42.041 11:43:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:12:42.041 11:43:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:12:42.041 11:43:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:12:42.041 11:43:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:f820f793-c892-4aa4-a8a4-5ed3fda41d6c --dhchap-key key3 00:12:42.041 11:43:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:42.041 11:43:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 
00:12:42.041 11:43:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:42.041 11:43:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:12:42.041 11:43:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:f820f793-c892-4aa4-a8a4-5ed3fda41d6c -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:12:42.041 11:43:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:f820f793-c892-4aa4-a8a4-5ed3fda41d6c -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:12:42.659 00:12:42.659 11:43:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:12:42.659 11:43:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:12:42.660 11:43:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:12:42.918 11:43:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:12:42.918 11:43:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:12:42.918 11:43:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:42.918 11:43:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:42.918 11:43:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:42.918 11:43:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:12:42.918 { 00:12:42.918 "cntlid": 15, 00:12:42.918 "qid": 0, 00:12:42.918 "state": "enabled", 00:12:42.918 "thread": "nvmf_tgt_poll_group_000", 00:12:42.918 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:f820f793-c892-4aa4-a8a4-5ed3fda41d6c", 00:12:42.918 "listen_address": { 00:12:42.918 "trtype": "TCP", 00:12:42.918 "adrfam": "IPv4", 00:12:42.918 "traddr": "10.0.0.3", 00:12:42.918 "trsvcid": "4420" 00:12:42.918 }, 00:12:42.918 "peer_address": { 00:12:42.918 "trtype": "TCP", 00:12:42.918 "adrfam": "IPv4", 00:12:42.918 "traddr": "10.0.0.1", 00:12:42.918 "trsvcid": "49862" 00:12:42.918 }, 00:12:42.918 "auth": { 00:12:42.918 "state": "completed", 00:12:42.918 "digest": "sha256", 00:12:42.918 "dhgroup": "ffdhe2048" 00:12:42.918 } 00:12:42.918 } 00:12:42.918 ]' 00:12:42.918 11:43:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:12:42.918 11:43:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:12:42.918 11:43:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:12:43.177 11:43:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:12:43.177 11:43:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:12:43.177 11:43:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:12:43.177 
11:43:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:12:43.177 11:43:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:12:43.435 11:43:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:ODlhMjcwNTk5OTE2MzM1NzA0MzY3MGI2Mzc1MzI1N2EyODZlZDZjYTJmODllODRhNTE3MzIzYTk3MDU4ODhhNgTRqT0=: 00:12:43.435 11:43:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:f820f793-c892-4aa4-a8a4-5ed3fda41d6c --hostid f820f793-c892-4aa4-a8a4-5ed3fda41d6c -l 0 --dhchap-secret DHHC-1:03:ODlhMjcwNTk5OTE2MzM1NzA0MzY3MGI2Mzc1MzI1N2EyODZlZDZjYTJmODllODRhNTE3MzIzYTk3MDU4ODhhNgTRqT0=: 00:12:44.372 11:43:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:12:44.372 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:12:44.372 11:43:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:f820f793-c892-4aa4-a8a4-5ed3fda41d6c 00:12:44.372 11:43:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:44.372 11:43:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:44.372 11:43:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:44.372 11:43:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:12:44.372 11:43:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:12:44.372 11:43:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:12:44.372 11:43:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:12:44.372 11:43:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe3072 0 00:12:44.372 11:43:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:12:44.372 11:43:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:12:44.372 11:43:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:12:44.372 11:43:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:12:44.372 11:43:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:12:44.372 11:43:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:f820f793-c892-4aa4-a8a4-5ed3fda41d6c --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:12:44.372 11:43:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:44.372 11:43:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
common/autotest_common.sh@10 -- # set +x 00:12:44.372 11:43:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:44.372 11:43:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:12:44.372 11:43:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:f820f793-c892-4aa4-a8a4-5ed3fda41d6c -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:12:44.372 11:43:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:f820f793-c892-4aa4-a8a4-5ed3fda41d6c -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:12:44.940 00:12:44.940 11:43:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:12:44.940 11:43:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:12:44.940 11:43:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:12:45.197 11:43:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:12:45.197 11:43:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:12:45.197 11:43:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:45.197 11:43:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:45.197 11:43:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:45.197 11:43:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:12:45.197 { 00:12:45.197 "cntlid": 17, 00:12:45.197 "qid": 0, 00:12:45.198 "state": "enabled", 00:12:45.198 "thread": "nvmf_tgt_poll_group_000", 00:12:45.198 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:f820f793-c892-4aa4-a8a4-5ed3fda41d6c", 00:12:45.198 "listen_address": { 00:12:45.198 "trtype": "TCP", 00:12:45.198 "adrfam": "IPv4", 00:12:45.198 "traddr": "10.0.0.3", 00:12:45.198 "trsvcid": "4420" 00:12:45.198 }, 00:12:45.198 "peer_address": { 00:12:45.198 "trtype": "TCP", 00:12:45.198 "adrfam": "IPv4", 00:12:45.198 "traddr": "10.0.0.1", 00:12:45.198 "trsvcid": "49888" 00:12:45.198 }, 00:12:45.198 "auth": { 00:12:45.198 "state": "completed", 00:12:45.198 "digest": "sha256", 00:12:45.198 "dhgroup": "ffdhe3072" 00:12:45.198 } 00:12:45.198 } 00:12:45.198 ]' 00:12:45.198 11:43:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:12:45.198 11:43:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:12:45.198 11:43:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:12:45.198 11:43:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:12:45.198 11:43:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:12:45.456 11:43:15 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:12:45.456 11:43:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:12:45.456 11:43:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:12:45.715 11:43:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:MGUzOTZhM2U3Y2FkNmNjYjU1YmU5MWQ0ZjQzZWRiM2M3MWY2YzcwY2M3MzMwNjdiPeXDPw==: --dhchap-ctrl-secret DHHC-1:03:NGM0MGM1ZGU2ODA1ZWRjYjdlYzA4YmRiMTk1MTBmMWRmYmY3Y2VhMGYxMzJlOWZhMzM5ZDdmNTM1NzkzYTA3Ze5j/1s=: 00:12:45.715 11:43:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:f820f793-c892-4aa4-a8a4-5ed3fda41d6c --hostid f820f793-c892-4aa4-a8a4-5ed3fda41d6c -l 0 --dhchap-secret DHHC-1:00:MGUzOTZhM2U3Y2FkNmNjYjU1YmU5MWQ0ZjQzZWRiM2M3MWY2YzcwY2M3MzMwNjdiPeXDPw==: --dhchap-ctrl-secret DHHC-1:03:NGM0MGM1ZGU2ODA1ZWRjYjdlYzA4YmRiMTk1MTBmMWRmYmY3Y2VhMGYxMzJlOWZhMzM5ZDdmNTM1NzkzYTA3Ze5j/1s=: 00:12:46.283 11:43:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:12:46.283 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:12:46.283 11:43:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:f820f793-c892-4aa4-a8a4-5ed3fda41d6c 00:12:46.283 11:43:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:46.283 11:43:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:46.283 11:43:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:46.283 11:43:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:12:46.283 11:43:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:12:46.283 11:43:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:12:46.852 11:43:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe3072 1 00:12:46.852 11:43:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:12:46.852 11:43:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:12:46.852 11:43:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:12:46.852 11:43:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:12:46.852 11:43:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:12:46.852 11:43:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:f820f793-c892-4aa4-a8a4-5ed3fda41d6c --dhchap-key key1 --dhchap-ctrlr-key 
ckey1 00:12:46.852 11:43:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:46.852 11:43:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:46.852 11:43:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:46.852 11:43:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:12:46.852 11:43:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:f820f793-c892-4aa4-a8a4-5ed3fda41d6c -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:12:46.852 11:43:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:f820f793-c892-4aa4-a8a4-5ed3fda41d6c -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:12:47.111 00:12:47.111 11:43:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:12:47.111 11:43:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:12:47.111 11:43:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:12:47.370 11:43:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:12:47.370 11:43:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:12:47.370 11:43:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:47.370 11:43:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:47.370 11:43:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:47.370 11:43:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:12:47.370 { 00:12:47.370 "cntlid": 19, 00:12:47.370 "qid": 0, 00:12:47.370 "state": "enabled", 00:12:47.370 "thread": "nvmf_tgt_poll_group_000", 00:12:47.370 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:f820f793-c892-4aa4-a8a4-5ed3fda41d6c", 00:12:47.370 "listen_address": { 00:12:47.370 "trtype": "TCP", 00:12:47.370 "adrfam": "IPv4", 00:12:47.370 "traddr": "10.0.0.3", 00:12:47.370 "trsvcid": "4420" 00:12:47.370 }, 00:12:47.370 "peer_address": { 00:12:47.370 "trtype": "TCP", 00:12:47.370 "adrfam": "IPv4", 00:12:47.370 "traddr": "10.0.0.1", 00:12:47.370 "trsvcid": "49912" 00:12:47.370 }, 00:12:47.370 "auth": { 00:12:47.370 "state": "completed", 00:12:47.370 "digest": "sha256", 00:12:47.370 "dhgroup": "ffdhe3072" 00:12:47.370 } 00:12:47.370 } 00:12:47.370 ]' 00:12:47.370 11:43:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:12:47.370 11:43:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:12:47.370 11:43:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:12:47.370 11:43:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:12:47.370 11:43:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:12:47.629 11:43:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:12:47.629 11:43:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:12:47.629 11:43:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:12:47.887 11:43:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:MGMwMWVjMzNiYTU1NmZmODEzYmM3YjMzYWExMWQzMGYhYOlP: --dhchap-ctrl-secret DHHC-1:02:MDAzYzhlYTVlNDRhZjJhOGY4YjFjMzdjMjM2ZTBjYjI4NzUyNWU1YzQ3ZmQ3MDM2ej39yA==: 00:12:47.887 11:43:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:f820f793-c892-4aa4-a8a4-5ed3fda41d6c --hostid f820f793-c892-4aa4-a8a4-5ed3fda41d6c -l 0 --dhchap-secret DHHC-1:01:MGMwMWVjMzNiYTU1NmZmODEzYmM3YjMzYWExMWQzMGYhYOlP: --dhchap-ctrl-secret DHHC-1:02:MDAzYzhlYTVlNDRhZjJhOGY4YjFjMzdjMjM2ZTBjYjI4NzUyNWU1YzQ3ZmQ3MDM2ej39yA==: 00:12:48.824 11:43:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:12:48.824 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:12:48.824 11:43:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:f820f793-c892-4aa4-a8a4-5ed3fda41d6c 00:12:48.824 11:43:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:48.824 11:43:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:48.824 11:43:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:48.824 11:43:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:12:48.824 11:43:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:12:48.824 11:43:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:12:48.824 11:43:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe3072 2 00:12:48.824 11:43:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:12:48.824 11:43:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:12:48.824 11:43:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:12:48.824 11:43:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:12:48.824 11:43:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:12:48.824 11:43:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host 
nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:f820f793-c892-4aa4-a8a4-5ed3fda41d6c --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:12:48.824 11:43:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:48.824 11:43:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:48.824 11:43:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:48.824 11:43:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:12:48.824 11:43:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:f820f793-c892-4aa4-a8a4-5ed3fda41d6c -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:12:48.824 11:43:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:f820f793-c892-4aa4-a8a4-5ed3fda41d6c -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:12:49.390 00:12:49.390 11:43:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:12:49.390 11:43:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:12:49.390 11:43:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:12:49.710 11:43:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:12:49.711 11:43:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:12:49.711 11:43:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:49.711 11:43:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:49.711 11:43:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:49.711 11:43:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:12:49.711 { 00:12:49.711 "cntlid": 21, 00:12:49.711 "qid": 0, 00:12:49.711 "state": "enabled", 00:12:49.711 "thread": "nvmf_tgt_poll_group_000", 00:12:49.711 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:f820f793-c892-4aa4-a8a4-5ed3fda41d6c", 00:12:49.711 "listen_address": { 00:12:49.711 "trtype": "TCP", 00:12:49.711 "adrfam": "IPv4", 00:12:49.711 "traddr": "10.0.0.3", 00:12:49.711 "trsvcid": "4420" 00:12:49.711 }, 00:12:49.711 "peer_address": { 00:12:49.711 "trtype": "TCP", 00:12:49.711 "adrfam": "IPv4", 00:12:49.711 "traddr": "10.0.0.1", 00:12:49.711 "trsvcid": "49936" 00:12:49.711 }, 00:12:49.711 "auth": { 00:12:49.711 "state": "completed", 00:12:49.711 "digest": "sha256", 00:12:49.711 "dhgroup": "ffdhe3072" 00:12:49.711 } 00:12:49.711 } 00:12:49.711 ]' 00:12:49.711 11:43:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:12:49.711 11:43:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:12:49.711 11:43:19 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:12:49.711 11:43:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:12:49.711 11:43:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:12:49.711 11:43:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:12:49.711 11:43:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:12:49.711 11:43:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:12:49.970 11:43:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:ZDc0MDI5OTk2MzZlOGRhODljM2I0OGVhMDEwOTIwNjNjY2VkZmYxODI0YjViNDBi899zgg==: --dhchap-ctrl-secret DHHC-1:01:Y2Q0NDk1OGM5NThhODQ5OWRiN2MzYjhlYTczN2QxMGZ97e95: 00:12:49.970 11:43:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:f820f793-c892-4aa4-a8a4-5ed3fda41d6c --hostid f820f793-c892-4aa4-a8a4-5ed3fda41d6c -l 0 --dhchap-secret DHHC-1:02:ZDc0MDI5OTk2MzZlOGRhODljM2I0OGVhMDEwOTIwNjNjY2VkZmYxODI0YjViNDBi899zgg==: --dhchap-ctrl-secret DHHC-1:01:Y2Q0NDk1OGM5NThhODQ5OWRiN2MzYjhlYTczN2QxMGZ97e95: 00:12:50.538 11:43:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:12:50.538 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:12:50.538 11:43:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:f820f793-c892-4aa4-a8a4-5ed3fda41d6c 00:12:50.538 11:43:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:50.538 11:43:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:50.538 11:43:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:50.538 11:43:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:12:50.538 11:43:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:12:50.538 11:43:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:12:50.797 11:43:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe3072 3 00:12:50.797 11:43:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:12:50.797 11:43:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:12:50.797 11:43:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:12:50.797 11:43:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:12:50.797 11:43:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # 
ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:12:50.797 11:43:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:f820f793-c892-4aa4-a8a4-5ed3fda41d6c --dhchap-key key3 00:12:50.797 11:43:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:50.797 11:43:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:51.055 11:43:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:51.055 11:43:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:12:51.055 11:43:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:f820f793-c892-4aa4-a8a4-5ed3fda41d6c -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:12:51.055 11:43:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:f820f793-c892-4aa4-a8a4-5ed3fda41d6c -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:12:51.313 00:12:51.313 11:43:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:12:51.313 11:43:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:12:51.313 11:43:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:12:51.572 11:43:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:12:51.572 11:43:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:12:51.572 11:43:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:51.572 11:43:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:51.572 11:43:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:51.572 11:43:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:12:51.572 { 00:12:51.572 "cntlid": 23, 00:12:51.572 "qid": 0, 00:12:51.572 "state": "enabled", 00:12:51.572 "thread": "nvmf_tgt_poll_group_000", 00:12:51.572 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:f820f793-c892-4aa4-a8a4-5ed3fda41d6c", 00:12:51.572 "listen_address": { 00:12:51.572 "trtype": "TCP", 00:12:51.572 "adrfam": "IPv4", 00:12:51.572 "traddr": "10.0.0.3", 00:12:51.572 "trsvcid": "4420" 00:12:51.572 }, 00:12:51.572 "peer_address": { 00:12:51.572 "trtype": "TCP", 00:12:51.572 "adrfam": "IPv4", 00:12:51.572 "traddr": "10.0.0.1", 00:12:51.572 "trsvcid": "49964" 00:12:51.572 }, 00:12:51.572 "auth": { 00:12:51.572 "state": "completed", 00:12:51.572 "digest": "sha256", 00:12:51.572 "dhgroup": "ffdhe3072" 00:12:51.572 } 00:12:51.572 } 00:12:51.572 ]' 00:12:51.572 11:43:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:12:51.572 11:43:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == 
\s\h\a\2\5\6 ]] 00:12:51.572 11:43:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:12:51.572 11:43:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:12:51.572 11:43:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:12:51.572 11:43:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:12:51.572 11:43:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:12:51.572 11:43:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:12:51.831 11:43:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:ODlhMjcwNTk5OTE2MzM1NzA0MzY3MGI2Mzc1MzI1N2EyODZlZDZjYTJmODllODRhNTE3MzIzYTk3MDU4ODhhNgTRqT0=: 00:12:51.831 11:43:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:f820f793-c892-4aa4-a8a4-5ed3fda41d6c --hostid f820f793-c892-4aa4-a8a4-5ed3fda41d6c -l 0 --dhchap-secret DHHC-1:03:ODlhMjcwNTk5OTE2MzM1NzA0MzY3MGI2Mzc1MzI1N2EyODZlZDZjYTJmODllODRhNTE3MzIzYTk3MDU4ODhhNgTRqT0=: 00:12:52.766 11:43:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:12:52.766 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:12:52.766 11:43:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:f820f793-c892-4aa4-a8a4-5ed3fda41d6c 00:12:52.766 11:43:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:52.766 11:43:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:52.766 11:43:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:52.766 11:43:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:12:52.766 11:43:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:12:52.766 11:43:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:12:52.766 11:43:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:12:53.026 11:43:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe4096 0 00:12:53.026 11:43:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:12:53.026 11:43:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:12:53.026 11:43:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096 00:12:53.026 11:43:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:12:53.026 11:43:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:12:53.026 11:43:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:f820f793-c892-4aa4-a8a4-5ed3fda41d6c --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:12:53.026 11:43:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:53.026 11:43:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:53.026 11:43:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:53.026 11:43:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:12:53.026 11:43:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:f820f793-c892-4aa4-a8a4-5ed3fda41d6c -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:12:53.026 11:43:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:f820f793-c892-4aa4-a8a4-5ed3fda41d6c -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:12:53.286 00:12:53.286 11:43:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:12:53.286 11:43:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:12:53.286 11:43:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:12:53.546 11:43:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:12:53.546 11:43:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:12:53.546 11:43:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:53.546 11:43:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:53.546 11:43:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:53.546 11:43:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:12:53.546 { 00:12:53.546 "cntlid": 25, 00:12:53.546 "qid": 0, 00:12:53.546 "state": "enabled", 00:12:53.546 "thread": "nvmf_tgt_poll_group_000", 00:12:53.546 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:f820f793-c892-4aa4-a8a4-5ed3fda41d6c", 00:12:53.546 "listen_address": { 00:12:53.546 "trtype": "TCP", 00:12:53.546 "adrfam": "IPv4", 00:12:53.546 "traddr": "10.0.0.3", 00:12:53.546 "trsvcid": "4420" 00:12:53.546 }, 00:12:53.546 "peer_address": { 00:12:53.546 "trtype": "TCP", 00:12:53.546 "adrfam": "IPv4", 00:12:53.546 "traddr": "10.0.0.1", 00:12:53.546 "trsvcid": "33324" 00:12:53.546 }, 00:12:53.546 "auth": { 00:12:53.546 "state": "completed", 00:12:53.546 "digest": "sha256", 00:12:53.546 "dhgroup": "ffdhe4096" 00:12:53.546 } 00:12:53.546 } 00:12:53.546 ]' 00:12:53.546 11:43:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r 
'.[0].auth.digest' 00:12:53.805 11:43:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:12:53.805 11:43:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:12:53.805 11:43:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:12:53.805 11:43:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:12:53.805 11:43:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:12:53.805 11:43:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:12:53.805 11:43:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:12:54.064 11:43:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:MGUzOTZhM2U3Y2FkNmNjYjU1YmU5MWQ0ZjQzZWRiM2M3MWY2YzcwY2M3MzMwNjdiPeXDPw==: --dhchap-ctrl-secret DHHC-1:03:NGM0MGM1ZGU2ODA1ZWRjYjdlYzA4YmRiMTk1MTBmMWRmYmY3Y2VhMGYxMzJlOWZhMzM5ZDdmNTM1NzkzYTA3Ze5j/1s=: 00:12:54.064 11:43:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:f820f793-c892-4aa4-a8a4-5ed3fda41d6c --hostid f820f793-c892-4aa4-a8a4-5ed3fda41d6c -l 0 --dhchap-secret DHHC-1:00:MGUzOTZhM2U3Y2FkNmNjYjU1YmU5MWQ0ZjQzZWRiM2M3MWY2YzcwY2M3MzMwNjdiPeXDPw==: --dhchap-ctrl-secret DHHC-1:03:NGM0MGM1ZGU2ODA1ZWRjYjdlYzA4YmRiMTk1MTBmMWRmYmY3Y2VhMGYxMzJlOWZhMzM5ZDdmNTM1NzkzYTA3Ze5j/1s=: 00:12:54.634 11:43:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:12:54.634 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:12:54.634 11:43:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:f820f793-c892-4aa4-a8a4-5ed3fda41d6c 00:12:54.634 11:43:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:54.634 11:43:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:54.634 11:43:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:54.634 11:43:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:12:54.634 11:43:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:12:54.634 11:43:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:12:54.894 11:43:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe4096 1 00:12:54.894 11:43:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:12:54.894 11:43:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:12:54.894 11:43:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@67 -- # dhgroup=ffdhe4096 00:12:54.894 11:43:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:12:54.894 11:43:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:12:54.894 11:43:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:f820f793-c892-4aa4-a8a4-5ed3fda41d6c --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:12:54.894 11:43:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:54.894 11:43:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:54.894 11:43:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:54.894 11:43:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:12:54.894 11:43:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:f820f793-c892-4aa4-a8a4-5ed3fda41d6c -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:12:54.894 11:43:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:f820f793-c892-4aa4-a8a4-5ed3fda41d6c -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:12:55.462 00:12:55.462 11:43:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:12:55.462 11:43:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:12:55.462 11:43:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:12:55.722 11:43:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:12:55.722 11:43:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:12:55.722 11:43:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:55.722 11:43:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:55.722 11:43:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:55.722 11:43:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:12:55.722 { 00:12:55.722 "cntlid": 27, 00:12:55.722 "qid": 0, 00:12:55.722 "state": "enabled", 00:12:55.722 "thread": "nvmf_tgt_poll_group_000", 00:12:55.722 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:f820f793-c892-4aa4-a8a4-5ed3fda41d6c", 00:12:55.722 "listen_address": { 00:12:55.722 "trtype": "TCP", 00:12:55.722 "adrfam": "IPv4", 00:12:55.722 "traddr": "10.0.0.3", 00:12:55.722 "trsvcid": "4420" 00:12:55.722 }, 00:12:55.722 "peer_address": { 00:12:55.722 "trtype": "TCP", 00:12:55.722 "adrfam": "IPv4", 00:12:55.722 "traddr": "10.0.0.1", 00:12:55.722 "trsvcid": "33366" 00:12:55.722 }, 00:12:55.722 "auth": { 00:12:55.722 "state": "completed", 
00:12:55.722 "digest": "sha256", 00:12:55.722 "dhgroup": "ffdhe4096" 00:12:55.722 } 00:12:55.722 } 00:12:55.722 ]' 00:12:55.722 11:43:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:12:55.722 11:43:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:12:55.722 11:43:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:12:55.722 11:43:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:12:55.722 11:43:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:12:55.722 11:43:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:12:55.722 11:43:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:12:55.723 11:43:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:12:55.982 11:43:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:MGMwMWVjMzNiYTU1NmZmODEzYmM3YjMzYWExMWQzMGYhYOlP: --dhchap-ctrl-secret DHHC-1:02:MDAzYzhlYTVlNDRhZjJhOGY4YjFjMzdjMjM2ZTBjYjI4NzUyNWU1YzQ3ZmQ3MDM2ej39yA==: 00:12:55.982 11:43:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:f820f793-c892-4aa4-a8a4-5ed3fda41d6c --hostid f820f793-c892-4aa4-a8a4-5ed3fda41d6c -l 0 --dhchap-secret DHHC-1:01:MGMwMWVjMzNiYTU1NmZmODEzYmM3YjMzYWExMWQzMGYhYOlP: --dhchap-ctrl-secret DHHC-1:02:MDAzYzhlYTVlNDRhZjJhOGY4YjFjMzdjMjM2ZTBjYjI4NzUyNWU1YzQ3ZmQ3MDM2ej39yA==: 00:12:56.940 11:43:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:12:56.940 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:12:56.940 11:43:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:f820f793-c892-4aa4-a8a4-5ed3fda41d6c 00:12:56.940 11:43:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:56.940 11:43:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:56.940 11:43:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:56.940 11:43:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:12:56.940 11:43:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:12:56.940 11:43:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:12:56.940 11:43:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe4096 2 00:12:56.940 11:43:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:12:56.940 11:43:27 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:12:56.940 11:43:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096 00:12:56.940 11:43:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:12:56.940 11:43:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:12:56.940 11:43:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:f820f793-c892-4aa4-a8a4-5ed3fda41d6c --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:12:56.940 11:43:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:56.940 11:43:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:56.940 11:43:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:56.940 11:43:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:12:56.940 11:43:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:f820f793-c892-4aa4-a8a4-5ed3fda41d6c -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:12:56.940 11:43:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:f820f793-c892-4aa4-a8a4-5ed3fda41d6c -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:12:57.510 00:12:57.510 11:43:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:12:57.510 11:43:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:12:57.510 11:43:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:12:57.770 11:43:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:12:57.770 11:43:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:12:57.770 11:43:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:57.770 11:43:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:57.770 11:43:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:57.770 11:43:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:12:57.770 { 00:12:57.770 "cntlid": 29, 00:12:57.770 "qid": 0, 00:12:57.770 "state": "enabled", 00:12:57.770 "thread": "nvmf_tgt_poll_group_000", 00:12:57.770 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:f820f793-c892-4aa4-a8a4-5ed3fda41d6c", 00:12:57.770 "listen_address": { 00:12:57.770 "trtype": "TCP", 00:12:57.770 "adrfam": "IPv4", 00:12:57.770 "traddr": "10.0.0.3", 00:12:57.770 "trsvcid": "4420" 00:12:57.770 }, 00:12:57.770 "peer_address": { 00:12:57.770 "trtype": "TCP", 00:12:57.770 "adrfam": 
"IPv4", 00:12:57.770 "traddr": "10.0.0.1", 00:12:57.770 "trsvcid": "33376" 00:12:57.770 }, 00:12:57.770 "auth": { 00:12:57.770 "state": "completed", 00:12:57.770 "digest": "sha256", 00:12:57.770 "dhgroup": "ffdhe4096" 00:12:57.770 } 00:12:57.770 } 00:12:57.770 ]' 00:12:57.770 11:43:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:12:57.770 11:43:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:12:57.770 11:43:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:12:57.770 11:43:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:12:57.770 11:43:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:12:57.770 11:43:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:12:57.770 11:43:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:12:57.770 11:43:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:12:58.029 11:43:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:ZDc0MDI5OTk2MzZlOGRhODljM2I0OGVhMDEwOTIwNjNjY2VkZmYxODI0YjViNDBi899zgg==: --dhchap-ctrl-secret DHHC-1:01:Y2Q0NDk1OGM5NThhODQ5OWRiN2MzYjhlYTczN2QxMGZ97e95: 00:12:58.029 11:43:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:f820f793-c892-4aa4-a8a4-5ed3fda41d6c --hostid f820f793-c892-4aa4-a8a4-5ed3fda41d6c -l 0 --dhchap-secret DHHC-1:02:ZDc0MDI5OTk2MzZlOGRhODljM2I0OGVhMDEwOTIwNjNjY2VkZmYxODI0YjViNDBi899zgg==: --dhchap-ctrl-secret DHHC-1:01:Y2Q0NDk1OGM5NThhODQ5OWRiN2MzYjhlYTczN2QxMGZ97e95: 00:12:58.967 11:43:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:12:58.967 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:12:58.967 11:43:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:f820f793-c892-4aa4-a8a4-5ed3fda41d6c 00:12:58.967 11:43:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:58.967 11:43:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:58.967 11:43:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:58.967 11:43:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:12:58.967 11:43:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:12:58.967 11:43:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:12:59.226 11:43:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe4096 3 00:12:59.226 11:43:29 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:12:59.226 11:43:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:12:59.226 11:43:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096 00:12:59.226 11:43:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:12:59.226 11:43:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:12:59.226 11:43:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:f820f793-c892-4aa4-a8a4-5ed3fda41d6c --dhchap-key key3 00:12:59.226 11:43:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:59.226 11:43:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:59.226 11:43:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:59.226 11:43:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:12:59.226 11:43:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:f820f793-c892-4aa4-a8a4-5ed3fda41d6c -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:12:59.226 11:43:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:f820f793-c892-4aa4-a8a4-5ed3fda41d6c -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:12:59.486 00:12:59.486 11:43:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:12:59.486 11:43:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:12:59.486 11:43:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:12:59.746 11:43:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:12:59.746 11:43:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:12:59.746 11:43:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:59.746 11:43:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:59.746 11:43:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:59.746 11:43:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:12:59.746 { 00:12:59.746 "cntlid": 31, 00:12:59.746 "qid": 0, 00:12:59.746 "state": "enabled", 00:12:59.746 "thread": "nvmf_tgt_poll_group_000", 00:12:59.746 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:f820f793-c892-4aa4-a8a4-5ed3fda41d6c", 00:12:59.746 "listen_address": { 00:12:59.746 "trtype": "TCP", 00:12:59.746 "adrfam": "IPv4", 00:12:59.746 "traddr": "10.0.0.3", 00:12:59.746 "trsvcid": "4420" 00:12:59.746 }, 00:12:59.746 "peer_address": { 00:12:59.746 "trtype": "TCP", 
00:12:59.746 "adrfam": "IPv4", 00:12:59.746 "traddr": "10.0.0.1", 00:12:59.746 "trsvcid": "33414" 00:12:59.746 }, 00:12:59.746 "auth": { 00:12:59.746 "state": "completed", 00:12:59.746 "digest": "sha256", 00:12:59.746 "dhgroup": "ffdhe4096" 00:12:59.746 } 00:12:59.746 } 00:12:59.746 ]' 00:12:59.746 11:43:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:12:59.746 11:43:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:12:59.746 11:43:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:13:00.006 11:43:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:13:00.006 11:43:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:13:00.006 11:43:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:13:00.006 11:43:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:13:00.006 11:43:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:13:00.264 11:43:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:ODlhMjcwNTk5OTE2MzM1NzA0MzY3MGI2Mzc1MzI1N2EyODZlZDZjYTJmODllODRhNTE3MzIzYTk3MDU4ODhhNgTRqT0=: 00:13:00.264 11:43:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:f820f793-c892-4aa4-a8a4-5ed3fda41d6c --hostid f820f793-c892-4aa4-a8a4-5ed3fda41d6c -l 0 --dhchap-secret DHHC-1:03:ODlhMjcwNTk5OTE2MzM1NzA0MzY3MGI2Mzc1MzI1N2EyODZlZDZjYTJmODllODRhNTE3MzIzYTk3MDU4ODhhNgTRqT0=: 00:13:00.833 11:43:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:13:01.093 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:13:01.093 11:43:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:f820f793-c892-4aa4-a8a4-5ed3fda41d6c 00:13:01.093 11:43:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:01.093 11:43:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:01.093 11:43:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:01.093 11:43:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:13:01.093 11:43:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:13:01.093 11:43:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:13:01.093 11:43:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:13:01.354 11:43:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe6144 0 00:13:01.354 
11:43:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:13:01.354 11:43:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:13:01.354 11:43:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144 00:13:01.354 11:43:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:13:01.354 11:43:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:13:01.354 11:43:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:f820f793-c892-4aa4-a8a4-5ed3fda41d6c --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:13:01.354 11:43:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:01.354 11:43:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:01.354 11:43:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:01.354 11:43:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:13:01.354 11:43:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:f820f793-c892-4aa4-a8a4-5ed3fda41d6c -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:13:01.354 11:43:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:f820f793-c892-4aa4-a8a4-5ed3fda41d6c -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:13:01.613 00:13:01.613 11:43:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:13:01.613 11:43:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:13:01.613 11:43:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:13:02.182 11:43:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:13:02.182 11:43:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:13:02.182 11:43:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:02.182 11:43:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:02.182 11:43:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:02.182 11:43:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:13:02.182 { 00:13:02.182 "cntlid": 33, 00:13:02.182 "qid": 0, 00:13:02.182 "state": "enabled", 00:13:02.182 "thread": "nvmf_tgt_poll_group_000", 00:13:02.182 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:f820f793-c892-4aa4-a8a4-5ed3fda41d6c", 00:13:02.182 "listen_address": { 00:13:02.182 "trtype": "TCP", 00:13:02.182 "adrfam": "IPv4", 00:13:02.182 "traddr": 
"10.0.0.3", 00:13:02.182 "trsvcid": "4420" 00:13:02.182 }, 00:13:02.182 "peer_address": { 00:13:02.182 "trtype": "TCP", 00:13:02.182 "adrfam": "IPv4", 00:13:02.182 "traddr": "10.0.0.1", 00:13:02.182 "trsvcid": "41954" 00:13:02.182 }, 00:13:02.182 "auth": { 00:13:02.182 "state": "completed", 00:13:02.182 "digest": "sha256", 00:13:02.182 "dhgroup": "ffdhe6144" 00:13:02.182 } 00:13:02.182 } 00:13:02.182 ]' 00:13:02.182 11:43:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:13:02.182 11:43:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:13:02.182 11:43:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:13:02.182 11:43:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:13:02.182 11:43:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:13:02.182 11:43:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:13:02.182 11:43:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:13:02.183 11:43:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:13:02.441 11:43:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:MGUzOTZhM2U3Y2FkNmNjYjU1YmU5MWQ0ZjQzZWRiM2M3MWY2YzcwY2M3MzMwNjdiPeXDPw==: --dhchap-ctrl-secret DHHC-1:03:NGM0MGM1ZGU2ODA1ZWRjYjdlYzA4YmRiMTk1MTBmMWRmYmY3Y2VhMGYxMzJlOWZhMzM5ZDdmNTM1NzkzYTA3Ze5j/1s=: 00:13:02.441 11:43:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:f820f793-c892-4aa4-a8a4-5ed3fda41d6c --hostid f820f793-c892-4aa4-a8a4-5ed3fda41d6c -l 0 --dhchap-secret DHHC-1:00:MGUzOTZhM2U3Y2FkNmNjYjU1YmU5MWQ0ZjQzZWRiM2M3MWY2YzcwY2M3MzMwNjdiPeXDPw==: --dhchap-ctrl-secret DHHC-1:03:NGM0MGM1ZGU2ODA1ZWRjYjdlYzA4YmRiMTk1MTBmMWRmYmY3Y2VhMGYxMzJlOWZhMzM5ZDdmNTM1NzkzYTA3Ze5j/1s=: 00:13:03.378 11:43:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:13:03.378 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:13:03.378 11:43:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:f820f793-c892-4aa4-a8a4-5ed3fda41d6c 00:13:03.378 11:43:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:03.378 11:43:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:03.378 11:43:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:03.378 11:43:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:13:03.378 11:43:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:13:03.378 11:43:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock 
bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:13:03.637 11:43:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe6144 1 00:13:03.637 11:43:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:13:03.637 11:43:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:13:03.637 11:43:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144 00:13:03.637 11:43:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:13:03.637 11:43:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:13:03.637 11:43:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:f820f793-c892-4aa4-a8a4-5ed3fda41d6c --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:13:03.637 11:43:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:03.637 11:43:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:03.637 11:43:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:03.637 11:43:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:13:03.637 11:43:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:f820f793-c892-4aa4-a8a4-5ed3fda41d6c -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:13:03.637 11:43:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:f820f793-c892-4aa4-a8a4-5ed3fda41d6c -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:13:04.204 00:13:04.204 11:43:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:13:04.204 11:43:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:13:04.204 11:43:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:13:04.463 11:43:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:13:04.463 11:43:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:13:04.463 11:43:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:04.463 11:43:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:04.463 11:43:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:04.463 11:43:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:13:04.463 { 00:13:04.463 "cntlid": 35, 00:13:04.463 "qid": 0, 00:13:04.463 "state": "enabled", 00:13:04.463 "thread": "nvmf_tgt_poll_group_000", 
00:13:04.463 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:f820f793-c892-4aa4-a8a4-5ed3fda41d6c", 00:13:04.463 "listen_address": { 00:13:04.463 "trtype": "TCP", 00:13:04.463 "adrfam": "IPv4", 00:13:04.463 "traddr": "10.0.0.3", 00:13:04.463 "trsvcid": "4420" 00:13:04.463 }, 00:13:04.463 "peer_address": { 00:13:04.463 "trtype": "TCP", 00:13:04.463 "adrfam": "IPv4", 00:13:04.463 "traddr": "10.0.0.1", 00:13:04.463 "trsvcid": "41988" 00:13:04.463 }, 00:13:04.463 "auth": { 00:13:04.463 "state": "completed", 00:13:04.463 "digest": "sha256", 00:13:04.463 "dhgroup": "ffdhe6144" 00:13:04.463 } 00:13:04.463 } 00:13:04.463 ]' 00:13:04.463 11:43:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:13:04.463 11:43:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:13:04.463 11:43:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:13:04.463 11:43:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:13:04.463 11:43:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:13:04.722 11:43:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:13:04.722 11:43:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:13:04.722 11:43:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:13:04.981 11:43:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:MGMwMWVjMzNiYTU1NmZmODEzYmM3YjMzYWExMWQzMGYhYOlP: --dhchap-ctrl-secret DHHC-1:02:MDAzYzhlYTVlNDRhZjJhOGY4YjFjMzdjMjM2ZTBjYjI4NzUyNWU1YzQ3ZmQ3MDM2ej39yA==: 00:13:04.981 11:43:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:f820f793-c892-4aa4-a8a4-5ed3fda41d6c --hostid f820f793-c892-4aa4-a8a4-5ed3fda41d6c -l 0 --dhchap-secret DHHC-1:01:MGMwMWVjMzNiYTU1NmZmODEzYmM3YjMzYWExMWQzMGYhYOlP: --dhchap-ctrl-secret DHHC-1:02:MDAzYzhlYTVlNDRhZjJhOGY4YjFjMzdjMjM2ZTBjYjI4NzUyNWU1YzQ3ZmQ3MDM2ej39yA==: 00:13:05.566 11:43:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:13:05.566 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:13:05.566 11:43:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:f820f793-c892-4aa4-a8a4-5ed3fda41d6c 00:13:05.566 11:43:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:05.566 11:43:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:05.566 11:43:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:05.566 11:43:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:13:05.566 11:43:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:13:05.566 11:43:35 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:13:06.133 11:43:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe6144 2 00:13:06.134 11:43:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:13:06.134 11:43:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:13:06.134 11:43:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144 00:13:06.134 11:43:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:13:06.134 11:43:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:13:06.134 11:43:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:f820f793-c892-4aa4-a8a4-5ed3fda41d6c --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:13:06.134 11:43:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:06.134 11:43:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:06.134 11:43:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:06.134 11:43:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:13:06.134 11:43:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:f820f793-c892-4aa4-a8a4-5ed3fda41d6c -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:13:06.134 11:43:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:f820f793-c892-4aa4-a8a4-5ed3fda41d6c -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:13:06.702 00:13:06.702 11:43:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:13:06.702 11:43:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:13:06.702 11:43:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:13:06.961 11:43:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:13:06.961 11:43:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:13:06.961 11:43:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:06.961 11:43:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:06.961 11:43:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:06.961 11:43:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:13:06.961 { 
00:13:06.961 "cntlid": 37, 00:13:06.961 "qid": 0, 00:13:06.961 "state": "enabled", 00:13:06.961 "thread": "nvmf_tgt_poll_group_000", 00:13:06.961 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:f820f793-c892-4aa4-a8a4-5ed3fda41d6c", 00:13:06.961 "listen_address": { 00:13:06.961 "trtype": "TCP", 00:13:06.961 "adrfam": "IPv4", 00:13:06.961 "traddr": "10.0.0.3", 00:13:06.961 "trsvcid": "4420" 00:13:06.961 }, 00:13:06.961 "peer_address": { 00:13:06.961 "trtype": "TCP", 00:13:06.961 "adrfam": "IPv4", 00:13:06.961 "traddr": "10.0.0.1", 00:13:06.961 "trsvcid": "42022" 00:13:06.961 }, 00:13:06.961 "auth": { 00:13:06.961 "state": "completed", 00:13:06.961 "digest": "sha256", 00:13:06.961 "dhgroup": "ffdhe6144" 00:13:06.961 } 00:13:06.961 } 00:13:06.961 ]' 00:13:06.961 11:43:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:13:06.961 11:43:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:13:06.962 11:43:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:13:06.962 11:43:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:13:06.962 11:43:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:13:06.962 11:43:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:13:06.962 11:43:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:13:06.962 11:43:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:13:07.221 11:43:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:ZDc0MDI5OTk2MzZlOGRhODljM2I0OGVhMDEwOTIwNjNjY2VkZmYxODI0YjViNDBi899zgg==: --dhchap-ctrl-secret DHHC-1:01:Y2Q0NDk1OGM5NThhODQ5OWRiN2MzYjhlYTczN2QxMGZ97e95: 00:13:07.221 11:43:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:f820f793-c892-4aa4-a8a4-5ed3fda41d6c --hostid f820f793-c892-4aa4-a8a4-5ed3fda41d6c -l 0 --dhchap-secret DHHC-1:02:ZDc0MDI5OTk2MzZlOGRhODljM2I0OGVhMDEwOTIwNjNjY2VkZmYxODI0YjViNDBi899zgg==: --dhchap-ctrl-secret DHHC-1:01:Y2Q0NDk1OGM5NThhODQ5OWRiN2MzYjhlYTczN2QxMGZ97e95: 00:13:08.156 11:43:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:13:08.156 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:13:08.156 11:43:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:f820f793-c892-4aa4-a8a4-5ed3fda41d6c 00:13:08.156 11:43:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:08.156 11:43:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:08.156 11:43:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:08.156 11:43:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:13:08.156 11:43:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # 
hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:13:08.156 11:43:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:13:08.415 11:43:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe6144 3 00:13:08.415 11:43:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:13:08.415 11:43:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:13:08.415 11:43:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144 00:13:08.415 11:43:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:13:08.415 11:43:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:13:08.415 11:43:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:f820f793-c892-4aa4-a8a4-5ed3fda41d6c --dhchap-key key3 00:13:08.415 11:43:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:08.415 11:43:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:08.415 11:43:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:08.415 11:43:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:13:08.415 11:43:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:f820f793-c892-4aa4-a8a4-5ed3fda41d6c -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:13:08.415 11:43:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:f820f793-c892-4aa4-a8a4-5ed3fda41d6c -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:13:08.983 00:13:08.983 11:43:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:13:08.983 11:43:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:13:08.983 11:43:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:13:09.242 11:43:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:13:09.242 11:43:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:13:09.242 11:43:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:09.242 11:43:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:09.242 11:43:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:09.242 11:43:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 
00:13:09.242 { 00:13:09.242 "cntlid": 39, 00:13:09.242 "qid": 0, 00:13:09.242 "state": "enabled", 00:13:09.242 "thread": "nvmf_tgt_poll_group_000", 00:13:09.242 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:f820f793-c892-4aa4-a8a4-5ed3fda41d6c", 00:13:09.242 "listen_address": { 00:13:09.242 "trtype": "TCP", 00:13:09.242 "adrfam": "IPv4", 00:13:09.242 "traddr": "10.0.0.3", 00:13:09.242 "trsvcid": "4420" 00:13:09.242 }, 00:13:09.242 "peer_address": { 00:13:09.242 "trtype": "TCP", 00:13:09.242 "adrfam": "IPv4", 00:13:09.242 "traddr": "10.0.0.1", 00:13:09.242 "trsvcid": "42042" 00:13:09.242 }, 00:13:09.242 "auth": { 00:13:09.242 "state": "completed", 00:13:09.242 "digest": "sha256", 00:13:09.242 "dhgroup": "ffdhe6144" 00:13:09.242 } 00:13:09.242 } 00:13:09.242 ]' 00:13:09.242 11:43:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:13:09.501 11:43:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:13:09.501 11:43:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:13:09.501 11:43:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:13:09.501 11:43:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:13:09.501 11:43:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:13:09.501 11:43:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:13:09.501 11:43:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:13:09.779 11:43:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:ODlhMjcwNTk5OTE2MzM1NzA0MzY3MGI2Mzc1MzI1N2EyODZlZDZjYTJmODllODRhNTE3MzIzYTk3MDU4ODhhNgTRqT0=: 00:13:09.780 11:43:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:f820f793-c892-4aa4-a8a4-5ed3fda41d6c --hostid f820f793-c892-4aa4-a8a4-5ed3fda41d6c -l 0 --dhchap-secret DHHC-1:03:ODlhMjcwNTk5OTE2MzM1NzA0MzY3MGI2Mzc1MzI1N2EyODZlZDZjYTJmODllODRhNTE3MzIzYTk3MDU4ODhhNgTRqT0=: 00:13:10.714 11:43:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:13:10.714 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:13:10.714 11:43:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:f820f793-c892-4aa4-a8a4-5ed3fda41d6c 00:13:10.714 11:43:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:10.714 11:43:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:10.714 11:43:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:10.714 11:43:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:13:10.714 11:43:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:13:10.714 11:43:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:13:10.714 11:43:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:13:10.973 11:43:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe8192 0 00:13:10.973 11:43:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:13:10.973 11:43:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:13:10.973 11:43:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:13:10.973 11:43:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:13:10.973 11:43:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:13:10.973 11:43:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:f820f793-c892-4aa4-a8a4-5ed3fda41d6c --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:13:10.973 11:43:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:10.973 11:43:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:10.973 11:43:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:10.973 11:43:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:13:10.973 11:43:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:f820f793-c892-4aa4-a8a4-5ed3fda41d6c -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:13:10.973 11:43:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:f820f793-c892-4aa4-a8a4-5ed3fda41d6c -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:13:11.539 00:13:11.539 11:43:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:13:11.539 11:43:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:13:11.539 11:43:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:13:12.105 11:43:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:13:12.105 11:43:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:13:12.105 11:43:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:12.105 11:43:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:12.105 11:43:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 
0 == 0 ]] 00:13:12.105 11:43:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:13:12.105 { 00:13:12.105 "cntlid": 41, 00:13:12.105 "qid": 0, 00:13:12.105 "state": "enabled", 00:13:12.105 "thread": "nvmf_tgt_poll_group_000", 00:13:12.105 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:f820f793-c892-4aa4-a8a4-5ed3fda41d6c", 00:13:12.105 "listen_address": { 00:13:12.105 "trtype": "TCP", 00:13:12.105 "adrfam": "IPv4", 00:13:12.105 "traddr": "10.0.0.3", 00:13:12.105 "trsvcid": "4420" 00:13:12.105 }, 00:13:12.105 "peer_address": { 00:13:12.105 "trtype": "TCP", 00:13:12.105 "adrfam": "IPv4", 00:13:12.105 "traddr": "10.0.0.1", 00:13:12.105 "trsvcid": "37240" 00:13:12.105 }, 00:13:12.105 "auth": { 00:13:12.105 "state": "completed", 00:13:12.105 "digest": "sha256", 00:13:12.105 "dhgroup": "ffdhe8192" 00:13:12.105 } 00:13:12.105 } 00:13:12.105 ]' 00:13:12.105 11:43:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:13:12.105 11:43:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:13:12.105 11:43:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:13:12.105 11:43:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:13:12.105 11:43:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:13:12.105 11:43:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:13:12.105 11:43:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:13:12.105 11:43:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:13:12.675 11:43:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:MGUzOTZhM2U3Y2FkNmNjYjU1YmU5MWQ0ZjQzZWRiM2M3MWY2YzcwY2M3MzMwNjdiPeXDPw==: --dhchap-ctrl-secret DHHC-1:03:NGM0MGM1ZGU2ODA1ZWRjYjdlYzA4YmRiMTk1MTBmMWRmYmY3Y2VhMGYxMzJlOWZhMzM5ZDdmNTM1NzkzYTA3Ze5j/1s=: 00:13:12.675 11:43:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:f820f793-c892-4aa4-a8a4-5ed3fda41d6c --hostid f820f793-c892-4aa4-a8a4-5ed3fda41d6c -l 0 --dhchap-secret DHHC-1:00:MGUzOTZhM2U3Y2FkNmNjYjU1YmU5MWQ0ZjQzZWRiM2M3MWY2YzcwY2M3MzMwNjdiPeXDPw==: --dhchap-ctrl-secret DHHC-1:03:NGM0MGM1ZGU2ODA1ZWRjYjdlYzA4YmRiMTk1MTBmMWRmYmY3Y2VhMGYxMzJlOWZhMzM5ZDdmNTM1NzkzYTA3Ze5j/1s=: 00:13:13.241 11:43:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:13:13.241 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:13:13.241 11:43:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:f820f793-c892-4aa4-a8a4-5ed3fda41d6c 00:13:13.241 11:43:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:13.241 11:43:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:13.241 11:43:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 
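The pass above is one iteration of connect_authenticate for key0 with sha256/ffdhe8192: the host-side bdev_nvme options are pinned to a single digest/DH-group pair, the host NQN is added to cnode0 with a bidirectional key pair, and a controller is attached using the same keys. A minimal sketch of that RPC sequence, with $hostnqn standing in for the nqn.2014-08.org.nvmexpress:uuid:f820f793-... host NQN and scripts/rpc.py abbreviating the full /home/vagrant/spdk_repo/spdk/scripts/rpc.py path from the log; the log only shows the rpc_cmd wrapper for the target side, so the plain scripts/rpc.py call and its default socket there are assumptions:

# host side: restrict DH-HMAC-CHAP negotiation to one digest and one DH group
scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options \
    --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192
# target side: allow this host NQN on cnode0 and bind host key + controller key
scripts/rpc.py nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 "$hostnqn" \
    --dhchap-key key0 --dhchap-ctrlr-key ckey0
# host side: attach a controller, authenticating with the same key pair
scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller \
    -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q "$hostnqn" \
    -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0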
00:13:13.241 11:43:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:13:13.241 11:43:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:13:13.241 11:43:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:13:13.501 11:43:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe8192 1 00:13:13.501 11:43:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:13:13.501 11:43:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:13:13.501 11:43:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:13:13.501 11:43:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:13:13.501 11:43:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:13:13.501 11:43:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:f820f793-c892-4aa4-a8a4-5ed3fda41d6c --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:13:13.501 11:43:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:13.501 11:43:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:13.501 11:43:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:13.501 11:43:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:13:13.501 11:43:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:f820f793-c892-4aa4-a8a4-5ed3fda41d6c -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:13:13.501 11:43:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:f820f793-c892-4aa4-a8a4-5ed3fda41d6c -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:13:14.069 00:13:14.069 11:43:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:13:14.069 11:43:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:13:14.069 11:43:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:13:14.327 11:43:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:13:14.327 11:43:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:13:14.327 11:43:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:14.328 11:43:44 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:14.328 11:43:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:14.328 11:43:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:13:14.328 { 00:13:14.328 "cntlid": 43, 00:13:14.328 "qid": 0, 00:13:14.328 "state": "enabled", 00:13:14.328 "thread": "nvmf_tgt_poll_group_000", 00:13:14.328 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:f820f793-c892-4aa4-a8a4-5ed3fda41d6c", 00:13:14.328 "listen_address": { 00:13:14.328 "trtype": "TCP", 00:13:14.328 "adrfam": "IPv4", 00:13:14.328 "traddr": "10.0.0.3", 00:13:14.328 "trsvcid": "4420" 00:13:14.328 }, 00:13:14.328 "peer_address": { 00:13:14.328 "trtype": "TCP", 00:13:14.328 "adrfam": "IPv4", 00:13:14.328 "traddr": "10.0.0.1", 00:13:14.328 "trsvcid": "37274" 00:13:14.328 }, 00:13:14.328 "auth": { 00:13:14.328 "state": "completed", 00:13:14.328 "digest": "sha256", 00:13:14.328 "dhgroup": "ffdhe8192" 00:13:14.328 } 00:13:14.328 } 00:13:14.328 ]' 00:13:14.328 11:43:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:13:14.587 11:43:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:13:14.587 11:43:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:13:14.587 11:43:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:13:14.587 11:43:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:13:14.587 11:43:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:13:14.587 11:43:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:13:14.587 11:43:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:13:14.846 11:43:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:MGMwMWVjMzNiYTU1NmZmODEzYmM3YjMzYWExMWQzMGYhYOlP: --dhchap-ctrl-secret DHHC-1:02:MDAzYzhlYTVlNDRhZjJhOGY4YjFjMzdjMjM2ZTBjYjI4NzUyNWU1YzQ3ZmQ3MDM2ej39yA==: 00:13:14.846 11:43:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:f820f793-c892-4aa4-a8a4-5ed3fda41d6c --hostid f820f793-c892-4aa4-a8a4-5ed3fda41d6c -l 0 --dhchap-secret DHHC-1:01:MGMwMWVjMzNiYTU1NmZmODEzYmM3YjMzYWExMWQzMGYhYOlP: --dhchap-ctrl-secret DHHC-1:02:MDAzYzhlYTVlNDRhZjJhOGY4YjFjMzdjMjM2ZTBjYjI4NzUyNWU1YzQ3ZmQ3MDM2ej39yA==: 00:13:15.793 11:43:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:13:15.793 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:13:15.793 11:43:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:f820f793-c892-4aa4-a8a4-5ed3fda41d6c 00:13:15.793 11:43:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:15.793 11:43:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 
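Each iteration then verifies the result on both sides: the host must list the controller under the expected name, and nvmf_subsystem_get_qpairs on the target must report exactly the digest and DH group that were configured, with auth.state equal to "completed". A sketch of that verification under the same socket assumptions as above, with $qpairs holding the JSON the test prints:

# host side: the attached controller must show up under the expected name
[[ "$(scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers | jq -r '.[].name')" == nvme0 ]]
# target side: the qpair must have finished DH-HMAC-CHAP with the configured parameters
qpairs=$(scripts/rpc.py nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0)
[[ "$(jq -r '.[0].auth.digest'  <<< "$qpairs")" == sha256 ]]
[[ "$(jq -r '.[0].auth.dhgroup' <<< "$qpairs")" == ffdhe8192 ]]
[[ "$(jq -r '.[0].auth.state'   <<< "$qpairs")" == completed ]]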
00:13:15.793 11:43:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:15.793 11:43:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:13:15.793 11:43:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:13:15.793 11:43:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:13:15.793 11:43:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe8192 2 00:13:15.793 11:43:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:13:15.793 11:43:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:13:15.793 11:43:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:13:15.793 11:43:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:13:15.793 11:43:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:13:15.793 11:43:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:f820f793-c892-4aa4-a8a4-5ed3fda41d6c --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:13:15.793 11:43:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:15.793 11:43:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:15.793 11:43:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:15.793 11:43:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:13:15.793 11:43:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:f820f793-c892-4aa4-a8a4-5ed3fda41d6c -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:13:15.793 11:43:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:f820f793-c892-4aa4-a8a4-5ed3fda41d6c -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:13:16.731 00:13:16.731 11:43:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:13:16.731 11:43:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:13:16.731 11:43:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:13:16.991 11:43:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:13:16.991 11:43:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:13:16.991 11:43:46 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:16.991 11:43:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:16.991 11:43:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:16.991 11:43:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:13:16.991 { 00:13:16.991 "cntlid": 45, 00:13:16.991 "qid": 0, 00:13:16.991 "state": "enabled", 00:13:16.991 "thread": "nvmf_tgt_poll_group_000", 00:13:16.991 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:f820f793-c892-4aa4-a8a4-5ed3fda41d6c", 00:13:16.991 "listen_address": { 00:13:16.991 "trtype": "TCP", 00:13:16.991 "adrfam": "IPv4", 00:13:16.991 "traddr": "10.0.0.3", 00:13:16.991 "trsvcid": "4420" 00:13:16.991 }, 00:13:16.991 "peer_address": { 00:13:16.991 "trtype": "TCP", 00:13:16.991 "adrfam": "IPv4", 00:13:16.991 "traddr": "10.0.0.1", 00:13:16.991 "trsvcid": "37306" 00:13:16.991 }, 00:13:16.991 "auth": { 00:13:16.991 "state": "completed", 00:13:16.991 "digest": "sha256", 00:13:16.991 "dhgroup": "ffdhe8192" 00:13:16.991 } 00:13:16.991 } 00:13:16.991 ]' 00:13:16.991 11:43:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:13:16.991 11:43:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:13:16.991 11:43:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:13:16.991 11:43:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:13:16.991 11:43:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:13:16.991 11:43:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:13:16.991 11:43:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:13:16.991 11:43:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:13:17.251 11:43:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:ZDc0MDI5OTk2MzZlOGRhODljM2I0OGVhMDEwOTIwNjNjY2VkZmYxODI0YjViNDBi899zgg==: --dhchap-ctrl-secret DHHC-1:01:Y2Q0NDk1OGM5NThhODQ5OWRiN2MzYjhlYTczN2QxMGZ97e95: 00:13:17.251 11:43:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:f820f793-c892-4aa4-a8a4-5ed3fda41d6c --hostid f820f793-c892-4aa4-a8a4-5ed3fda41d6c -l 0 --dhchap-secret DHHC-1:02:ZDc0MDI5OTk2MzZlOGRhODljM2I0OGVhMDEwOTIwNjNjY2VkZmYxODI0YjViNDBi899zgg==: --dhchap-ctrl-secret DHHC-1:01:Y2Q0NDk1OGM5NThhODQ5OWRiN2MzYjhlYTczN2QxMGZ97e95: 00:13:18.188 11:43:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:13:18.188 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:13:18.188 11:43:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:f820f793-c892-4aa4-a8a4-5ed3fda41d6c 00:13:18.189 11:43:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 
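After the bdev-level check the controller is detached and the same handshake is repeated with the kernel initiator, passing the expanded DHHC-1 secrets directly on the nvme-cli command line, before the host entry is removed so the next key starts clean. A sketch of that teardown path, with placeholder secrets instead of the literal DHHC-1 strings from the log and the same target-socket assumption as above:

# host side: drop the SPDK-attached controller
scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0
# kernel initiator: connect once with the literal secrets, then disconnect
nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 \
    -q "$hostnqn" --hostid "$hostid" -l 0 \
    --dhchap-secret "DHHC-1:01:<host secret>" --dhchap-ctrl-secret "DHHC-1:02:<controller secret>"
nvme disconnect -n nqn.2024-03.io.spdk:cnode0
# target side: revoke the host entry before the next key index is exercised
scripts/rpc.py nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 "$hostnqn"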
00:13:18.189 11:43:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:18.189 11:43:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:18.189 11:43:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:13:18.189 11:43:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:13:18.189 11:43:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:13:18.447 11:43:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe8192 3 00:13:18.447 11:43:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:13:18.447 11:43:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:13:18.447 11:43:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:13:18.447 11:43:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:13:18.447 11:43:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:13:18.447 11:43:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:f820f793-c892-4aa4-a8a4-5ed3fda41d6c --dhchap-key key3 00:13:18.447 11:43:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:18.447 11:43:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:18.447 11:43:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:18.447 11:43:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:13:18.447 11:43:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:f820f793-c892-4aa4-a8a4-5ed3fda41d6c -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:13:18.447 11:43:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:f820f793-c892-4aa4-a8a4-5ed3fda41d6c -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:13:19.015 00:13:19.015 11:43:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:13:19.015 11:43:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:13:19.015 11:43:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:13:19.276 11:43:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:13:19.276 11:43:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:13:19.276 
11:43:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:19.276 11:43:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:19.276 11:43:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:19.276 11:43:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:13:19.276 { 00:13:19.276 "cntlid": 47, 00:13:19.276 "qid": 0, 00:13:19.276 "state": "enabled", 00:13:19.276 "thread": "nvmf_tgt_poll_group_000", 00:13:19.276 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:f820f793-c892-4aa4-a8a4-5ed3fda41d6c", 00:13:19.276 "listen_address": { 00:13:19.276 "trtype": "TCP", 00:13:19.276 "adrfam": "IPv4", 00:13:19.276 "traddr": "10.0.0.3", 00:13:19.276 "trsvcid": "4420" 00:13:19.276 }, 00:13:19.276 "peer_address": { 00:13:19.276 "trtype": "TCP", 00:13:19.276 "adrfam": "IPv4", 00:13:19.276 "traddr": "10.0.0.1", 00:13:19.276 "trsvcid": "37334" 00:13:19.276 }, 00:13:19.276 "auth": { 00:13:19.276 "state": "completed", 00:13:19.276 "digest": "sha256", 00:13:19.276 "dhgroup": "ffdhe8192" 00:13:19.276 } 00:13:19.276 } 00:13:19.276 ]' 00:13:19.276 11:43:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:13:19.276 11:43:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:13:19.276 11:43:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:13:19.536 11:43:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:13:19.536 11:43:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:13:19.536 11:43:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:13:19.536 11:43:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:13:19.536 11:43:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:13:19.796 11:43:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:ODlhMjcwNTk5OTE2MzM1NzA0MzY3MGI2Mzc1MzI1N2EyODZlZDZjYTJmODllODRhNTE3MzIzYTk3MDU4ODhhNgTRqT0=: 00:13:19.796 11:43:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:f820f793-c892-4aa4-a8a4-5ed3fda41d6c --hostid f820f793-c892-4aa4-a8a4-5ed3fda41d6c -l 0 --dhchap-secret DHHC-1:03:ODlhMjcwNTk5OTE2MzM1NzA0MzY3MGI2Mzc1MzI1N2EyODZlZDZjYTJmODllODRhNTE3MzIzYTk3MDU4ODhhNgTRqT0=: 00:13:20.732 11:43:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:13:20.732 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:13:20.732 11:43:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:f820f793-c892-4aa4-a8a4-5ed3fda41d6c 00:13:20.732 11:43:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:20.732 11:43:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 
00:13:20.732 11:43:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:20.732 11:43:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@118 -- # for digest in "${digests[@]}" 00:13:20.732 11:43:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:13:20.732 11:43:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:13:20.732 11:43:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:13:20.732 11:43:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:13:20.991 11:43:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 null 0 00:13:20.991 11:43:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:13:20.991 11:43:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:13:20.991 11:43:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:13:20.991 11:43:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:13:20.991 11:43:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:13:20.991 11:43:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:f820f793-c892-4aa4-a8a4-5ed3fda41d6c --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:13:20.991 11:43:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:20.991 11:43:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:20.991 11:43:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:20.991 11:43:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:13:20.991 11:43:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:f820f793-c892-4aa4-a8a4-5ed3fda41d6c -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:13:20.991 11:43:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:f820f793-c892-4aa4-a8a4-5ed3fda41d6c -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:13:21.249 00:13:21.249 11:43:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:13:21.249 11:43:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:13:21.249 11:43:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:13:21.507 11:43:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:13:21.507 11:43:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:13:21.507 11:43:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:21.507 11:43:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:21.507 11:43:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:21.507 11:43:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:13:21.507 { 00:13:21.507 "cntlid": 49, 00:13:21.507 "qid": 0, 00:13:21.507 "state": "enabled", 00:13:21.507 "thread": "nvmf_tgt_poll_group_000", 00:13:21.507 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:f820f793-c892-4aa4-a8a4-5ed3fda41d6c", 00:13:21.507 "listen_address": { 00:13:21.507 "trtype": "TCP", 00:13:21.507 "adrfam": "IPv4", 00:13:21.507 "traddr": "10.0.0.3", 00:13:21.507 "trsvcid": "4420" 00:13:21.507 }, 00:13:21.507 "peer_address": { 00:13:21.507 "trtype": "TCP", 00:13:21.507 "adrfam": "IPv4", 00:13:21.507 "traddr": "10.0.0.1", 00:13:21.508 "trsvcid": "56854" 00:13:21.508 }, 00:13:21.508 "auth": { 00:13:21.508 "state": "completed", 00:13:21.508 "digest": "sha384", 00:13:21.508 "dhgroup": "null" 00:13:21.508 } 00:13:21.508 } 00:13:21.508 ]' 00:13:21.508 11:43:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:13:21.766 11:43:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:13:21.766 11:43:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:13:21.766 11:43:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:13:21.766 11:43:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:13:21.766 11:43:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:13:21.766 11:43:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:13:21.766 11:43:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:13:22.024 11:43:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:MGUzOTZhM2U3Y2FkNmNjYjU1YmU5MWQ0ZjQzZWRiM2M3MWY2YzcwY2M3MzMwNjdiPeXDPw==: --dhchap-ctrl-secret DHHC-1:03:NGM0MGM1ZGU2ODA1ZWRjYjdlYzA4YmRiMTk1MTBmMWRmYmY3Y2VhMGYxMzJlOWZhMzM5ZDdmNTM1NzkzYTA3Ze5j/1s=: 00:13:22.024 11:43:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:f820f793-c892-4aa4-a8a4-5ed3fda41d6c --hostid f820f793-c892-4aa4-a8a4-5ed3fda41d6c -l 0 --dhchap-secret DHHC-1:00:MGUzOTZhM2U3Y2FkNmNjYjU1YmU5MWQ0ZjQzZWRiM2M3MWY2YzcwY2M3MzMwNjdiPeXDPw==: --dhchap-ctrl-secret DHHC-1:03:NGM0MGM1ZGU2ODA1ZWRjYjdlYzA4YmRiMTk1MTBmMWRmYmY3Y2VhMGYxMzJlOWZhMzM5ZDdmNTM1NzkzYTA3Ze5j/1s=: 00:13:22.591 11:43:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:13:22.591 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:13:22.591 11:43:52 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:f820f793-c892-4aa4-a8a4-5ed3fda41d6c 00:13:22.591 11:43:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:22.591 11:43:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:22.850 11:43:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:22.850 11:43:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:13:22.850 11:43:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:13:22.850 11:43:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:13:23.109 11:43:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 null 1 00:13:23.109 11:43:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:13:23.109 11:43:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:13:23.109 11:43:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:13:23.109 11:43:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:13:23.109 11:43:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:13:23.109 11:43:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:f820f793-c892-4aa4-a8a4-5ed3fda41d6c --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:13:23.109 11:43:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:23.109 11:43:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:23.109 11:43:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:23.109 11:43:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:13:23.109 11:43:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:f820f793-c892-4aa4-a8a4-5ed3fda41d6c -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:13:23.109 11:43:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:f820f793-c892-4aa4-a8a4-5ed3fda41d6c -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:13:23.368 00:13:23.368 11:43:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:13:23.368 11:43:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 
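The target/auth.sh@118-@121 trace lines show why the same RPC pattern keeps repeating with different parameters: the script iterates every digest, every DH group, and every key index, reconfiguring the host-side bdev_nvme options before each connect_authenticate call (at this point the run has moved from sha256/ffdhe8192 to sha384 with the "null" group, i.e. no DH exchange). A sketch of how that driver loop presumably looks, reconstructed from the trace; the names digests, dhgroups, keys, hostrpc and connect_authenticate all appear in the log, but the exact loop body is an assumption:

for digest in "${digests[@]}"; do
    for dhgroup in "${dhgroups[@]}"; do
        for keyid in "${!keys[@]}"; do
            # reconfigure the host before every attempt, then run one authenticated connect
            hostrpc bdev_nvme_set_options --dhchap-digests "$digest" --dhchap-dhgroups "$dhgroup"
            connect_authenticate "$digest" "$dhgroup" "$keyid"
        done
    done
done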
00:13:23.368 11:43:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:13:23.628 11:43:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:13:23.628 11:43:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:13:23.628 11:43:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:23.628 11:43:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:23.628 11:43:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:23.628 11:43:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:13:23.628 { 00:13:23.628 "cntlid": 51, 00:13:23.628 "qid": 0, 00:13:23.628 "state": "enabled", 00:13:23.628 "thread": "nvmf_tgt_poll_group_000", 00:13:23.628 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:f820f793-c892-4aa4-a8a4-5ed3fda41d6c", 00:13:23.628 "listen_address": { 00:13:23.628 "trtype": "TCP", 00:13:23.628 "adrfam": "IPv4", 00:13:23.628 "traddr": "10.0.0.3", 00:13:23.628 "trsvcid": "4420" 00:13:23.628 }, 00:13:23.628 "peer_address": { 00:13:23.628 "trtype": "TCP", 00:13:23.628 "adrfam": "IPv4", 00:13:23.628 "traddr": "10.0.0.1", 00:13:23.628 "trsvcid": "56882" 00:13:23.628 }, 00:13:23.628 "auth": { 00:13:23.628 "state": "completed", 00:13:23.628 "digest": "sha384", 00:13:23.628 "dhgroup": "null" 00:13:23.628 } 00:13:23.628 } 00:13:23.628 ]' 00:13:23.628 11:43:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:13:23.628 11:43:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:13:23.628 11:43:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:13:23.887 11:43:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:13:23.887 11:43:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:13:23.887 11:43:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:13:23.887 11:43:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:13:23.887 11:43:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:13:24.147 11:43:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:MGMwMWVjMzNiYTU1NmZmODEzYmM3YjMzYWExMWQzMGYhYOlP: --dhchap-ctrl-secret DHHC-1:02:MDAzYzhlYTVlNDRhZjJhOGY4YjFjMzdjMjM2ZTBjYjI4NzUyNWU1YzQ3ZmQ3MDM2ej39yA==: 00:13:24.147 11:43:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:f820f793-c892-4aa4-a8a4-5ed3fda41d6c --hostid f820f793-c892-4aa4-a8a4-5ed3fda41d6c -l 0 --dhchap-secret DHHC-1:01:MGMwMWVjMzNiYTU1NmZmODEzYmM3YjMzYWExMWQzMGYhYOlP: --dhchap-ctrl-secret DHHC-1:02:MDAzYzhlYTVlNDRhZjJhOGY4YjFjMzdjMjM2ZTBjYjI4NzUyNWU1YzQ3ZmQ3MDM2ej39yA==: 00:13:24.716 11:43:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:13:24.716 
NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:13:24.716 11:43:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:f820f793-c892-4aa4-a8a4-5ed3fda41d6c 00:13:24.716 11:43:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:24.716 11:43:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:24.716 11:43:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:24.716 11:43:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:13:24.716 11:43:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:13:24.716 11:43:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:13:24.975 11:43:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 null 2 00:13:24.975 11:43:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:13:24.975 11:43:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:13:24.975 11:43:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:13:24.975 11:43:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:13:24.975 11:43:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:13:24.975 11:43:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:f820f793-c892-4aa4-a8a4-5ed3fda41d6c --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:13:24.975 11:43:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:24.975 11:43:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:24.975 11:43:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:24.975 11:43:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:13:24.975 11:43:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:f820f793-c892-4aa4-a8a4-5ed3fda41d6c -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:13:24.975 11:43:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:f820f793-c892-4aa4-a8a4-5ed3fda41d6c -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:13:25.234 00:13:25.234 11:43:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:13:25.234 11:43:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r 
'.[].name' 00:13:25.234 11:43:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:13:25.492 11:43:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:13:25.492 11:43:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:13:25.492 11:43:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:25.492 11:43:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:25.492 11:43:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:25.492 11:43:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:13:25.492 { 00:13:25.492 "cntlid": 53, 00:13:25.492 "qid": 0, 00:13:25.492 "state": "enabled", 00:13:25.492 "thread": "nvmf_tgt_poll_group_000", 00:13:25.492 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:f820f793-c892-4aa4-a8a4-5ed3fda41d6c", 00:13:25.492 "listen_address": { 00:13:25.492 "trtype": "TCP", 00:13:25.492 "adrfam": "IPv4", 00:13:25.492 "traddr": "10.0.0.3", 00:13:25.492 "trsvcid": "4420" 00:13:25.492 }, 00:13:25.492 "peer_address": { 00:13:25.492 "trtype": "TCP", 00:13:25.492 "adrfam": "IPv4", 00:13:25.492 "traddr": "10.0.0.1", 00:13:25.492 "trsvcid": "56912" 00:13:25.492 }, 00:13:25.492 "auth": { 00:13:25.492 "state": "completed", 00:13:25.492 "digest": "sha384", 00:13:25.492 "dhgroup": "null" 00:13:25.492 } 00:13:25.492 } 00:13:25.492 ]' 00:13:25.492 11:43:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:13:25.751 11:43:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:13:25.752 11:43:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:13:25.752 11:43:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:13:25.752 11:43:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:13:25.752 11:43:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:13:25.752 11:43:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:13:25.752 11:43:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:13:26.011 11:43:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:ZDc0MDI5OTk2MzZlOGRhODljM2I0OGVhMDEwOTIwNjNjY2VkZmYxODI0YjViNDBi899zgg==: --dhchap-ctrl-secret DHHC-1:01:Y2Q0NDk1OGM5NThhODQ5OWRiN2MzYjhlYTczN2QxMGZ97e95: 00:13:26.011 11:43:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:f820f793-c892-4aa4-a8a4-5ed3fda41d6c --hostid f820f793-c892-4aa4-a8a4-5ed3fda41d6c -l 0 --dhchap-secret DHHC-1:02:ZDc0MDI5OTk2MzZlOGRhODljM2I0OGVhMDEwOTIwNjNjY2VkZmYxODI0YjViNDBi899zgg==: --dhchap-ctrl-secret DHHC-1:01:Y2Q0NDk1OGM5NThhODQ5OWRiN2MzYjhlYTczN2QxMGZ97e95: 00:13:26.722 11:43:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:13:26.722 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:13:26.722 11:43:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:f820f793-c892-4aa4-a8a4-5ed3fda41d6c 00:13:26.722 11:43:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:26.722 11:43:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:26.722 11:43:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:26.722 11:43:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:13:26.722 11:43:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:13:26.722 11:43:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:13:26.982 11:43:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 null 3 00:13:26.982 11:43:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:13:26.982 11:43:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:13:26.982 11:43:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:13:26.982 11:43:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:13:26.982 11:43:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:13:26.982 11:43:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:f820f793-c892-4aa4-a8a4-5ed3fda41d6c --dhchap-key key3 00:13:26.982 11:43:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:26.982 11:43:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:26.982 11:43:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:26.982 11:43:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:13:26.982 11:43:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:f820f793-c892-4aa4-a8a4-5ed3fda41d6c -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:13:26.982 11:43:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:f820f793-c892-4aa4-a8a4-5ed3fda41d6c -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:13:27.243 00:13:27.243 11:43:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:13:27.243 11:43:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 
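The key3 passes differ from key0-key2 in one respect: there is no ckey3, so nvmf_subsystem_add_host and bdev_nvme_attach_controller are called with --dhchap-key key3 only and the authentication is unidirectional (the host proves itself, the controller is not challenged back). The target/auth.sh@68 trace shows how the controller-key arguments are made optional; a sketch of that pattern, where $3 is the key index passed to connect_authenticate and the final "${ckey[@]}" expansion is assumed from the @70 call:

# only emit --dhchap-ctrlr-key when a controller key exists for this key index
ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"})
rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 "$hostnqn" \
    --dhchap-key "key$3" "${ckey[@]}"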
00:13:27.243 11:43:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:13:27.502 11:43:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:13:27.502 11:43:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:13:27.502 11:43:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:27.502 11:43:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:27.502 11:43:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:27.502 11:43:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:13:27.502 { 00:13:27.502 "cntlid": 55, 00:13:27.502 "qid": 0, 00:13:27.502 "state": "enabled", 00:13:27.502 "thread": "nvmf_tgt_poll_group_000", 00:13:27.502 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:f820f793-c892-4aa4-a8a4-5ed3fda41d6c", 00:13:27.502 "listen_address": { 00:13:27.502 "trtype": "TCP", 00:13:27.502 "adrfam": "IPv4", 00:13:27.502 "traddr": "10.0.0.3", 00:13:27.502 "trsvcid": "4420" 00:13:27.502 }, 00:13:27.502 "peer_address": { 00:13:27.502 "trtype": "TCP", 00:13:27.502 "adrfam": "IPv4", 00:13:27.502 "traddr": "10.0.0.1", 00:13:27.502 "trsvcid": "56950" 00:13:27.502 }, 00:13:27.502 "auth": { 00:13:27.502 "state": "completed", 00:13:27.502 "digest": "sha384", 00:13:27.502 "dhgroup": "null" 00:13:27.502 } 00:13:27.502 } 00:13:27.502 ]' 00:13:27.502 11:43:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:13:27.761 11:43:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:13:27.761 11:43:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:13:27.761 11:43:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:13:27.761 11:43:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:13:27.761 11:43:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:13:27.761 11:43:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:13:27.761 11:43:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:13:28.021 11:43:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:ODlhMjcwNTk5OTE2MzM1NzA0MzY3MGI2Mzc1MzI1N2EyODZlZDZjYTJmODllODRhNTE3MzIzYTk3MDU4ODhhNgTRqT0=: 00:13:28.021 11:43:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:f820f793-c892-4aa4-a8a4-5ed3fda41d6c --hostid f820f793-c892-4aa4-a8a4-5ed3fda41d6c -l 0 --dhchap-secret DHHC-1:03:ODlhMjcwNTk5OTE2MzM1NzA0MzY3MGI2Mzc1MzI1N2EyODZlZDZjYTJmODllODRhNTE3MzIzYTk3MDU4ODhhNgTRqT0=: 00:13:28.589 11:43:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:13:28.589 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 
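For readers scanning these qpair dumps, the fields that matter are cntlid (which changes with every controller the test creates), peer_address (the initiator-side ephemeral port) and the auth block (negotiated digest, DH group and final state). A small jq one-liner, not part of auth.sh and under the same target-socket assumption as above, that condenses the same nvmf_subsystem_get_qpairs output into one line per qpair:

scripts/rpc.py nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 | jq -r \
    '.[] | "\(.cntlid) \(.hostnqn) \(.peer_address.traddr):\(.peer_address.trsvcid) \(.auth.digest)/\(.auth.dhgroup) \(.auth.state)"'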
00:13:28.589 11:43:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:f820f793-c892-4aa4-a8a4-5ed3fda41d6c 00:13:28.589 11:43:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:28.589 11:43:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:28.848 11:43:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:28.848 11:43:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:13:28.848 11:43:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:13:28.848 11:43:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:13:28.848 11:43:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:13:29.107 11:43:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe2048 0 00:13:29.107 11:43:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:13:29.107 11:43:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:13:29.107 11:43:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:13:29.107 11:43:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:13:29.107 11:43:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:13:29.107 11:43:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:f820f793-c892-4aa4-a8a4-5ed3fda41d6c --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:13:29.107 11:43:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:29.107 11:43:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:29.107 11:43:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:29.107 11:43:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:13:29.107 11:43:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:f820f793-c892-4aa4-a8a4-5ed3fda41d6c -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:13:29.107 11:43:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:f820f793-c892-4aa4-a8a4-5ed3fda41d6c -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:13:29.365 00:13:29.365 11:43:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:13:29.365 11:43:59 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:13:29.365 11:43:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:13:29.624 11:43:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:13:29.624 11:43:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:13:29.624 11:43:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:29.624 11:43:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:29.624 11:43:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:29.624 11:43:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:13:29.624 { 00:13:29.624 "cntlid": 57, 00:13:29.624 "qid": 0, 00:13:29.624 "state": "enabled", 00:13:29.624 "thread": "nvmf_tgt_poll_group_000", 00:13:29.624 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:f820f793-c892-4aa4-a8a4-5ed3fda41d6c", 00:13:29.624 "listen_address": { 00:13:29.624 "trtype": "TCP", 00:13:29.624 "adrfam": "IPv4", 00:13:29.624 "traddr": "10.0.0.3", 00:13:29.624 "trsvcid": "4420" 00:13:29.624 }, 00:13:29.624 "peer_address": { 00:13:29.624 "trtype": "TCP", 00:13:29.624 "adrfam": "IPv4", 00:13:29.624 "traddr": "10.0.0.1", 00:13:29.624 "trsvcid": "56966" 00:13:29.624 }, 00:13:29.624 "auth": { 00:13:29.624 "state": "completed", 00:13:29.624 "digest": "sha384", 00:13:29.624 "dhgroup": "ffdhe2048" 00:13:29.624 } 00:13:29.624 } 00:13:29.624 ]' 00:13:29.624 11:43:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:13:29.624 11:43:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:13:29.624 11:43:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:13:29.883 11:43:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:13:29.883 11:43:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:13:29.883 11:43:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:13:29.883 11:43:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:13:29.883 11:43:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:13:30.142 11:44:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:MGUzOTZhM2U3Y2FkNmNjYjU1YmU5MWQ0ZjQzZWRiM2M3MWY2YzcwY2M3MzMwNjdiPeXDPw==: --dhchap-ctrl-secret DHHC-1:03:NGM0MGM1ZGU2ODA1ZWRjYjdlYzA4YmRiMTk1MTBmMWRmYmY3Y2VhMGYxMzJlOWZhMzM5ZDdmNTM1NzkzYTA3Ze5j/1s=: 00:13:30.142 11:44:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:f820f793-c892-4aa4-a8a4-5ed3fda41d6c --hostid f820f793-c892-4aa4-a8a4-5ed3fda41d6c -l 0 --dhchap-secret DHHC-1:00:MGUzOTZhM2U3Y2FkNmNjYjU1YmU5MWQ0ZjQzZWRiM2M3MWY2YzcwY2M3MzMwNjdiPeXDPw==: 
--dhchap-ctrl-secret DHHC-1:03:NGM0MGM1ZGU2ODA1ZWRjYjdlYzA4YmRiMTk1MTBmMWRmYmY3Y2VhMGYxMzJlOWZhMzM5ZDdmNTM1NzkzYTA3Ze5j/1s=: 00:13:30.710 11:44:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:13:30.710 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:13:30.710 11:44:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:f820f793-c892-4aa4-a8a4-5ed3fda41d6c 00:13:30.710 11:44:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:30.710 11:44:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:30.710 11:44:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:30.710 11:44:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:13:30.710 11:44:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:13:30.710 11:44:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:13:30.969 11:44:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe2048 1 00:13:30.969 11:44:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:13:30.969 11:44:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:13:30.969 11:44:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:13:30.969 11:44:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:13:30.969 11:44:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:13:30.969 11:44:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:f820f793-c892-4aa4-a8a4-5ed3fda41d6c --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:13:30.969 11:44:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:30.969 11:44:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:30.969 11:44:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:30.969 11:44:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:13:30.969 11:44:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:f820f793-c892-4aa4-a8a4-5ed3fda41d6c -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:13:30.969 11:44:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:f820f793-c892-4aa4-a8a4-5ed3fda41d6c -n 
nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:13:31.537 00:13:31.537 11:44:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:13:31.537 11:44:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:13:31.537 11:44:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:13:31.797 11:44:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:13:31.797 11:44:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:13:31.797 11:44:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:31.797 11:44:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:31.797 11:44:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:31.797 11:44:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:13:31.797 { 00:13:31.797 "cntlid": 59, 00:13:31.797 "qid": 0, 00:13:31.797 "state": "enabled", 00:13:31.797 "thread": "nvmf_tgt_poll_group_000", 00:13:31.797 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:f820f793-c892-4aa4-a8a4-5ed3fda41d6c", 00:13:31.797 "listen_address": { 00:13:31.797 "trtype": "TCP", 00:13:31.797 "adrfam": "IPv4", 00:13:31.797 "traddr": "10.0.0.3", 00:13:31.797 "trsvcid": "4420" 00:13:31.797 }, 00:13:31.797 "peer_address": { 00:13:31.797 "trtype": "TCP", 00:13:31.797 "adrfam": "IPv4", 00:13:31.797 "traddr": "10.0.0.1", 00:13:31.797 "trsvcid": "36042" 00:13:31.797 }, 00:13:31.797 "auth": { 00:13:31.797 "state": "completed", 00:13:31.797 "digest": "sha384", 00:13:31.797 "dhgroup": "ffdhe2048" 00:13:31.797 } 00:13:31.797 } 00:13:31.797 ]' 00:13:31.797 11:44:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:13:31.797 11:44:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:13:31.797 11:44:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:13:31.797 11:44:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:13:31.797 11:44:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:13:31.797 11:44:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:13:31.797 11:44:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:13:31.797 11:44:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:13:32.056 11:44:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:MGMwMWVjMzNiYTU1NmZmODEzYmM3YjMzYWExMWQzMGYhYOlP: --dhchap-ctrl-secret DHHC-1:02:MDAzYzhlYTVlNDRhZjJhOGY4YjFjMzdjMjM2ZTBjYjI4NzUyNWU1YzQ3ZmQ3MDM2ej39yA==: 00:13:32.056 11:44:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q 
nqn.2014-08.org.nvmexpress:uuid:f820f793-c892-4aa4-a8a4-5ed3fda41d6c --hostid f820f793-c892-4aa4-a8a4-5ed3fda41d6c -l 0 --dhchap-secret DHHC-1:01:MGMwMWVjMzNiYTU1NmZmODEzYmM3YjMzYWExMWQzMGYhYOlP: --dhchap-ctrl-secret DHHC-1:02:MDAzYzhlYTVlNDRhZjJhOGY4YjFjMzdjMjM2ZTBjYjI4NzUyNWU1YzQ3ZmQ3MDM2ej39yA==: 00:13:32.623 11:44:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:13:32.623 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:13:32.623 11:44:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:f820f793-c892-4aa4-a8a4-5ed3fda41d6c 00:13:32.623 11:44:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:32.623 11:44:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:32.882 11:44:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:32.882 11:44:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:13:32.882 11:44:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:13:32.882 11:44:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:13:33.144 11:44:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe2048 2 00:13:33.144 11:44:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:13:33.144 11:44:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:13:33.144 11:44:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:13:33.144 11:44:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:13:33.144 11:44:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:13:33.144 11:44:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:f820f793-c892-4aa4-a8a4-5ed3fda41d6c --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:13:33.144 11:44:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:33.144 11:44:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:33.144 11:44:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:33.144 11:44:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:13:33.144 11:44:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:f820f793-c892-4aa4-a8a4-5ed3fda41d6c -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:13:33.144 11:44:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s 
/var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:f820f793-c892-4aa4-a8a4-5ed3fda41d6c -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:13:33.413 00:13:33.413 11:44:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:13:33.413 11:44:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:13:33.413 11:44:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:13:33.672 11:44:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:13:33.672 11:44:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:13:33.672 11:44:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:33.672 11:44:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:33.672 11:44:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:33.672 11:44:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:13:33.672 { 00:13:33.672 "cntlid": 61, 00:13:33.672 "qid": 0, 00:13:33.672 "state": "enabled", 00:13:33.672 "thread": "nvmf_tgt_poll_group_000", 00:13:33.672 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:f820f793-c892-4aa4-a8a4-5ed3fda41d6c", 00:13:33.672 "listen_address": { 00:13:33.672 "trtype": "TCP", 00:13:33.672 "adrfam": "IPv4", 00:13:33.672 "traddr": "10.0.0.3", 00:13:33.672 "trsvcid": "4420" 00:13:33.672 }, 00:13:33.672 "peer_address": { 00:13:33.672 "trtype": "TCP", 00:13:33.672 "adrfam": "IPv4", 00:13:33.672 "traddr": "10.0.0.1", 00:13:33.672 "trsvcid": "36058" 00:13:33.672 }, 00:13:33.672 "auth": { 00:13:33.672 "state": "completed", 00:13:33.672 "digest": "sha384", 00:13:33.672 "dhgroup": "ffdhe2048" 00:13:33.672 } 00:13:33.672 } 00:13:33.672 ]' 00:13:33.672 11:44:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:13:33.672 11:44:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:13:33.672 11:44:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:13:33.932 11:44:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:13:33.932 11:44:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:13:33.932 11:44:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:13:33.932 11:44:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:13:33.932 11:44:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:13:34.191 11:44:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:ZDc0MDI5OTk2MzZlOGRhODljM2I0OGVhMDEwOTIwNjNjY2VkZmYxODI0YjViNDBi899zgg==: --dhchap-ctrl-secret DHHC-1:01:Y2Q0NDk1OGM5NThhODQ5OWRiN2MzYjhlYTczN2QxMGZ97e95: 00:13:34.191 11:44:04 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:f820f793-c892-4aa4-a8a4-5ed3fda41d6c --hostid f820f793-c892-4aa4-a8a4-5ed3fda41d6c -l 0 --dhchap-secret DHHC-1:02:ZDc0MDI5OTk2MzZlOGRhODljM2I0OGVhMDEwOTIwNjNjY2VkZmYxODI0YjViNDBi899zgg==: --dhchap-ctrl-secret DHHC-1:01:Y2Q0NDk1OGM5NThhODQ5OWRiN2MzYjhlYTczN2QxMGZ97e95: 00:13:34.759 11:44:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:13:34.759 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:13:34.759 11:44:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:f820f793-c892-4aa4-a8a4-5ed3fda41d6c 00:13:34.759 11:44:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:34.759 11:44:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:34.759 11:44:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:34.759 11:44:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:13:34.759 11:44:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:13:34.759 11:44:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:13:35.326 11:44:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe2048 3 00:13:35.326 11:44:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:13:35.326 11:44:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:13:35.326 11:44:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:13:35.326 11:44:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:13:35.326 11:44:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:13:35.326 11:44:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:f820f793-c892-4aa4-a8a4-5ed3fda41d6c --dhchap-key key3 00:13:35.326 11:44:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:35.326 11:44:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:35.326 11:44:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:35.326 11:44:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:13:35.326 11:44:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:f820f793-c892-4aa4-a8a4-5ed3fda41d6c -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:13:35.326 11:44:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:f820f793-c892-4aa4-a8a4-5ed3fda41d6c -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:13:35.584 00:13:35.584 11:44:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:13:35.584 11:44:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:13:35.584 11:44:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:13:35.843 11:44:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:13:35.843 11:44:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:13:35.843 11:44:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:35.843 11:44:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:35.843 11:44:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:35.843 11:44:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:13:35.843 { 00:13:35.843 "cntlid": 63, 00:13:35.843 "qid": 0, 00:13:35.843 "state": "enabled", 00:13:35.843 "thread": "nvmf_tgt_poll_group_000", 00:13:35.843 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:f820f793-c892-4aa4-a8a4-5ed3fda41d6c", 00:13:35.843 "listen_address": { 00:13:35.843 "trtype": "TCP", 00:13:35.843 "adrfam": "IPv4", 00:13:35.843 "traddr": "10.0.0.3", 00:13:35.843 "trsvcid": "4420" 00:13:35.843 }, 00:13:35.843 "peer_address": { 00:13:35.843 "trtype": "TCP", 00:13:35.843 "adrfam": "IPv4", 00:13:35.843 "traddr": "10.0.0.1", 00:13:35.843 "trsvcid": "36086" 00:13:35.843 }, 00:13:35.843 "auth": { 00:13:35.843 "state": "completed", 00:13:35.843 "digest": "sha384", 00:13:35.843 "dhgroup": "ffdhe2048" 00:13:35.843 } 00:13:35.843 } 00:13:35.843 ]' 00:13:35.843 11:44:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:13:35.843 11:44:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:13:35.843 11:44:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:13:35.843 11:44:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:13:35.843 11:44:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:13:35.843 11:44:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:13:35.843 11:44:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:13:35.843 11:44:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:13:36.411 11:44:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:ODlhMjcwNTk5OTE2MzM1NzA0MzY3MGI2Mzc1MzI1N2EyODZlZDZjYTJmODllODRhNTE3MzIzYTk3MDU4ODhhNgTRqT0=: 00:13:36.411 11:44:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:f820f793-c892-4aa4-a8a4-5ed3fda41d6c --hostid f820f793-c892-4aa4-a8a4-5ed3fda41d6c -l 0 --dhchap-secret DHHC-1:03:ODlhMjcwNTk5OTE2MzM1NzA0MzY3MGI2Mzc1MzI1N2EyODZlZDZjYTJmODllODRhNTE3MzIzYTk3MDU4ODhhNgTRqT0=: 00:13:36.979 11:44:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:13:36.979 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:13:36.979 11:44:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:f820f793-c892-4aa4-a8a4-5ed3fda41d6c 00:13:36.979 11:44:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:36.979 11:44:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:36.979 11:44:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:36.979 11:44:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:13:36.979 11:44:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:13:36.979 11:44:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:13:36.979 11:44:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:13:37.238 11:44:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe3072 0 00:13:37.238 11:44:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:13:37.238 11:44:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:13:37.238 11:44:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:13:37.238 11:44:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:13:37.238 11:44:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:13:37.238 11:44:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:f820f793-c892-4aa4-a8a4-5ed3fda41d6c --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:13:37.238 11:44:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:37.238 11:44:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:37.238 11:44:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:37.238 11:44:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:13:37.238 11:44:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:f820f793-c892-4aa4-a8a4-5ed3fda41d6c -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 
--dhchap-ctrlr-key ckey0 00:13:37.238 11:44:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:f820f793-c892-4aa4-a8a4-5ed3fda41d6c -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:13:37.496 00:13:37.496 11:44:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:13:37.496 11:44:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:13:37.496 11:44:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:13:37.755 11:44:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:13:37.755 11:44:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:13:37.755 11:44:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:37.755 11:44:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:38.014 11:44:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:38.014 11:44:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:13:38.014 { 00:13:38.014 "cntlid": 65, 00:13:38.014 "qid": 0, 00:13:38.014 "state": "enabled", 00:13:38.014 "thread": "nvmf_tgt_poll_group_000", 00:13:38.014 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:f820f793-c892-4aa4-a8a4-5ed3fda41d6c", 00:13:38.014 "listen_address": { 00:13:38.014 "trtype": "TCP", 00:13:38.014 "adrfam": "IPv4", 00:13:38.014 "traddr": "10.0.0.3", 00:13:38.014 "trsvcid": "4420" 00:13:38.014 }, 00:13:38.014 "peer_address": { 00:13:38.014 "trtype": "TCP", 00:13:38.014 "adrfam": "IPv4", 00:13:38.014 "traddr": "10.0.0.1", 00:13:38.014 "trsvcid": "36116" 00:13:38.014 }, 00:13:38.014 "auth": { 00:13:38.014 "state": "completed", 00:13:38.014 "digest": "sha384", 00:13:38.014 "dhgroup": "ffdhe3072" 00:13:38.014 } 00:13:38.014 } 00:13:38.014 ]' 00:13:38.014 11:44:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:13:38.014 11:44:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:13:38.014 11:44:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:13:38.014 11:44:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:13:38.014 11:44:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:13:38.014 11:44:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:13:38.014 11:44:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:13:38.014 11:44:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:13:38.273 11:44:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret 
DHHC-1:00:MGUzOTZhM2U3Y2FkNmNjYjU1YmU5MWQ0ZjQzZWRiM2M3MWY2YzcwY2M3MzMwNjdiPeXDPw==: --dhchap-ctrl-secret DHHC-1:03:NGM0MGM1ZGU2ODA1ZWRjYjdlYzA4YmRiMTk1MTBmMWRmYmY3Y2VhMGYxMzJlOWZhMzM5ZDdmNTM1NzkzYTA3Ze5j/1s=: 00:13:38.273 11:44:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:f820f793-c892-4aa4-a8a4-5ed3fda41d6c --hostid f820f793-c892-4aa4-a8a4-5ed3fda41d6c -l 0 --dhchap-secret DHHC-1:00:MGUzOTZhM2U3Y2FkNmNjYjU1YmU5MWQ0ZjQzZWRiM2M3MWY2YzcwY2M3MzMwNjdiPeXDPw==: --dhchap-ctrl-secret DHHC-1:03:NGM0MGM1ZGU2ODA1ZWRjYjdlYzA4YmRiMTk1MTBmMWRmYmY3Y2VhMGYxMzJlOWZhMzM5ZDdmNTM1NzkzYTA3Ze5j/1s=: 00:13:39.208 11:44:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:13:39.208 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:13:39.208 11:44:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:f820f793-c892-4aa4-a8a4-5ed3fda41d6c 00:13:39.208 11:44:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:39.208 11:44:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:39.208 11:44:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:39.208 11:44:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:13:39.208 11:44:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:13:39.208 11:44:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:13:39.208 11:44:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe3072 1 00:13:39.208 11:44:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:13:39.208 11:44:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:13:39.208 11:44:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:13:39.208 11:44:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:13:39.208 11:44:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:13:39.208 11:44:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:f820f793-c892-4aa4-a8a4-5ed3fda41d6c --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:13:39.208 11:44:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:39.208 11:44:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:39.482 11:44:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:39.482 11:44:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:13:39.482 11:44:09 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:f820f793-c892-4aa4-a8a4-5ed3fda41d6c -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:13:39.483 11:44:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:f820f793-c892-4aa4-a8a4-5ed3fda41d6c -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:13:39.741 00:13:39.741 11:44:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:13:39.741 11:44:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:13:39.741 11:44:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:13:39.999 11:44:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:13:39.999 11:44:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:13:39.999 11:44:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:39.999 11:44:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:39.999 11:44:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:39.999 11:44:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:13:39.999 { 00:13:39.999 "cntlid": 67, 00:13:39.999 "qid": 0, 00:13:39.999 "state": "enabled", 00:13:39.999 "thread": "nvmf_tgt_poll_group_000", 00:13:39.999 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:f820f793-c892-4aa4-a8a4-5ed3fda41d6c", 00:13:39.999 "listen_address": { 00:13:39.999 "trtype": "TCP", 00:13:39.999 "adrfam": "IPv4", 00:13:39.999 "traddr": "10.0.0.3", 00:13:39.999 "trsvcid": "4420" 00:13:39.999 }, 00:13:39.999 "peer_address": { 00:13:39.999 "trtype": "TCP", 00:13:39.999 "adrfam": "IPv4", 00:13:39.999 "traddr": "10.0.0.1", 00:13:39.999 "trsvcid": "36126" 00:13:39.999 }, 00:13:39.999 "auth": { 00:13:39.999 "state": "completed", 00:13:39.999 "digest": "sha384", 00:13:39.999 "dhgroup": "ffdhe3072" 00:13:39.999 } 00:13:39.999 } 00:13:39.999 ]' 00:13:39.999 11:44:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:13:39.999 11:44:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:13:39.999 11:44:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:13:40.258 11:44:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:13:40.258 11:44:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:13:40.258 11:44:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:13:40.258 11:44:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:13:40.258 11:44:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:13:40.516 11:44:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:MGMwMWVjMzNiYTU1NmZmODEzYmM3YjMzYWExMWQzMGYhYOlP: --dhchap-ctrl-secret DHHC-1:02:MDAzYzhlYTVlNDRhZjJhOGY4YjFjMzdjMjM2ZTBjYjI4NzUyNWU1YzQ3ZmQ3MDM2ej39yA==: 00:13:40.516 11:44:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:f820f793-c892-4aa4-a8a4-5ed3fda41d6c --hostid f820f793-c892-4aa4-a8a4-5ed3fda41d6c -l 0 --dhchap-secret DHHC-1:01:MGMwMWVjMzNiYTU1NmZmODEzYmM3YjMzYWExMWQzMGYhYOlP: --dhchap-ctrl-secret DHHC-1:02:MDAzYzhlYTVlNDRhZjJhOGY4YjFjMzdjMjM2ZTBjYjI4NzUyNWU1YzQ3ZmQ3MDM2ej39yA==: 00:13:41.082 11:44:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:13:41.082 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:13:41.082 11:44:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:f820f793-c892-4aa4-a8a4-5ed3fda41d6c 00:13:41.082 11:44:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:41.082 11:44:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:41.082 11:44:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:41.082 11:44:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:13:41.082 11:44:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:13:41.082 11:44:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:13:41.650 11:44:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe3072 2 00:13:41.650 11:44:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:13:41.650 11:44:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:13:41.650 11:44:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:13:41.650 11:44:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:13:41.650 11:44:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:13:41.650 11:44:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:f820f793-c892-4aa4-a8a4-5ed3fda41d6c --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:13:41.650 11:44:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:41.650 11:44:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:41.650 11:44:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:41.650 11:44:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:13:41.650 11:44:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:f820f793-c892-4aa4-a8a4-5ed3fda41d6c -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:13:41.650 11:44:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:f820f793-c892-4aa4-a8a4-5ed3fda41d6c -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:13:41.909 00:13:41.909 11:44:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:13:41.909 11:44:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:13:41.909 11:44:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:13:42.167 11:44:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:13:42.167 11:44:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:13:42.167 11:44:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:42.167 11:44:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:42.167 11:44:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:42.167 11:44:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:13:42.167 { 00:13:42.167 "cntlid": 69, 00:13:42.167 "qid": 0, 00:13:42.167 "state": "enabled", 00:13:42.167 "thread": "nvmf_tgt_poll_group_000", 00:13:42.167 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:f820f793-c892-4aa4-a8a4-5ed3fda41d6c", 00:13:42.167 "listen_address": { 00:13:42.167 "trtype": "TCP", 00:13:42.167 "adrfam": "IPv4", 00:13:42.167 "traddr": "10.0.0.3", 00:13:42.167 "trsvcid": "4420" 00:13:42.167 }, 00:13:42.167 "peer_address": { 00:13:42.167 "trtype": "TCP", 00:13:42.167 "adrfam": "IPv4", 00:13:42.167 "traddr": "10.0.0.1", 00:13:42.167 "trsvcid": "53750" 00:13:42.167 }, 00:13:42.167 "auth": { 00:13:42.167 "state": "completed", 00:13:42.167 "digest": "sha384", 00:13:42.167 "dhgroup": "ffdhe3072" 00:13:42.167 } 00:13:42.167 } 00:13:42.167 ]' 00:13:42.167 11:44:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:13:42.167 11:44:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:13:42.167 11:44:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:13:42.167 11:44:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:13:42.167 11:44:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:13:42.425 11:44:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:13:42.425 11:44:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc 
bdev_nvme_detach_controller nvme0 00:13:42.425 11:44:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:13:42.684 11:44:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:ZDc0MDI5OTk2MzZlOGRhODljM2I0OGVhMDEwOTIwNjNjY2VkZmYxODI0YjViNDBi899zgg==: --dhchap-ctrl-secret DHHC-1:01:Y2Q0NDk1OGM5NThhODQ5OWRiN2MzYjhlYTczN2QxMGZ97e95: 00:13:42.684 11:44:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:f820f793-c892-4aa4-a8a4-5ed3fda41d6c --hostid f820f793-c892-4aa4-a8a4-5ed3fda41d6c -l 0 --dhchap-secret DHHC-1:02:ZDc0MDI5OTk2MzZlOGRhODljM2I0OGVhMDEwOTIwNjNjY2VkZmYxODI0YjViNDBi899zgg==: --dhchap-ctrl-secret DHHC-1:01:Y2Q0NDk1OGM5NThhODQ5OWRiN2MzYjhlYTczN2QxMGZ97e95: 00:13:43.619 11:44:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:13:43.619 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:13:43.619 11:44:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:f820f793-c892-4aa4-a8a4-5ed3fda41d6c 00:13:43.619 11:44:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:43.619 11:44:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:43.619 11:44:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:43.619 11:44:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:13:43.619 11:44:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:13:43.619 11:44:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:13:43.878 11:44:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe3072 3 00:13:43.878 11:44:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:13:43.878 11:44:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:13:43.878 11:44:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:13:43.878 11:44:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:13:43.878 11:44:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:13:43.878 11:44:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:f820f793-c892-4aa4-a8a4-5ed3fda41d6c --dhchap-key key3 00:13:43.878 11:44:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:43.878 11:44:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:43.878 11:44:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:43.878 11:44:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:13:43.878 11:44:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:f820f793-c892-4aa4-a8a4-5ed3fda41d6c -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:13:43.878 11:44:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:f820f793-c892-4aa4-a8a4-5ed3fda41d6c -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:13:44.136 00:13:44.136 11:44:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:13:44.136 11:44:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:13:44.136 11:44:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:13:44.394 11:44:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:13:44.394 11:44:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:13:44.394 11:44:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:44.394 11:44:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:44.394 11:44:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:44.394 11:44:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:13:44.394 { 00:13:44.394 "cntlid": 71, 00:13:44.394 "qid": 0, 00:13:44.394 "state": "enabled", 00:13:44.394 "thread": "nvmf_tgt_poll_group_000", 00:13:44.394 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:f820f793-c892-4aa4-a8a4-5ed3fda41d6c", 00:13:44.394 "listen_address": { 00:13:44.394 "trtype": "TCP", 00:13:44.394 "adrfam": "IPv4", 00:13:44.394 "traddr": "10.0.0.3", 00:13:44.394 "trsvcid": "4420" 00:13:44.394 }, 00:13:44.394 "peer_address": { 00:13:44.394 "trtype": "TCP", 00:13:44.394 "adrfam": "IPv4", 00:13:44.394 "traddr": "10.0.0.1", 00:13:44.394 "trsvcid": "53772" 00:13:44.394 }, 00:13:44.394 "auth": { 00:13:44.394 "state": "completed", 00:13:44.394 "digest": "sha384", 00:13:44.394 "dhgroup": "ffdhe3072" 00:13:44.394 } 00:13:44.394 } 00:13:44.394 ]' 00:13:44.394 11:44:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:13:44.652 11:44:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:13:44.652 11:44:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:13:44.652 11:44:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:13:44.652 11:44:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:13:44.652 11:44:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:13:44.652 11:44:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:13:44.652 11:44:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:13:44.911 11:44:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:ODlhMjcwNTk5OTE2MzM1NzA0MzY3MGI2Mzc1MzI1N2EyODZlZDZjYTJmODllODRhNTE3MzIzYTk3MDU4ODhhNgTRqT0=: 00:13:44.911 11:44:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:f820f793-c892-4aa4-a8a4-5ed3fda41d6c --hostid f820f793-c892-4aa4-a8a4-5ed3fda41d6c -l 0 --dhchap-secret DHHC-1:03:ODlhMjcwNTk5OTE2MzM1NzA0MzY3MGI2Mzc1MzI1N2EyODZlZDZjYTJmODllODRhNTE3MzIzYTk3MDU4ODhhNgTRqT0=: 00:13:45.938 11:44:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:13:45.938 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:13:45.938 11:44:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:f820f793-c892-4aa4-a8a4-5ed3fda41d6c 00:13:45.938 11:44:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:45.938 11:44:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:45.938 11:44:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:45.938 11:44:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:13:45.938 11:44:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:13:45.938 11:44:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:13:45.938 11:44:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:13:45.938 11:44:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe4096 0 00:13:45.938 11:44:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:13:45.938 11:44:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:13:45.939 11:44:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096 00:13:45.939 11:44:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:13:45.939 11:44:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:13:45.939 11:44:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:f820f793-c892-4aa4-a8a4-5ed3fda41d6c --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:13:45.939 11:44:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:45.939 11:44:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:45.939 11:44:15 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:45.939 11:44:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:13:45.939 11:44:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:f820f793-c892-4aa4-a8a4-5ed3fda41d6c -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:13:45.939 11:44:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:f820f793-c892-4aa4-a8a4-5ed3fda41d6c -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:13:46.507 00:13:46.507 11:44:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:13:46.507 11:44:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:13:46.507 11:44:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:13:46.766 11:44:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:13:46.766 11:44:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:13:46.767 11:44:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:46.767 11:44:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:46.767 11:44:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:46.767 11:44:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:13:46.767 { 00:13:46.767 "cntlid": 73, 00:13:46.767 "qid": 0, 00:13:46.767 "state": "enabled", 00:13:46.767 "thread": "nvmf_tgt_poll_group_000", 00:13:46.767 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:f820f793-c892-4aa4-a8a4-5ed3fda41d6c", 00:13:46.767 "listen_address": { 00:13:46.767 "trtype": "TCP", 00:13:46.767 "adrfam": "IPv4", 00:13:46.767 "traddr": "10.0.0.3", 00:13:46.767 "trsvcid": "4420" 00:13:46.767 }, 00:13:46.767 "peer_address": { 00:13:46.767 "trtype": "TCP", 00:13:46.767 "adrfam": "IPv4", 00:13:46.767 "traddr": "10.0.0.1", 00:13:46.767 "trsvcid": "53790" 00:13:46.767 }, 00:13:46.767 "auth": { 00:13:46.767 "state": "completed", 00:13:46.767 "digest": "sha384", 00:13:46.767 "dhgroup": "ffdhe4096" 00:13:46.767 } 00:13:46.767 } 00:13:46.767 ]' 00:13:46.767 11:44:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:13:46.767 11:44:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:13:46.767 11:44:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:13:46.767 11:44:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:13:46.767 11:44:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:13:46.767 11:44:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- 
# [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:13:46.767 11:44:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:13:46.767 11:44:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:13:47.335 11:44:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:MGUzOTZhM2U3Y2FkNmNjYjU1YmU5MWQ0ZjQzZWRiM2M3MWY2YzcwY2M3MzMwNjdiPeXDPw==: --dhchap-ctrl-secret DHHC-1:03:NGM0MGM1ZGU2ODA1ZWRjYjdlYzA4YmRiMTk1MTBmMWRmYmY3Y2VhMGYxMzJlOWZhMzM5ZDdmNTM1NzkzYTA3Ze5j/1s=: 00:13:47.335 11:44:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:f820f793-c892-4aa4-a8a4-5ed3fda41d6c --hostid f820f793-c892-4aa4-a8a4-5ed3fda41d6c -l 0 --dhchap-secret DHHC-1:00:MGUzOTZhM2U3Y2FkNmNjYjU1YmU5MWQ0ZjQzZWRiM2M3MWY2YzcwY2M3MzMwNjdiPeXDPw==: --dhchap-ctrl-secret DHHC-1:03:NGM0MGM1ZGU2ODA1ZWRjYjdlYzA4YmRiMTk1MTBmMWRmYmY3Y2VhMGYxMzJlOWZhMzM5ZDdmNTM1NzkzYTA3Ze5j/1s=: 00:13:47.902 11:44:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:13:47.902 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:13:47.902 11:44:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:f820f793-c892-4aa4-a8a4-5ed3fda41d6c 00:13:47.902 11:44:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:47.902 11:44:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:47.902 11:44:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:47.902 11:44:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:13:47.902 11:44:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:13:47.903 11:44:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:13:48.162 11:44:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe4096 1 00:13:48.162 11:44:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:13:48.162 11:44:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:13:48.162 11:44:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096 00:13:48.162 11:44:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:13:48.162 11:44:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:13:48.162 11:44:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:f820f793-c892-4aa4-a8a4-5ed3fda41d6c --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:13:48.162 11:44:18 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:48.162 11:44:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:48.162 11:44:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:48.162 11:44:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:13:48.162 11:44:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:f820f793-c892-4aa4-a8a4-5ed3fda41d6c -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:13:48.162 11:44:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:f820f793-c892-4aa4-a8a4-5ed3fda41d6c -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:13:48.730 00:13:48.730 11:44:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:13:48.730 11:44:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:13:48.730 11:44:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:13:48.989 11:44:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:13:48.989 11:44:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:13:48.989 11:44:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:48.989 11:44:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:48.989 11:44:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:48.989 11:44:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:13:48.989 { 00:13:48.989 "cntlid": 75, 00:13:48.989 "qid": 0, 00:13:48.989 "state": "enabled", 00:13:48.989 "thread": "nvmf_tgt_poll_group_000", 00:13:48.989 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:f820f793-c892-4aa4-a8a4-5ed3fda41d6c", 00:13:48.989 "listen_address": { 00:13:48.989 "trtype": "TCP", 00:13:48.989 "adrfam": "IPv4", 00:13:48.989 "traddr": "10.0.0.3", 00:13:48.989 "trsvcid": "4420" 00:13:48.989 }, 00:13:48.989 "peer_address": { 00:13:48.989 "trtype": "TCP", 00:13:48.989 "adrfam": "IPv4", 00:13:48.989 "traddr": "10.0.0.1", 00:13:48.989 "trsvcid": "53816" 00:13:48.989 }, 00:13:48.989 "auth": { 00:13:48.989 "state": "completed", 00:13:48.989 "digest": "sha384", 00:13:48.989 "dhgroup": "ffdhe4096" 00:13:48.989 } 00:13:48.989 } 00:13:48.989 ]' 00:13:48.989 11:44:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:13:48.989 11:44:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:13:48.989 11:44:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:13:48.989 11:44:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 
== \f\f\d\h\e\4\0\9\6 ]] 00:13:48.989 11:44:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:13:49.248 11:44:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:13:49.248 11:44:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:13:49.248 11:44:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:13:49.507 11:44:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:MGMwMWVjMzNiYTU1NmZmODEzYmM3YjMzYWExMWQzMGYhYOlP: --dhchap-ctrl-secret DHHC-1:02:MDAzYzhlYTVlNDRhZjJhOGY4YjFjMzdjMjM2ZTBjYjI4NzUyNWU1YzQ3ZmQ3MDM2ej39yA==: 00:13:49.507 11:44:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:f820f793-c892-4aa4-a8a4-5ed3fda41d6c --hostid f820f793-c892-4aa4-a8a4-5ed3fda41d6c -l 0 --dhchap-secret DHHC-1:01:MGMwMWVjMzNiYTU1NmZmODEzYmM3YjMzYWExMWQzMGYhYOlP: --dhchap-ctrl-secret DHHC-1:02:MDAzYzhlYTVlNDRhZjJhOGY4YjFjMzdjMjM2ZTBjYjI4NzUyNWU1YzQ3ZmQ3MDM2ej39yA==: 00:13:50.077 11:44:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:13:50.077 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:13:50.077 11:44:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:f820f793-c892-4aa4-a8a4-5ed3fda41d6c 00:13:50.077 11:44:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:50.077 11:44:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:50.077 11:44:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:50.077 11:44:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:13:50.077 11:44:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:13:50.077 11:44:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:13:50.337 11:44:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe4096 2 00:13:50.337 11:44:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:13:50.337 11:44:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:13:50.337 11:44:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096 00:13:50.337 11:44:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:13:50.337 11:44:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:13:50.337 11:44:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 
nqn.2014-08.org.nvmexpress:uuid:f820f793-c892-4aa4-a8a4-5ed3fda41d6c --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:13:50.337 11:44:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:50.337 11:44:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:50.337 11:44:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:50.337 11:44:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:13:50.337 11:44:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:f820f793-c892-4aa4-a8a4-5ed3fda41d6c -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:13:50.337 11:44:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:f820f793-c892-4aa4-a8a4-5ed3fda41d6c -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:13:50.905 00:13:50.905 11:44:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:13:50.905 11:44:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:13:50.905 11:44:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:13:51.165 11:44:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:13:51.165 11:44:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:13:51.165 11:44:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:51.165 11:44:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:51.165 11:44:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:51.165 11:44:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:13:51.165 { 00:13:51.165 "cntlid": 77, 00:13:51.165 "qid": 0, 00:13:51.165 "state": "enabled", 00:13:51.165 "thread": "nvmf_tgt_poll_group_000", 00:13:51.165 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:f820f793-c892-4aa4-a8a4-5ed3fda41d6c", 00:13:51.165 "listen_address": { 00:13:51.165 "trtype": "TCP", 00:13:51.165 "adrfam": "IPv4", 00:13:51.165 "traddr": "10.0.0.3", 00:13:51.165 "trsvcid": "4420" 00:13:51.165 }, 00:13:51.165 "peer_address": { 00:13:51.165 "trtype": "TCP", 00:13:51.165 "adrfam": "IPv4", 00:13:51.165 "traddr": "10.0.0.1", 00:13:51.165 "trsvcid": "53844" 00:13:51.165 }, 00:13:51.165 "auth": { 00:13:51.165 "state": "completed", 00:13:51.165 "digest": "sha384", 00:13:51.165 "dhgroup": "ffdhe4096" 00:13:51.165 } 00:13:51.165 } 00:13:51.165 ]' 00:13:51.165 11:44:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:13:51.165 11:44:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:13:51.165 11:44:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- 
# jq -r '.[0].auth.dhgroup' 00:13:51.165 11:44:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:13:51.165 11:44:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:13:51.425 11:44:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:13:51.425 11:44:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:13:51.425 11:44:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:13:51.687 11:44:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:ZDc0MDI5OTk2MzZlOGRhODljM2I0OGVhMDEwOTIwNjNjY2VkZmYxODI0YjViNDBi899zgg==: --dhchap-ctrl-secret DHHC-1:01:Y2Q0NDk1OGM5NThhODQ5OWRiN2MzYjhlYTczN2QxMGZ97e95: 00:13:51.687 11:44:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:f820f793-c892-4aa4-a8a4-5ed3fda41d6c --hostid f820f793-c892-4aa4-a8a4-5ed3fda41d6c -l 0 --dhchap-secret DHHC-1:02:ZDc0MDI5OTk2MzZlOGRhODljM2I0OGVhMDEwOTIwNjNjY2VkZmYxODI0YjViNDBi899zgg==: --dhchap-ctrl-secret DHHC-1:01:Y2Q0NDk1OGM5NThhODQ5OWRiN2MzYjhlYTczN2QxMGZ97e95: 00:13:52.255 11:44:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:13:52.255 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:13:52.256 11:44:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:f820f793-c892-4aa4-a8a4-5ed3fda41d6c 00:13:52.256 11:44:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:52.256 11:44:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:52.256 11:44:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:52.256 11:44:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:13:52.256 11:44:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:13:52.256 11:44:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:13:52.539 11:44:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe4096 3 00:13:52.539 11:44:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:13:52.539 11:44:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:13:52.539 11:44:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096 00:13:52.539 11:44:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:13:52.539 11:44:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:13:52.539 11:44:22 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:f820f793-c892-4aa4-a8a4-5ed3fda41d6c --dhchap-key key3 00:13:52.539 11:44:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:52.539 11:44:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:52.539 11:44:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:52.539 11:44:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:13:52.539 11:44:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:f820f793-c892-4aa4-a8a4-5ed3fda41d6c -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:13:52.539 11:44:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:f820f793-c892-4aa4-a8a4-5ed3fda41d6c -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:13:53.130 00:13:53.130 11:44:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:13:53.130 11:44:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:13:53.130 11:44:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:13:53.390 11:44:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:13:53.390 11:44:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:13:53.390 11:44:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:53.390 11:44:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:53.390 11:44:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:53.390 11:44:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:13:53.390 { 00:13:53.390 "cntlid": 79, 00:13:53.390 "qid": 0, 00:13:53.390 "state": "enabled", 00:13:53.390 "thread": "nvmf_tgt_poll_group_000", 00:13:53.390 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:f820f793-c892-4aa4-a8a4-5ed3fda41d6c", 00:13:53.390 "listen_address": { 00:13:53.390 "trtype": "TCP", 00:13:53.390 "adrfam": "IPv4", 00:13:53.390 "traddr": "10.0.0.3", 00:13:53.390 "trsvcid": "4420" 00:13:53.390 }, 00:13:53.390 "peer_address": { 00:13:53.390 "trtype": "TCP", 00:13:53.390 "adrfam": "IPv4", 00:13:53.390 "traddr": "10.0.0.1", 00:13:53.390 "trsvcid": "43550" 00:13:53.390 }, 00:13:53.390 "auth": { 00:13:53.390 "state": "completed", 00:13:53.390 "digest": "sha384", 00:13:53.390 "dhgroup": "ffdhe4096" 00:13:53.390 } 00:13:53.390 } 00:13:53.390 ]' 00:13:53.390 11:44:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:13:53.390 11:44:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:13:53.390 11:44:23 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:13:53.390 11:44:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:13:53.390 11:44:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:13:53.390 11:44:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:13:53.390 11:44:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:13:53.390 11:44:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:13:53.649 11:44:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:ODlhMjcwNTk5OTE2MzM1NzA0MzY3MGI2Mzc1MzI1N2EyODZlZDZjYTJmODllODRhNTE3MzIzYTk3MDU4ODhhNgTRqT0=: 00:13:53.649 11:44:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:f820f793-c892-4aa4-a8a4-5ed3fda41d6c --hostid f820f793-c892-4aa4-a8a4-5ed3fda41d6c -l 0 --dhchap-secret DHHC-1:03:ODlhMjcwNTk5OTE2MzM1NzA0MzY3MGI2Mzc1MzI1N2EyODZlZDZjYTJmODllODRhNTE3MzIzYTk3MDU4ODhhNgTRqT0=: 00:13:54.587 11:44:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:13:54.587 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:13:54.587 11:44:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:f820f793-c892-4aa4-a8a4-5ed3fda41d6c 00:13:54.587 11:44:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:54.587 11:44:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:54.587 11:44:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:54.587 11:44:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:13:54.587 11:44:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:13:54.587 11:44:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:13:54.587 11:44:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:13:54.587 11:44:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe6144 0 00:13:54.587 11:44:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:13:54.587 11:44:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:13:54.587 11:44:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144 00:13:54.587 11:44:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:13:54.587 11:44:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # 
ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:13:54.587 11:44:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:f820f793-c892-4aa4-a8a4-5ed3fda41d6c --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:13:54.587 11:44:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:54.587 11:44:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:54.587 11:44:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:54.587 11:44:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:13:54.587 11:44:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:f820f793-c892-4aa4-a8a4-5ed3fda41d6c -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:13:54.587 11:44:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:f820f793-c892-4aa4-a8a4-5ed3fda41d6c -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:13:55.176 00:13:55.176 11:44:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:13:55.176 11:44:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:13:55.176 11:44:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:13:55.443 11:44:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:13:55.443 11:44:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:13:55.443 11:44:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:55.443 11:44:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:55.443 11:44:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:55.443 11:44:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:13:55.443 { 00:13:55.443 "cntlid": 81, 00:13:55.443 "qid": 0, 00:13:55.443 "state": "enabled", 00:13:55.443 "thread": "nvmf_tgt_poll_group_000", 00:13:55.443 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:f820f793-c892-4aa4-a8a4-5ed3fda41d6c", 00:13:55.443 "listen_address": { 00:13:55.443 "trtype": "TCP", 00:13:55.443 "adrfam": "IPv4", 00:13:55.443 "traddr": "10.0.0.3", 00:13:55.443 "trsvcid": "4420" 00:13:55.443 }, 00:13:55.443 "peer_address": { 00:13:55.443 "trtype": "TCP", 00:13:55.443 "adrfam": "IPv4", 00:13:55.443 "traddr": "10.0.0.1", 00:13:55.443 "trsvcid": "43572" 00:13:55.443 }, 00:13:55.443 "auth": { 00:13:55.443 "state": "completed", 00:13:55.443 "digest": "sha384", 00:13:55.443 "dhgroup": "ffdhe6144" 00:13:55.443 } 00:13:55.443 } 00:13:55.443 ]' 00:13:55.443 11:44:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 
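The trace above and below repeats the same per-key verification cycle for every DH-CHAP key and dhgroup combination. A condensed sketch of that cycle, reconstructed only from the RPC calls visible in this log (the target-side RPC socket is assumed to be rpc.py's default, and the NQNs, address and key index are the illustrative values shown in the trace), looks roughly like this:

# Sketch of one DH-CHAP cycle from target/auth.sh as traced above; not the
# actual test script. Values below are taken from the log for illustration.
hostrpc() { /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock "$@"; }  # host-side bdev_nvme (from the log)
tgtrpc()  { /home/vagrant/spdk_repo/spdk/scripts/rpc.py "$@"; }                        # target side; default socket assumed

keyid=0
subnqn=nqn.2024-03.io.spdk:cnode0
hostnqn=nqn.2014-08.org.nvmexpress:uuid:f820f793-c892-4aa4-a8a4-5ed3fda41d6c

# Restrict the host-side initiator to one digest/dhgroup combination.
hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096

# Allow the host on the target with the key under test (and its controller key).
tgtrpc nvmf_subsystem_add_host "$subnqn" "$hostnqn" \
    --dhchap-key "key$keyid" --dhchap-ctrlr-key "ckey$keyid"

# Attach a controller through the host RPC socket, authenticating with that key.
hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 \
    -q "$hostnqn" -n "$subnqn" -b nvme0 \
    --dhchap-key "key$keyid" --dhchap-ctrlr-key "ckey$keyid"

# Verify the controller exists and that the qpair negotiated the expected
# digest, dhgroup and a "completed" auth state.
[[ "$(hostrpc bdev_nvme_get_controllers | jq -r '.[].name')" == nvme0 ]]
qpairs=$(tgtrpc nvmf_subsystem_get_qpairs "$subnqn")
[[ "$(jq -r '.[0].auth.digest'  <<< "$qpairs")" == sha384 ]]
[[ "$(jq -r '.[0].auth.dhgroup' <<< "$qpairs")" == ffdhe4096 ]]
[[ "$(jq -r '.[0].auth.state'   <<< "$qpairs")" == completed ]]

# Tear down before the next key/dhgroup; the log also exercises the kernel
# initiator with "nvme connect ... --dhchap-secret DHHC-1:..." followed by
# "nvme disconnect" and "nvmf_subsystem_remove_host".
hostrpc bdev_nvme_detach_controller nvme0

The remainder of the trace simply re-runs this cycle for keys 1-3 and then for the ffdhe6144 and ffdhe8192 dhgroups.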
00:13:55.443 11:44:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:13:55.443 11:44:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:13:55.702 11:44:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:13:55.702 11:44:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:13:55.702 11:44:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:13:55.702 11:44:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:13:55.702 11:44:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:13:55.962 11:44:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:MGUzOTZhM2U3Y2FkNmNjYjU1YmU5MWQ0ZjQzZWRiM2M3MWY2YzcwY2M3MzMwNjdiPeXDPw==: --dhchap-ctrl-secret DHHC-1:03:NGM0MGM1ZGU2ODA1ZWRjYjdlYzA4YmRiMTk1MTBmMWRmYmY3Y2VhMGYxMzJlOWZhMzM5ZDdmNTM1NzkzYTA3Ze5j/1s=: 00:13:55.962 11:44:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:f820f793-c892-4aa4-a8a4-5ed3fda41d6c --hostid f820f793-c892-4aa4-a8a4-5ed3fda41d6c -l 0 --dhchap-secret DHHC-1:00:MGUzOTZhM2U3Y2FkNmNjYjU1YmU5MWQ0ZjQzZWRiM2M3MWY2YzcwY2M3MzMwNjdiPeXDPw==: --dhchap-ctrl-secret DHHC-1:03:NGM0MGM1ZGU2ODA1ZWRjYjdlYzA4YmRiMTk1MTBmMWRmYmY3Y2VhMGYxMzJlOWZhMzM5ZDdmNTM1NzkzYTA3Ze5j/1s=: 00:13:56.530 11:44:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:13:56.530 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:13:56.530 11:44:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:f820f793-c892-4aa4-a8a4-5ed3fda41d6c 00:13:56.530 11:44:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:56.530 11:44:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:56.788 11:44:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:56.788 11:44:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:13:56.788 11:44:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:13:56.788 11:44:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:13:57.047 11:44:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe6144 1 00:13:57.048 11:44:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:13:57.048 11:44:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:13:57.048 11:44:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # 
dhgroup=ffdhe6144 00:13:57.048 11:44:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:13:57.048 11:44:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:13:57.048 11:44:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:f820f793-c892-4aa4-a8a4-5ed3fda41d6c --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:13:57.048 11:44:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:57.048 11:44:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:57.048 11:44:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:57.048 11:44:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:13:57.048 11:44:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:f820f793-c892-4aa4-a8a4-5ed3fda41d6c -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:13:57.048 11:44:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:f820f793-c892-4aa4-a8a4-5ed3fda41d6c -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:13:57.307 00:13:57.566 11:44:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:13:57.566 11:44:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:13:57.566 11:44:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:13:57.825 11:44:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:13:57.825 11:44:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:13:57.825 11:44:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:57.825 11:44:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:57.825 11:44:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:57.825 11:44:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:13:57.825 { 00:13:57.825 "cntlid": 83, 00:13:57.825 "qid": 0, 00:13:57.825 "state": "enabled", 00:13:57.825 "thread": "nvmf_tgt_poll_group_000", 00:13:57.825 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:f820f793-c892-4aa4-a8a4-5ed3fda41d6c", 00:13:57.825 "listen_address": { 00:13:57.825 "trtype": "TCP", 00:13:57.825 "adrfam": "IPv4", 00:13:57.825 "traddr": "10.0.0.3", 00:13:57.825 "trsvcid": "4420" 00:13:57.825 }, 00:13:57.825 "peer_address": { 00:13:57.825 "trtype": "TCP", 00:13:57.825 "adrfam": "IPv4", 00:13:57.825 "traddr": "10.0.0.1", 00:13:57.825 "trsvcid": "43606" 00:13:57.825 }, 00:13:57.825 "auth": { 00:13:57.825 "state": "completed", 00:13:57.825 "digest": "sha384", 
00:13:57.825 "dhgroup": "ffdhe6144" 00:13:57.825 } 00:13:57.825 } 00:13:57.825 ]' 00:13:57.825 11:44:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:13:57.825 11:44:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:13:57.826 11:44:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:13:57.826 11:44:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:13:57.826 11:44:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:13:57.826 11:44:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:13:57.826 11:44:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:13:57.826 11:44:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:13:58.084 11:44:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:MGMwMWVjMzNiYTU1NmZmODEzYmM3YjMzYWExMWQzMGYhYOlP: --dhchap-ctrl-secret DHHC-1:02:MDAzYzhlYTVlNDRhZjJhOGY4YjFjMzdjMjM2ZTBjYjI4NzUyNWU1YzQ3ZmQ3MDM2ej39yA==: 00:13:58.084 11:44:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:f820f793-c892-4aa4-a8a4-5ed3fda41d6c --hostid f820f793-c892-4aa4-a8a4-5ed3fda41d6c -l 0 --dhchap-secret DHHC-1:01:MGMwMWVjMzNiYTU1NmZmODEzYmM3YjMzYWExMWQzMGYhYOlP: --dhchap-ctrl-secret DHHC-1:02:MDAzYzhlYTVlNDRhZjJhOGY4YjFjMzdjMjM2ZTBjYjI4NzUyNWU1YzQ3ZmQ3MDM2ej39yA==: 00:13:59.021 11:44:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:13:59.021 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:13:59.021 11:44:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:f820f793-c892-4aa4-a8a4-5ed3fda41d6c 00:13:59.021 11:44:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:59.021 11:44:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:59.021 11:44:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:59.021 11:44:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:13:59.021 11:44:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:13:59.021 11:44:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:13:59.280 11:44:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe6144 2 00:13:59.280 11:44:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:13:59.280 11:44:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # 
digest=sha384 00:13:59.280 11:44:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144 00:13:59.280 11:44:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:13:59.280 11:44:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:13:59.280 11:44:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:f820f793-c892-4aa4-a8a4-5ed3fda41d6c --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:13:59.280 11:44:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:59.280 11:44:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:59.280 11:44:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:59.280 11:44:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:13:59.280 11:44:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:f820f793-c892-4aa4-a8a4-5ed3fda41d6c -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:13:59.280 11:44:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:f820f793-c892-4aa4-a8a4-5ed3fda41d6c -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:13:59.539 00:13:59.539 11:44:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:13:59.539 11:44:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:13:59.539 11:44:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:14:00.106 11:44:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:14:00.106 11:44:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:14:00.106 11:44:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:00.106 11:44:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:00.106 11:44:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:00.106 11:44:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:14:00.106 { 00:14:00.106 "cntlid": 85, 00:14:00.106 "qid": 0, 00:14:00.106 "state": "enabled", 00:14:00.106 "thread": "nvmf_tgt_poll_group_000", 00:14:00.106 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:f820f793-c892-4aa4-a8a4-5ed3fda41d6c", 00:14:00.106 "listen_address": { 00:14:00.106 "trtype": "TCP", 00:14:00.106 "adrfam": "IPv4", 00:14:00.106 "traddr": "10.0.0.3", 00:14:00.106 "trsvcid": "4420" 00:14:00.106 }, 00:14:00.106 "peer_address": { 00:14:00.106 "trtype": "TCP", 00:14:00.106 "adrfam": "IPv4", 00:14:00.106 "traddr": "10.0.0.1", 00:14:00.106 "trsvcid": "43634" 
00:14:00.106 }, 00:14:00.106 "auth": { 00:14:00.106 "state": "completed", 00:14:00.106 "digest": "sha384", 00:14:00.106 "dhgroup": "ffdhe6144" 00:14:00.106 } 00:14:00.106 } 00:14:00.106 ]' 00:14:00.106 11:44:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:14:00.106 11:44:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:14:00.106 11:44:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:14:00.106 11:44:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:14:00.106 11:44:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:14:00.106 11:44:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:14:00.106 11:44:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:14:00.106 11:44:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:14:00.364 11:44:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:ZDc0MDI5OTk2MzZlOGRhODljM2I0OGVhMDEwOTIwNjNjY2VkZmYxODI0YjViNDBi899zgg==: --dhchap-ctrl-secret DHHC-1:01:Y2Q0NDk1OGM5NThhODQ5OWRiN2MzYjhlYTczN2QxMGZ97e95: 00:14:00.364 11:44:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:f820f793-c892-4aa4-a8a4-5ed3fda41d6c --hostid f820f793-c892-4aa4-a8a4-5ed3fda41d6c -l 0 --dhchap-secret DHHC-1:02:ZDc0MDI5OTk2MzZlOGRhODljM2I0OGVhMDEwOTIwNjNjY2VkZmYxODI0YjViNDBi899zgg==: --dhchap-ctrl-secret DHHC-1:01:Y2Q0NDk1OGM5NThhODQ5OWRiN2MzYjhlYTczN2QxMGZ97e95: 00:14:00.932 11:44:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:14:00.932 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:14:00.932 11:44:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:f820f793-c892-4aa4-a8a4-5ed3fda41d6c 00:14:00.932 11:44:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:00.932 11:44:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:00.932 11:44:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:00.932 11:44:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:14:00.932 11:44:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:14:00.932 11:44:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:14:01.191 11:44:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe6144 3 00:14:01.191 11:44:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key 
ckey qpairs 00:14:01.191 11:44:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:14:01.191 11:44:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144 00:14:01.191 11:44:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:14:01.191 11:44:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:14:01.192 11:44:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:f820f793-c892-4aa4-a8a4-5ed3fda41d6c --dhchap-key key3 00:14:01.192 11:44:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:01.192 11:44:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:01.192 11:44:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:01.192 11:44:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:14:01.192 11:44:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:f820f793-c892-4aa4-a8a4-5ed3fda41d6c -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:14:01.192 11:44:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:f820f793-c892-4aa4-a8a4-5ed3fda41d6c -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:14:01.758 00:14:01.758 11:44:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:14:01.758 11:44:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:14:01.758 11:44:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:14:02.016 11:44:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:14:02.016 11:44:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:14:02.016 11:44:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:02.016 11:44:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:02.016 11:44:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:02.016 11:44:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:14:02.016 { 00:14:02.016 "cntlid": 87, 00:14:02.016 "qid": 0, 00:14:02.016 "state": "enabled", 00:14:02.016 "thread": "nvmf_tgt_poll_group_000", 00:14:02.016 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:f820f793-c892-4aa4-a8a4-5ed3fda41d6c", 00:14:02.016 "listen_address": { 00:14:02.016 "trtype": "TCP", 00:14:02.016 "adrfam": "IPv4", 00:14:02.016 "traddr": "10.0.0.3", 00:14:02.016 "trsvcid": "4420" 00:14:02.016 }, 00:14:02.016 "peer_address": { 00:14:02.016 "trtype": "TCP", 00:14:02.016 "adrfam": "IPv4", 00:14:02.016 "traddr": "10.0.0.1", 00:14:02.016 "trsvcid": 
"33200" 00:14:02.016 }, 00:14:02.016 "auth": { 00:14:02.016 "state": "completed", 00:14:02.016 "digest": "sha384", 00:14:02.016 "dhgroup": "ffdhe6144" 00:14:02.016 } 00:14:02.016 } 00:14:02.016 ]' 00:14:02.016 11:44:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:14:02.016 11:44:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:14:02.016 11:44:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:14:02.274 11:44:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:14:02.274 11:44:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:14:02.274 11:44:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:14:02.274 11:44:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:14:02.274 11:44:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:14:02.533 11:44:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:ODlhMjcwNTk5OTE2MzM1NzA0MzY3MGI2Mzc1MzI1N2EyODZlZDZjYTJmODllODRhNTE3MzIzYTk3MDU4ODhhNgTRqT0=: 00:14:02.533 11:44:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:f820f793-c892-4aa4-a8a4-5ed3fda41d6c --hostid f820f793-c892-4aa4-a8a4-5ed3fda41d6c -l 0 --dhchap-secret DHHC-1:03:ODlhMjcwNTk5OTE2MzM1NzA0MzY3MGI2Mzc1MzI1N2EyODZlZDZjYTJmODllODRhNTE3MzIzYTk3MDU4ODhhNgTRqT0=: 00:14:03.102 11:44:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:14:03.102 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:14:03.102 11:44:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:f820f793-c892-4aa4-a8a4-5ed3fda41d6c 00:14:03.102 11:44:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:03.102 11:44:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:03.102 11:44:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:03.102 11:44:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:14:03.102 11:44:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:14:03.102 11:44:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:14:03.102 11:44:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:14:03.669 11:44:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe8192 0 00:14:03.669 11:44:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest 
dhgroup key ckey qpairs 00:14:03.669 11:44:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:14:03.669 11:44:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:14:03.669 11:44:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:14:03.669 11:44:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:14:03.669 11:44:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:f820f793-c892-4aa4-a8a4-5ed3fda41d6c --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:14:03.669 11:44:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:03.669 11:44:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:03.669 11:44:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:03.669 11:44:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:14:03.669 11:44:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:f820f793-c892-4aa4-a8a4-5ed3fda41d6c -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:14:03.669 11:44:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:f820f793-c892-4aa4-a8a4-5ed3fda41d6c -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:14:04.236 00:14:04.236 11:44:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:14:04.236 11:44:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:14:04.236 11:44:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:14:04.496 11:44:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:14:04.496 11:44:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:14:04.496 11:44:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:04.496 11:44:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:04.496 11:44:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:04.496 11:44:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:14:04.496 { 00:14:04.496 "cntlid": 89, 00:14:04.496 "qid": 0, 00:14:04.496 "state": "enabled", 00:14:04.496 "thread": "nvmf_tgt_poll_group_000", 00:14:04.496 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:f820f793-c892-4aa4-a8a4-5ed3fda41d6c", 00:14:04.496 "listen_address": { 00:14:04.496 "trtype": "TCP", 00:14:04.496 "adrfam": "IPv4", 00:14:04.496 "traddr": "10.0.0.3", 00:14:04.496 "trsvcid": "4420" 00:14:04.496 }, 00:14:04.496 "peer_address": { 00:14:04.496 
"trtype": "TCP", 00:14:04.496 "adrfam": "IPv4", 00:14:04.496 "traddr": "10.0.0.1", 00:14:04.496 "trsvcid": "33238" 00:14:04.496 }, 00:14:04.496 "auth": { 00:14:04.496 "state": "completed", 00:14:04.496 "digest": "sha384", 00:14:04.496 "dhgroup": "ffdhe8192" 00:14:04.496 } 00:14:04.496 } 00:14:04.496 ]' 00:14:04.496 11:44:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:14:04.496 11:44:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:14:04.496 11:44:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:14:04.496 11:44:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:14:04.496 11:44:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:14:04.496 11:44:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:14:04.496 11:44:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:14:04.496 11:44:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:14:04.755 11:44:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:MGUzOTZhM2U3Y2FkNmNjYjU1YmU5MWQ0ZjQzZWRiM2M3MWY2YzcwY2M3MzMwNjdiPeXDPw==: --dhchap-ctrl-secret DHHC-1:03:NGM0MGM1ZGU2ODA1ZWRjYjdlYzA4YmRiMTk1MTBmMWRmYmY3Y2VhMGYxMzJlOWZhMzM5ZDdmNTM1NzkzYTA3Ze5j/1s=: 00:14:04.755 11:44:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:f820f793-c892-4aa4-a8a4-5ed3fda41d6c --hostid f820f793-c892-4aa4-a8a4-5ed3fda41d6c -l 0 --dhchap-secret DHHC-1:00:MGUzOTZhM2U3Y2FkNmNjYjU1YmU5MWQ0ZjQzZWRiM2M3MWY2YzcwY2M3MzMwNjdiPeXDPw==: --dhchap-ctrl-secret DHHC-1:03:NGM0MGM1ZGU2ODA1ZWRjYjdlYzA4YmRiMTk1MTBmMWRmYmY3Y2VhMGYxMzJlOWZhMzM5ZDdmNTM1NzkzYTA3Ze5j/1s=: 00:14:05.691 11:44:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:14:05.691 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:14:05.691 11:44:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:f820f793-c892-4aa4-a8a4-5ed3fda41d6c 00:14:05.691 11:44:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:05.691 11:44:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:05.691 11:44:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:05.691 11:44:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:14:05.691 11:44:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:14:05.691 11:44:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:14:05.691 11:44:35 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe8192 1 00:14:05.691 11:44:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:14:05.691 11:44:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:14:05.691 11:44:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:14:05.691 11:44:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:14:05.691 11:44:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:14:05.691 11:44:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:f820f793-c892-4aa4-a8a4-5ed3fda41d6c --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:14:05.691 11:44:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:05.691 11:44:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:05.951 11:44:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:05.951 11:44:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:14:05.951 11:44:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:f820f793-c892-4aa4-a8a4-5ed3fda41d6c -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:14:05.951 11:44:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:f820f793-c892-4aa4-a8a4-5ed3fda41d6c -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:14:06.519 00:14:06.519 11:44:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:14:06.519 11:44:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:14:06.519 11:44:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:14:06.779 11:44:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:14:06.779 11:44:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:14:06.779 11:44:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:06.779 11:44:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:06.779 11:44:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:06.779 11:44:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:14:06.779 { 00:14:06.779 "cntlid": 91, 00:14:06.779 "qid": 0, 00:14:06.779 "state": "enabled", 00:14:06.779 "thread": "nvmf_tgt_poll_group_000", 00:14:06.779 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:f820f793-c892-4aa4-a8a4-5ed3fda41d6c", 
00:14:06.779 "listen_address": { 00:14:06.779 "trtype": "TCP", 00:14:06.779 "adrfam": "IPv4", 00:14:06.779 "traddr": "10.0.0.3", 00:14:06.779 "trsvcid": "4420" 00:14:06.779 }, 00:14:06.779 "peer_address": { 00:14:06.779 "trtype": "TCP", 00:14:06.779 "adrfam": "IPv4", 00:14:06.779 "traddr": "10.0.0.1", 00:14:06.779 "trsvcid": "33264" 00:14:06.779 }, 00:14:06.779 "auth": { 00:14:06.779 "state": "completed", 00:14:06.779 "digest": "sha384", 00:14:06.779 "dhgroup": "ffdhe8192" 00:14:06.779 } 00:14:06.779 } 00:14:06.779 ]' 00:14:06.779 11:44:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:14:06.779 11:44:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:14:06.779 11:44:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:14:06.779 11:44:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:14:06.779 11:44:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:14:06.779 11:44:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:14:06.779 11:44:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:14:06.779 11:44:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:14:07.347 11:44:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:MGMwMWVjMzNiYTU1NmZmODEzYmM3YjMzYWExMWQzMGYhYOlP: --dhchap-ctrl-secret DHHC-1:02:MDAzYzhlYTVlNDRhZjJhOGY4YjFjMzdjMjM2ZTBjYjI4NzUyNWU1YzQ3ZmQ3MDM2ej39yA==: 00:14:07.347 11:44:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:f820f793-c892-4aa4-a8a4-5ed3fda41d6c --hostid f820f793-c892-4aa4-a8a4-5ed3fda41d6c -l 0 --dhchap-secret DHHC-1:01:MGMwMWVjMzNiYTU1NmZmODEzYmM3YjMzYWExMWQzMGYhYOlP: --dhchap-ctrl-secret DHHC-1:02:MDAzYzhlYTVlNDRhZjJhOGY4YjFjMzdjMjM2ZTBjYjI4NzUyNWU1YzQ3ZmQ3MDM2ej39yA==: 00:14:07.959 11:44:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:14:07.959 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:14:07.959 11:44:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:f820f793-c892-4aa4-a8a4-5ed3fda41d6c 00:14:07.959 11:44:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:07.959 11:44:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:07.959 11:44:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:07.959 11:44:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:14:07.959 11:44:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:14:07.959 11:44:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s 
/var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:14:08.218 11:44:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe8192 2 00:14:08.218 11:44:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:14:08.218 11:44:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:14:08.218 11:44:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:14:08.218 11:44:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:14:08.218 11:44:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:14:08.218 11:44:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:f820f793-c892-4aa4-a8a4-5ed3fda41d6c --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:14:08.218 11:44:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:08.218 11:44:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:08.218 11:44:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:08.218 11:44:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:14:08.218 11:44:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:f820f793-c892-4aa4-a8a4-5ed3fda41d6c -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:14:08.218 11:44:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:f820f793-c892-4aa4-a8a4-5ed3fda41d6c -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:14:08.787 00:14:08.787 11:44:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:14:08.787 11:44:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:14:08.787 11:44:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:14:09.046 11:44:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:14:09.046 11:44:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:14:09.046 11:44:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:09.046 11:44:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:09.046 11:44:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:09.046 11:44:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:14:09.046 { 00:14:09.046 "cntlid": 93, 00:14:09.046 "qid": 0, 00:14:09.046 "state": "enabled", 00:14:09.046 "thread": 
"nvmf_tgt_poll_group_000", 00:14:09.046 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:f820f793-c892-4aa4-a8a4-5ed3fda41d6c", 00:14:09.046 "listen_address": { 00:14:09.046 "trtype": "TCP", 00:14:09.046 "adrfam": "IPv4", 00:14:09.046 "traddr": "10.0.0.3", 00:14:09.046 "trsvcid": "4420" 00:14:09.046 }, 00:14:09.046 "peer_address": { 00:14:09.046 "trtype": "TCP", 00:14:09.046 "adrfam": "IPv4", 00:14:09.046 "traddr": "10.0.0.1", 00:14:09.046 "trsvcid": "33308" 00:14:09.046 }, 00:14:09.046 "auth": { 00:14:09.046 "state": "completed", 00:14:09.046 "digest": "sha384", 00:14:09.046 "dhgroup": "ffdhe8192" 00:14:09.046 } 00:14:09.046 } 00:14:09.046 ]' 00:14:09.046 11:44:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:14:09.305 11:44:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:14:09.305 11:44:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:14:09.305 11:44:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:14:09.305 11:44:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:14:09.305 11:44:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:14:09.305 11:44:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:14:09.305 11:44:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:14:09.567 11:44:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:ZDc0MDI5OTk2MzZlOGRhODljM2I0OGVhMDEwOTIwNjNjY2VkZmYxODI0YjViNDBi899zgg==: --dhchap-ctrl-secret DHHC-1:01:Y2Q0NDk1OGM5NThhODQ5OWRiN2MzYjhlYTczN2QxMGZ97e95: 00:14:09.567 11:44:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:f820f793-c892-4aa4-a8a4-5ed3fda41d6c --hostid f820f793-c892-4aa4-a8a4-5ed3fda41d6c -l 0 --dhchap-secret DHHC-1:02:ZDc0MDI5OTk2MzZlOGRhODljM2I0OGVhMDEwOTIwNjNjY2VkZmYxODI0YjViNDBi899zgg==: --dhchap-ctrl-secret DHHC-1:01:Y2Q0NDk1OGM5NThhODQ5OWRiN2MzYjhlYTczN2QxMGZ97e95: 00:14:10.135 11:44:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:14:10.135 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:14:10.135 11:44:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:f820f793-c892-4aa4-a8a4-5ed3fda41d6c 00:14:10.135 11:44:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:10.135 11:44:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:10.135 11:44:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:10.135 11:44:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:14:10.135 11:44:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:14:10.135 11:44:40 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:14:10.702 11:44:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe8192 3 00:14:10.702 11:44:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:14:10.702 11:44:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:14:10.702 11:44:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:14:10.702 11:44:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:14:10.702 11:44:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:14:10.702 11:44:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:f820f793-c892-4aa4-a8a4-5ed3fda41d6c --dhchap-key key3 00:14:10.702 11:44:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:10.702 11:44:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:10.702 11:44:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:10.702 11:44:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:14:10.702 11:44:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:f820f793-c892-4aa4-a8a4-5ed3fda41d6c -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:14:10.702 11:44:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:f820f793-c892-4aa4-a8a4-5ed3fda41d6c -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:14:11.268 00:14:11.268 11:44:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:14:11.268 11:44:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:14:11.268 11:44:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:14:11.526 11:44:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:14:11.526 11:44:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:14:11.526 11:44:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:11.526 11:44:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:11.526 11:44:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:11.526 11:44:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:14:11.526 { 00:14:11.526 "cntlid": 95, 00:14:11.526 "qid": 0, 00:14:11.526 "state": "enabled", 00:14:11.526 
"thread": "nvmf_tgt_poll_group_000", 00:14:11.526 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:f820f793-c892-4aa4-a8a4-5ed3fda41d6c", 00:14:11.526 "listen_address": { 00:14:11.526 "trtype": "TCP", 00:14:11.526 "adrfam": "IPv4", 00:14:11.526 "traddr": "10.0.0.3", 00:14:11.526 "trsvcid": "4420" 00:14:11.526 }, 00:14:11.526 "peer_address": { 00:14:11.526 "trtype": "TCP", 00:14:11.526 "adrfam": "IPv4", 00:14:11.527 "traddr": "10.0.0.1", 00:14:11.527 "trsvcid": "33342" 00:14:11.527 }, 00:14:11.527 "auth": { 00:14:11.527 "state": "completed", 00:14:11.527 "digest": "sha384", 00:14:11.527 "dhgroup": "ffdhe8192" 00:14:11.527 } 00:14:11.527 } 00:14:11.527 ]' 00:14:11.527 11:44:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:14:11.527 11:44:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:14:11.527 11:44:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:14:11.527 11:44:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:14:11.527 11:44:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:14:11.785 11:44:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:14:11.785 11:44:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:14:11.785 11:44:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:14:12.043 11:44:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:ODlhMjcwNTk5OTE2MzM1NzA0MzY3MGI2Mzc1MzI1N2EyODZlZDZjYTJmODllODRhNTE3MzIzYTk3MDU4ODhhNgTRqT0=: 00:14:12.043 11:44:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:f820f793-c892-4aa4-a8a4-5ed3fda41d6c --hostid f820f793-c892-4aa4-a8a4-5ed3fda41d6c -l 0 --dhchap-secret DHHC-1:03:ODlhMjcwNTk5OTE2MzM1NzA0MzY3MGI2Mzc1MzI1N2EyODZlZDZjYTJmODllODRhNTE3MzIzYTk3MDU4ODhhNgTRqT0=: 00:14:12.610 11:44:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:14:12.610 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:14:12.610 11:44:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:f820f793-c892-4aa4-a8a4-5ed3fda41d6c 00:14:12.610 11:44:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:12.610 11:44:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:12.610 11:44:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:12.610 11:44:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@118 -- # for digest in "${digests[@]}" 00:14:12.610 11:44:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:14:12.610 11:44:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:14:12.610 11:44:42 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:14:12.610 11:44:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:14:12.869 11:44:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 null 0 00:14:12.869 11:44:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:14:12.869 11:44:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:14:12.869 11:44:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:14:12.869 11:44:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:14:12.869 11:44:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:14:12.869 11:44:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:f820f793-c892-4aa4-a8a4-5ed3fda41d6c --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:14:12.869 11:44:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:12.869 11:44:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:12.869 11:44:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:12.869 11:44:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:14:12.869 11:44:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:f820f793-c892-4aa4-a8a4-5ed3fda41d6c -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:14:12.869 11:44:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:f820f793-c892-4aa4-a8a4-5ed3fda41d6c -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:14:13.127 00:14:13.127 11:44:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:14:13.127 11:44:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:14:13.127 11:44:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:14:13.385 11:44:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:14:13.385 11:44:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:14:13.385 11:44:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:13.385 11:44:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:13.643 11:44:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:13.643 11:44:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:14:13.643 { 00:14:13.643 "cntlid": 97, 00:14:13.643 "qid": 0, 00:14:13.643 "state": "enabled", 00:14:13.643 "thread": "nvmf_tgt_poll_group_000", 00:14:13.643 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:f820f793-c892-4aa4-a8a4-5ed3fda41d6c", 00:14:13.643 "listen_address": { 00:14:13.643 "trtype": "TCP", 00:14:13.643 "adrfam": "IPv4", 00:14:13.643 "traddr": "10.0.0.3", 00:14:13.644 "trsvcid": "4420" 00:14:13.644 }, 00:14:13.644 "peer_address": { 00:14:13.644 "trtype": "TCP", 00:14:13.644 "adrfam": "IPv4", 00:14:13.644 "traddr": "10.0.0.1", 00:14:13.644 "trsvcid": "36104" 00:14:13.644 }, 00:14:13.644 "auth": { 00:14:13.644 "state": "completed", 00:14:13.644 "digest": "sha512", 00:14:13.644 "dhgroup": "null" 00:14:13.644 } 00:14:13.644 } 00:14:13.644 ]' 00:14:13.644 11:44:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:14:13.644 11:44:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:14:13.644 11:44:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:14:13.644 11:44:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:14:13.644 11:44:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:14:13.644 11:44:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:14:13.644 11:44:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:14:13.644 11:44:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:14:13.903 11:44:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:MGUzOTZhM2U3Y2FkNmNjYjU1YmU5MWQ0ZjQzZWRiM2M3MWY2YzcwY2M3MzMwNjdiPeXDPw==: --dhchap-ctrl-secret DHHC-1:03:NGM0MGM1ZGU2ODA1ZWRjYjdlYzA4YmRiMTk1MTBmMWRmYmY3Y2VhMGYxMzJlOWZhMzM5ZDdmNTM1NzkzYTA3Ze5j/1s=: 00:14:13.903 11:44:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:f820f793-c892-4aa4-a8a4-5ed3fda41d6c --hostid f820f793-c892-4aa4-a8a4-5ed3fda41d6c -l 0 --dhchap-secret DHHC-1:00:MGUzOTZhM2U3Y2FkNmNjYjU1YmU5MWQ0ZjQzZWRiM2M3MWY2YzcwY2M3MzMwNjdiPeXDPw==: --dhchap-ctrl-secret DHHC-1:03:NGM0MGM1ZGU2ODA1ZWRjYjdlYzA4YmRiMTk1MTBmMWRmYmY3Y2VhMGYxMzJlOWZhMzM5ZDdmNTM1NzkzYTA3Ze5j/1s=: 00:14:14.842 11:44:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:14:14.842 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:14:14.842 11:44:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:f820f793-c892-4aa4-a8a4-5ed3fda41d6c 00:14:14.842 11:44:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:14.842 11:44:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:14.842 11:44:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 
-- # [[ 0 == 0 ]] 00:14:14.842 11:44:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:14:14.842 11:44:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:14:14.842 11:44:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:14:15.101 11:44:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 null 1 00:14:15.102 11:44:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:14:15.102 11:44:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:14:15.102 11:44:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:14:15.102 11:44:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:14:15.102 11:44:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:14:15.102 11:44:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:f820f793-c892-4aa4-a8a4-5ed3fda41d6c --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:14:15.102 11:44:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:15.102 11:44:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:15.102 11:44:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:15.102 11:44:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:14:15.102 11:44:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:f820f793-c892-4aa4-a8a4-5ed3fda41d6c -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:14:15.102 11:44:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:f820f793-c892-4aa4-a8a4-5ed3fda41d6c -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:14:15.361 00:14:15.361 11:44:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:14:15.361 11:44:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:14:15.361 11:44:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:14:15.620 11:44:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:14:15.620 11:44:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:14:15.620 11:44:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:15.620 11:44:45 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:15.620 11:44:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:15.620 11:44:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:14:15.620 { 00:14:15.620 "cntlid": 99, 00:14:15.620 "qid": 0, 00:14:15.620 "state": "enabled", 00:14:15.620 "thread": "nvmf_tgt_poll_group_000", 00:14:15.620 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:f820f793-c892-4aa4-a8a4-5ed3fda41d6c", 00:14:15.620 "listen_address": { 00:14:15.620 "trtype": "TCP", 00:14:15.620 "adrfam": "IPv4", 00:14:15.620 "traddr": "10.0.0.3", 00:14:15.620 "trsvcid": "4420" 00:14:15.620 }, 00:14:15.620 "peer_address": { 00:14:15.620 "trtype": "TCP", 00:14:15.620 "adrfam": "IPv4", 00:14:15.620 "traddr": "10.0.0.1", 00:14:15.620 "trsvcid": "36128" 00:14:15.620 }, 00:14:15.620 "auth": { 00:14:15.620 "state": "completed", 00:14:15.620 "digest": "sha512", 00:14:15.620 "dhgroup": "null" 00:14:15.620 } 00:14:15.620 } 00:14:15.620 ]' 00:14:15.620 11:44:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:14:15.620 11:44:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:14:15.620 11:44:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:14:15.620 11:44:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:14:15.620 11:44:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:14:15.897 11:44:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:14:15.897 11:44:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:14:15.897 11:44:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:14:16.185 11:44:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:MGMwMWVjMzNiYTU1NmZmODEzYmM3YjMzYWExMWQzMGYhYOlP: --dhchap-ctrl-secret DHHC-1:02:MDAzYzhlYTVlNDRhZjJhOGY4YjFjMzdjMjM2ZTBjYjI4NzUyNWU1YzQ3ZmQ3MDM2ej39yA==: 00:14:16.186 11:44:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:f820f793-c892-4aa4-a8a4-5ed3fda41d6c --hostid f820f793-c892-4aa4-a8a4-5ed3fda41d6c -l 0 --dhchap-secret DHHC-1:01:MGMwMWVjMzNiYTU1NmZmODEzYmM3YjMzYWExMWQzMGYhYOlP: --dhchap-ctrl-secret DHHC-1:02:MDAzYzhlYTVlNDRhZjJhOGY4YjFjMzdjMjM2ZTBjYjI4NzUyNWU1YzQ3ZmQ3MDM2ej39yA==: 00:14:16.754 11:44:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:14:16.754 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:14:16.754 11:44:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:f820f793-c892-4aa4-a8a4-5ed3fda41d6c 00:14:16.754 11:44:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:16.754 11:44:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:16.754 11:44:46 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:16.754 11:44:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:14:16.754 11:44:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:14:16.754 11:44:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:14:17.014 11:44:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 null 2 00:14:17.014 11:44:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:14:17.014 11:44:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:14:17.014 11:44:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:14:17.014 11:44:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:14:17.014 11:44:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:14:17.014 11:44:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:f820f793-c892-4aa4-a8a4-5ed3fda41d6c --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:14:17.014 11:44:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:17.014 11:44:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:17.014 11:44:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:17.014 11:44:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:14:17.014 11:44:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:f820f793-c892-4aa4-a8a4-5ed3fda41d6c -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:14:17.014 11:44:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:f820f793-c892-4aa4-a8a4-5ed3fda41d6c -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:14:17.273 00:14:17.273 11:44:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:14:17.273 11:44:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:14:17.273 11:44:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:14:17.533 11:44:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:14:17.533 11:44:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:14:17.533 11:44:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:14:17.533 11:44:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:17.533 11:44:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:17.533 11:44:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:14:17.533 { 00:14:17.533 "cntlid": 101, 00:14:17.533 "qid": 0, 00:14:17.533 "state": "enabled", 00:14:17.533 "thread": "nvmf_tgt_poll_group_000", 00:14:17.533 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:f820f793-c892-4aa4-a8a4-5ed3fda41d6c", 00:14:17.533 "listen_address": { 00:14:17.533 "trtype": "TCP", 00:14:17.533 "adrfam": "IPv4", 00:14:17.533 "traddr": "10.0.0.3", 00:14:17.533 "trsvcid": "4420" 00:14:17.533 }, 00:14:17.533 "peer_address": { 00:14:17.533 "trtype": "TCP", 00:14:17.533 "adrfam": "IPv4", 00:14:17.533 "traddr": "10.0.0.1", 00:14:17.533 "trsvcid": "36160" 00:14:17.533 }, 00:14:17.533 "auth": { 00:14:17.533 "state": "completed", 00:14:17.533 "digest": "sha512", 00:14:17.533 "dhgroup": "null" 00:14:17.533 } 00:14:17.533 } 00:14:17.533 ]' 00:14:17.533 11:44:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:14:17.792 11:44:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:14:17.792 11:44:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:14:17.792 11:44:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:14:17.792 11:44:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:14:17.792 11:44:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:14:17.792 11:44:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:14:17.792 11:44:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:14:18.051 11:44:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:ZDc0MDI5OTk2MzZlOGRhODljM2I0OGVhMDEwOTIwNjNjY2VkZmYxODI0YjViNDBi899zgg==: --dhchap-ctrl-secret DHHC-1:01:Y2Q0NDk1OGM5NThhODQ5OWRiN2MzYjhlYTczN2QxMGZ97e95: 00:14:18.051 11:44:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:f820f793-c892-4aa4-a8a4-5ed3fda41d6c --hostid f820f793-c892-4aa4-a8a4-5ed3fda41d6c -l 0 --dhchap-secret DHHC-1:02:ZDc0MDI5OTk2MzZlOGRhODljM2I0OGVhMDEwOTIwNjNjY2VkZmYxODI0YjViNDBi899zgg==: --dhchap-ctrl-secret DHHC-1:01:Y2Q0NDk1OGM5NThhODQ5OWRiN2MzYjhlYTczN2QxMGZ97e95: 00:14:18.619 11:44:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:14:18.878 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:14:18.878 11:44:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:f820f793-c892-4aa4-a8a4-5ed3fda41d6c 00:14:18.878 11:44:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:18.878 11:44:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target 
-- common/autotest_common.sh@10 -- # set +x 00:14:18.878 11:44:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:18.878 11:44:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:14:18.878 11:44:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:14:18.878 11:44:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:14:19.137 11:44:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 null 3 00:14:19.137 11:44:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:14:19.137 11:44:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:14:19.137 11:44:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:14:19.137 11:44:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:14:19.137 11:44:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:14:19.137 11:44:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:f820f793-c892-4aa4-a8a4-5ed3fda41d6c --dhchap-key key3 00:14:19.137 11:44:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:19.137 11:44:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:19.137 11:44:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:19.137 11:44:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:14:19.137 11:44:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:f820f793-c892-4aa4-a8a4-5ed3fda41d6c -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:14:19.137 11:44:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:f820f793-c892-4aa4-a8a4-5ed3fda41d6c -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:14:19.396 00:14:19.396 11:44:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:14:19.396 11:44:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:14:19.396 11:44:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:14:19.655 11:44:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:14:19.655 11:44:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:14:19.655 11:44:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # 
xtrace_disable 00:14:19.655 11:44:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:19.655 11:44:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:19.655 11:44:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:14:19.655 { 00:14:19.655 "cntlid": 103, 00:14:19.655 "qid": 0, 00:14:19.655 "state": "enabled", 00:14:19.655 "thread": "nvmf_tgt_poll_group_000", 00:14:19.655 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:f820f793-c892-4aa4-a8a4-5ed3fda41d6c", 00:14:19.655 "listen_address": { 00:14:19.655 "trtype": "TCP", 00:14:19.655 "adrfam": "IPv4", 00:14:19.655 "traddr": "10.0.0.3", 00:14:19.655 "trsvcid": "4420" 00:14:19.655 }, 00:14:19.655 "peer_address": { 00:14:19.655 "trtype": "TCP", 00:14:19.655 "adrfam": "IPv4", 00:14:19.655 "traddr": "10.0.0.1", 00:14:19.655 "trsvcid": "36178" 00:14:19.655 }, 00:14:19.655 "auth": { 00:14:19.655 "state": "completed", 00:14:19.655 "digest": "sha512", 00:14:19.655 "dhgroup": "null" 00:14:19.655 } 00:14:19.655 } 00:14:19.655 ]' 00:14:19.655 11:44:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:14:19.915 11:44:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:14:19.915 11:44:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:14:19.915 11:44:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:14:19.915 11:44:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:14:19.915 11:44:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:14:19.915 11:44:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:14:19.915 11:44:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:14:20.175 11:44:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:ODlhMjcwNTk5OTE2MzM1NzA0MzY3MGI2Mzc1MzI1N2EyODZlZDZjYTJmODllODRhNTE3MzIzYTk3MDU4ODhhNgTRqT0=: 00:14:20.175 11:44:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:f820f793-c892-4aa4-a8a4-5ed3fda41d6c --hostid f820f793-c892-4aa4-a8a4-5ed3fda41d6c -l 0 --dhchap-secret DHHC-1:03:ODlhMjcwNTk5OTE2MzM1NzA0MzY3MGI2Mzc1MzI1N2EyODZlZDZjYTJmODllODRhNTE3MzIzYTk3MDU4ODhhNgTRqT0=: 00:14:20.741 11:44:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:14:20.741 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:14:20.741 11:44:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:f820f793-c892-4aa4-a8a4-5ed3fda41d6c 00:14:20.741 11:44:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:20.741 11:44:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:20.741 11:44:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 
]] 00:14:20.741 11:44:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:14:20.741 11:44:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:14:20.741 11:44:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:14:20.741 11:44:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:14:21.309 11:44:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe2048 0 00:14:21.309 11:44:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:14:21.309 11:44:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:14:21.309 11:44:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:14:21.309 11:44:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:14:21.309 11:44:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:14:21.309 11:44:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:f820f793-c892-4aa4-a8a4-5ed3fda41d6c --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:14:21.309 11:44:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:21.309 11:44:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:21.309 11:44:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:21.309 11:44:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:14:21.309 11:44:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:f820f793-c892-4aa4-a8a4-5ed3fda41d6c -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:14:21.309 11:44:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:f820f793-c892-4aa4-a8a4-5ed3fda41d6c -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:14:21.568 00:14:21.568 11:44:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:14:21.568 11:44:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:14:21.568 11:44:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:14:21.827 11:44:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:14:21.827 11:44:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:14:21.827 
11:44:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:21.827 11:44:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:21.827 11:44:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:21.827 11:44:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:14:21.827 { 00:14:21.828 "cntlid": 105, 00:14:21.828 "qid": 0, 00:14:21.828 "state": "enabled", 00:14:21.828 "thread": "nvmf_tgt_poll_group_000", 00:14:21.828 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:f820f793-c892-4aa4-a8a4-5ed3fda41d6c", 00:14:21.828 "listen_address": { 00:14:21.828 "trtype": "TCP", 00:14:21.828 "adrfam": "IPv4", 00:14:21.828 "traddr": "10.0.0.3", 00:14:21.828 "trsvcid": "4420" 00:14:21.828 }, 00:14:21.828 "peer_address": { 00:14:21.828 "trtype": "TCP", 00:14:21.828 "adrfam": "IPv4", 00:14:21.828 "traddr": "10.0.0.1", 00:14:21.828 "trsvcid": "54834" 00:14:21.828 }, 00:14:21.828 "auth": { 00:14:21.828 "state": "completed", 00:14:21.828 "digest": "sha512", 00:14:21.828 "dhgroup": "ffdhe2048" 00:14:21.828 } 00:14:21.828 } 00:14:21.828 ]' 00:14:21.828 11:44:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:14:21.828 11:44:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:14:21.828 11:44:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:14:21.828 11:44:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:14:21.828 11:44:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:14:21.828 11:44:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:14:21.828 11:44:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:14:21.828 11:44:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:14:22.087 11:44:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:MGUzOTZhM2U3Y2FkNmNjYjU1YmU5MWQ0ZjQzZWRiM2M3MWY2YzcwY2M3MzMwNjdiPeXDPw==: --dhchap-ctrl-secret DHHC-1:03:NGM0MGM1ZGU2ODA1ZWRjYjdlYzA4YmRiMTk1MTBmMWRmYmY3Y2VhMGYxMzJlOWZhMzM5ZDdmNTM1NzkzYTA3Ze5j/1s=: 00:14:22.087 11:44:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:f820f793-c892-4aa4-a8a4-5ed3fda41d6c --hostid f820f793-c892-4aa4-a8a4-5ed3fda41d6c -l 0 --dhchap-secret DHHC-1:00:MGUzOTZhM2U3Y2FkNmNjYjU1YmU5MWQ0ZjQzZWRiM2M3MWY2YzcwY2M3MzMwNjdiPeXDPw==: --dhchap-ctrl-secret DHHC-1:03:NGM0MGM1ZGU2ODA1ZWRjYjdlYzA4YmRiMTk1MTBmMWRmYmY3Y2VhMGYxMzJlOWZhMzM5ZDdmNTM1NzkzYTA3Ze5j/1s=: 00:14:22.655 11:44:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:14:22.655 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:14:22.655 11:44:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:f820f793-c892-4aa4-a8a4-5ed3fda41d6c 00:14:22.655 11:44:52 
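Each iteration also exercises the kernel initiator with nvme-cli, passing the DH-HMAC-CHAP secrets directly on the command line before revoking the host again. A sketch of that step, with the DHHC-1 secrets replaced by placeholders (the real values appear verbatim in the trace above):

nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 \
    -q nqn.2014-08.org.nvmexpress:uuid:f820f793-c892-4aa4-a8a4-5ed3fda41d6c \
    --hostid f820f793-c892-4aa4-a8a4-5ed3fda41d6c -l 0 \
    --dhchap-secret "DHHC-1:00:<host key>" \
    --dhchap-ctrl-secret "DHHC-1:03:<controller key>"
# Once the connection is confirmed, tear it down and remove the host from the subsystem
nvme disconnect -n nqn.2024-03.io.spdk:cnode0
rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 \
    nqn.2014-08.org.nvmexpress:uuid:f820f793-c892-4aa4-a8a4-5ed3fda41d6c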
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:22.655 11:44:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:22.655 11:44:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:22.655 11:44:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:14:22.655 11:44:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:14:22.655 11:44:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:14:23.224 11:44:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe2048 1 00:14:23.224 11:44:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:14:23.224 11:44:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:14:23.224 11:44:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:14:23.224 11:44:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:14:23.224 11:44:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:14:23.224 11:44:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:f820f793-c892-4aa4-a8a4-5ed3fda41d6c --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:14:23.224 11:44:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:23.224 11:44:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:23.224 11:44:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:23.224 11:44:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:14:23.224 11:44:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:f820f793-c892-4aa4-a8a4-5ed3fda41d6c -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:14:23.224 11:44:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:f820f793-c892-4aa4-a8a4-5ed3fda41d6c -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:14:23.483 00:14:23.483 11:44:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:14:23.483 11:44:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:14:23.483 11:44:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:14:23.743 11:44:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # 
[[ nvme0 == \n\v\m\e\0 ]] 00:14:23.743 11:44:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:14:23.743 11:44:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:23.743 11:44:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:23.743 11:44:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:23.743 11:44:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:14:23.743 { 00:14:23.743 "cntlid": 107, 00:14:23.743 "qid": 0, 00:14:23.743 "state": "enabled", 00:14:23.743 "thread": "nvmf_tgt_poll_group_000", 00:14:23.743 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:f820f793-c892-4aa4-a8a4-5ed3fda41d6c", 00:14:23.743 "listen_address": { 00:14:23.743 "trtype": "TCP", 00:14:23.743 "adrfam": "IPv4", 00:14:23.743 "traddr": "10.0.0.3", 00:14:23.743 "trsvcid": "4420" 00:14:23.743 }, 00:14:23.743 "peer_address": { 00:14:23.743 "trtype": "TCP", 00:14:23.743 "adrfam": "IPv4", 00:14:23.743 "traddr": "10.0.0.1", 00:14:23.743 "trsvcid": "54858" 00:14:23.743 }, 00:14:23.743 "auth": { 00:14:23.743 "state": "completed", 00:14:23.743 "digest": "sha512", 00:14:23.743 "dhgroup": "ffdhe2048" 00:14:23.743 } 00:14:23.743 } 00:14:23.743 ]' 00:14:23.743 11:44:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:14:23.743 11:44:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:14:23.743 11:44:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:14:23.743 11:44:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:14:23.743 11:44:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:14:23.743 11:44:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:14:23.743 11:44:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:14:23.743 11:44:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:14:24.312 11:44:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:MGMwMWVjMzNiYTU1NmZmODEzYmM3YjMzYWExMWQzMGYhYOlP: --dhchap-ctrl-secret DHHC-1:02:MDAzYzhlYTVlNDRhZjJhOGY4YjFjMzdjMjM2ZTBjYjI4NzUyNWU1YzQ3ZmQ3MDM2ej39yA==: 00:14:24.312 11:44:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:f820f793-c892-4aa4-a8a4-5ed3fda41d6c --hostid f820f793-c892-4aa4-a8a4-5ed3fda41d6c -l 0 --dhchap-secret DHHC-1:01:MGMwMWVjMzNiYTU1NmZmODEzYmM3YjMzYWExMWQzMGYhYOlP: --dhchap-ctrl-secret DHHC-1:02:MDAzYzhlYTVlNDRhZjJhOGY4YjFjMzdjMjM2ZTBjYjI4NzUyNWU1YzQ3ZmQ3MDM2ej39yA==: 00:14:24.881 11:44:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:14:24.881 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:14:24.881 11:44:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host 
nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:f820f793-c892-4aa4-a8a4-5ed3fda41d6c 00:14:24.881 11:44:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:24.881 11:44:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:24.881 11:44:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:24.881 11:44:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:14:24.881 11:44:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:14:24.881 11:44:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:14:25.139 11:44:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe2048 2 00:14:25.139 11:44:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:14:25.139 11:44:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:14:25.139 11:44:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:14:25.139 11:44:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:14:25.139 11:44:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:14:25.139 11:44:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:f820f793-c892-4aa4-a8a4-5ed3fda41d6c --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:14:25.139 11:44:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:25.139 11:44:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:25.140 11:44:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:25.140 11:44:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:14:25.140 11:44:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:f820f793-c892-4aa4-a8a4-5ed3fda41d6c -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:14:25.140 11:44:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:f820f793-c892-4aa4-a8a4-5ed3fda41d6c -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:14:25.398 00:14:25.398 11:44:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:14:25.398 11:44:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:14:25.398 11:44:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock 
bdev_nvme_get_controllers 00:14:25.657 11:44:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:14:25.657 11:44:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:14:25.657 11:44:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:25.657 11:44:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:25.657 11:44:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:25.657 11:44:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:14:25.657 { 00:14:25.657 "cntlid": 109, 00:14:25.657 "qid": 0, 00:14:25.657 "state": "enabled", 00:14:25.657 "thread": "nvmf_tgt_poll_group_000", 00:14:25.657 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:f820f793-c892-4aa4-a8a4-5ed3fda41d6c", 00:14:25.657 "listen_address": { 00:14:25.657 "trtype": "TCP", 00:14:25.657 "adrfam": "IPv4", 00:14:25.657 "traddr": "10.0.0.3", 00:14:25.657 "trsvcid": "4420" 00:14:25.657 }, 00:14:25.657 "peer_address": { 00:14:25.657 "trtype": "TCP", 00:14:25.657 "adrfam": "IPv4", 00:14:25.657 "traddr": "10.0.0.1", 00:14:25.657 "trsvcid": "54884" 00:14:25.657 }, 00:14:25.657 "auth": { 00:14:25.657 "state": "completed", 00:14:25.657 "digest": "sha512", 00:14:25.657 "dhgroup": "ffdhe2048" 00:14:25.657 } 00:14:25.657 } 00:14:25.657 ]' 00:14:25.657 11:44:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:14:25.657 11:44:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:14:25.657 11:44:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:14:25.657 11:44:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:14:25.657 11:44:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:14:25.657 11:44:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:14:25.657 11:44:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:14:25.657 11:44:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:14:26.225 11:44:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:ZDc0MDI5OTk2MzZlOGRhODljM2I0OGVhMDEwOTIwNjNjY2VkZmYxODI0YjViNDBi899zgg==: --dhchap-ctrl-secret DHHC-1:01:Y2Q0NDk1OGM5NThhODQ5OWRiN2MzYjhlYTczN2QxMGZ97e95: 00:14:26.225 11:44:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:f820f793-c892-4aa4-a8a4-5ed3fda41d6c --hostid f820f793-c892-4aa4-a8a4-5ed3fda41d6c -l 0 --dhchap-secret DHHC-1:02:ZDc0MDI5OTk2MzZlOGRhODljM2I0OGVhMDEwOTIwNjNjY2VkZmYxODI0YjViNDBi899zgg==: --dhchap-ctrl-secret DHHC-1:01:Y2Q0NDk1OGM5NThhODQ5OWRiN2MzYjhlYTczN2QxMGZ97e95: 00:14:26.794 11:44:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:14:26.794 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:14:26.794 11:44:56 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:f820f793-c892-4aa4-a8a4-5ed3fda41d6c 00:14:26.794 11:44:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:26.794 11:44:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:26.794 11:44:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:26.794 11:44:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:14:26.794 11:44:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:14:26.794 11:44:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:14:27.054 11:44:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe2048 3 00:14:27.054 11:44:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:14:27.054 11:44:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:14:27.054 11:44:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:14:27.054 11:44:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:14:27.054 11:44:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:14:27.054 11:44:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:f820f793-c892-4aa4-a8a4-5ed3fda41d6c --dhchap-key key3 00:14:27.054 11:44:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:27.054 11:44:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:27.054 11:44:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:27.054 11:44:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:14:27.054 11:44:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:f820f793-c892-4aa4-a8a4-5ed3fda41d6c -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:14:27.054 11:44:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:f820f793-c892-4aa4-a8a4-5ed3fda41d6c -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:14:27.314 00:14:27.314 11:44:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:14:27.314 11:44:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:14:27.314 11:44:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s 
/var/tmp/host.sock bdev_nvme_get_controllers 00:14:27.573 11:44:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:14:27.573 11:44:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:14:27.573 11:44:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:27.573 11:44:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:27.573 11:44:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:27.573 11:44:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:14:27.573 { 00:14:27.573 "cntlid": 111, 00:14:27.573 "qid": 0, 00:14:27.573 "state": "enabled", 00:14:27.573 "thread": "nvmf_tgt_poll_group_000", 00:14:27.573 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:f820f793-c892-4aa4-a8a4-5ed3fda41d6c", 00:14:27.573 "listen_address": { 00:14:27.573 "trtype": "TCP", 00:14:27.573 "adrfam": "IPv4", 00:14:27.573 "traddr": "10.0.0.3", 00:14:27.573 "trsvcid": "4420" 00:14:27.573 }, 00:14:27.573 "peer_address": { 00:14:27.573 "trtype": "TCP", 00:14:27.573 "adrfam": "IPv4", 00:14:27.573 "traddr": "10.0.0.1", 00:14:27.573 "trsvcid": "54908" 00:14:27.573 }, 00:14:27.573 "auth": { 00:14:27.573 "state": "completed", 00:14:27.573 "digest": "sha512", 00:14:27.573 "dhgroup": "ffdhe2048" 00:14:27.573 } 00:14:27.573 } 00:14:27.573 ]' 00:14:27.573 11:44:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:14:27.833 11:44:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:14:27.833 11:44:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:14:27.833 11:44:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:14:27.833 11:44:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:14:27.833 11:44:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:14:27.833 11:44:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:14:27.833 11:44:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:14:28.092 11:44:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:ODlhMjcwNTk5OTE2MzM1NzA0MzY3MGI2Mzc1MzI1N2EyODZlZDZjYTJmODllODRhNTE3MzIzYTk3MDU4ODhhNgTRqT0=: 00:14:28.092 11:44:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:f820f793-c892-4aa4-a8a4-5ed3fda41d6c --hostid f820f793-c892-4aa4-a8a4-5ed3fda41d6c -l 0 --dhchap-secret DHHC-1:03:ODlhMjcwNTk5OTE2MzM1NzA0MzY3MGI2Mzc1MzI1N2EyODZlZDZjYTJmODllODRhNTE3MzIzYTk3MDU4ODhhNgTRqT0=: 00:14:28.660 11:44:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:14:28.660 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:14:28.660 11:44:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd 
nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:f820f793-c892-4aa4-a8a4-5ed3fda41d6c 00:14:28.660 11:44:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:28.660 11:44:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:28.660 11:44:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:28.661 11:44:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:14:28.661 11:44:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:14:28.661 11:44:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:14:28.661 11:44:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:14:29.229 11:44:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe3072 0 00:14:29.229 11:44:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:14:29.229 11:44:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:14:29.229 11:44:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:14:29.229 11:44:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:14:29.229 11:44:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:14:29.229 11:44:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:f820f793-c892-4aa4-a8a4-5ed3fda41d6c --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:14:29.229 11:44:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:29.229 11:44:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:29.229 11:44:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:29.229 11:44:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:14:29.229 11:44:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:f820f793-c892-4aa4-a8a4-5ed3fda41d6c -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:14:29.229 11:44:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:f820f793-c892-4aa4-a8a4-5ed3fda41d6c -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:14:29.487 00:14:29.487 11:44:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:14:29.487 11:44:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 
00:14:29.487 11:44:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:14:29.746 11:44:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:14:29.746 11:44:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:14:29.746 11:44:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:29.746 11:44:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:29.746 11:44:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:29.746 11:44:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:14:29.746 { 00:14:29.746 "cntlid": 113, 00:14:29.746 "qid": 0, 00:14:29.746 "state": "enabled", 00:14:29.746 "thread": "nvmf_tgt_poll_group_000", 00:14:29.746 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:f820f793-c892-4aa4-a8a4-5ed3fda41d6c", 00:14:29.746 "listen_address": { 00:14:29.746 "trtype": "TCP", 00:14:29.746 "adrfam": "IPv4", 00:14:29.746 "traddr": "10.0.0.3", 00:14:29.746 "trsvcid": "4420" 00:14:29.746 }, 00:14:29.746 "peer_address": { 00:14:29.746 "trtype": "TCP", 00:14:29.746 "adrfam": "IPv4", 00:14:29.746 "traddr": "10.0.0.1", 00:14:29.746 "trsvcid": "54930" 00:14:29.746 }, 00:14:29.746 "auth": { 00:14:29.746 "state": "completed", 00:14:29.746 "digest": "sha512", 00:14:29.746 "dhgroup": "ffdhe3072" 00:14:29.746 } 00:14:29.746 } 00:14:29.746 ]' 00:14:29.746 11:44:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:14:29.746 11:44:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:14:29.746 11:44:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:14:29.746 11:44:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:14:29.746 11:44:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:14:30.005 11:44:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:14:30.005 11:44:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:14:30.005 11:44:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:14:30.264 11:45:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:MGUzOTZhM2U3Y2FkNmNjYjU1YmU5MWQ0ZjQzZWRiM2M3MWY2YzcwY2M3MzMwNjdiPeXDPw==: --dhchap-ctrl-secret DHHC-1:03:NGM0MGM1ZGU2ODA1ZWRjYjdlYzA4YmRiMTk1MTBmMWRmYmY3Y2VhMGYxMzJlOWZhMzM5ZDdmNTM1NzkzYTA3Ze5j/1s=: 00:14:30.264 11:45:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:f820f793-c892-4aa4-a8a4-5ed3fda41d6c --hostid f820f793-c892-4aa4-a8a4-5ed3fda41d6c -l 0 --dhchap-secret DHHC-1:00:MGUzOTZhM2U3Y2FkNmNjYjU1YmU5MWQ0ZjQzZWRiM2M3MWY2YzcwY2M3MzMwNjdiPeXDPw==: --dhchap-ctrl-secret 
DHHC-1:03:NGM0MGM1ZGU2ODA1ZWRjYjdlYzA4YmRiMTk1MTBmMWRmYmY3Y2VhMGYxMzJlOWZhMzM5ZDdmNTM1NzkzYTA3Ze5j/1s=: 00:14:30.833 11:45:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:14:30.833 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:14:30.833 11:45:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:f820f793-c892-4aa4-a8a4-5ed3fda41d6c 00:14:30.833 11:45:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:30.833 11:45:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:30.833 11:45:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:30.833 11:45:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:14:30.833 11:45:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:14:30.833 11:45:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:14:31.092 11:45:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe3072 1 00:14:31.092 11:45:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:14:31.092 11:45:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:14:31.092 11:45:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:14:31.092 11:45:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:14:31.092 11:45:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:14:31.092 11:45:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:f820f793-c892-4aa4-a8a4-5ed3fda41d6c --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:14:31.092 11:45:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:31.092 11:45:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:31.092 11:45:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:31.092 11:45:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:14:31.092 11:45:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:f820f793-c892-4aa4-a8a4-5ed3fda41d6c -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:14:31.092 11:45:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:f820f793-c892-4aa4-a8a4-5ed3fda41d6c -n nqn.2024-03.io.spdk:cnode0 -b nvme0 
--dhchap-key key1 --dhchap-ctrlr-key ckey1 00:14:31.352 00:14:31.611 11:45:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:14:31.611 11:45:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:14:31.611 11:45:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:14:31.870 11:45:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:14:31.870 11:45:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:14:31.870 11:45:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:31.871 11:45:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:31.871 11:45:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:31.871 11:45:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:14:31.871 { 00:14:31.871 "cntlid": 115, 00:14:31.871 "qid": 0, 00:14:31.871 "state": "enabled", 00:14:31.871 "thread": "nvmf_tgt_poll_group_000", 00:14:31.871 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:f820f793-c892-4aa4-a8a4-5ed3fda41d6c", 00:14:31.871 "listen_address": { 00:14:31.871 "trtype": "TCP", 00:14:31.871 "adrfam": "IPv4", 00:14:31.871 "traddr": "10.0.0.3", 00:14:31.871 "trsvcid": "4420" 00:14:31.871 }, 00:14:31.871 "peer_address": { 00:14:31.871 "trtype": "TCP", 00:14:31.871 "adrfam": "IPv4", 00:14:31.871 "traddr": "10.0.0.1", 00:14:31.871 "trsvcid": "36328" 00:14:31.871 }, 00:14:31.871 "auth": { 00:14:31.871 "state": "completed", 00:14:31.871 "digest": "sha512", 00:14:31.871 "dhgroup": "ffdhe3072" 00:14:31.871 } 00:14:31.871 } 00:14:31.871 ]' 00:14:31.871 11:45:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:14:31.871 11:45:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:14:31.871 11:45:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:14:31.871 11:45:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:14:31.871 11:45:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:14:31.871 11:45:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:14:31.871 11:45:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:14:31.871 11:45:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:14:32.130 11:45:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:MGMwMWVjMzNiYTU1NmZmODEzYmM3YjMzYWExMWQzMGYhYOlP: --dhchap-ctrl-secret DHHC-1:02:MDAzYzhlYTVlNDRhZjJhOGY4YjFjMzdjMjM2ZTBjYjI4NzUyNWU1YzQ3ZmQ3MDM2ej39yA==: 00:14:32.130 11:45:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:f820f793-c892-4aa4-a8a4-5ed3fda41d6c --hostid 
f820f793-c892-4aa4-a8a4-5ed3fda41d6c -l 0 --dhchap-secret DHHC-1:01:MGMwMWVjMzNiYTU1NmZmODEzYmM3YjMzYWExMWQzMGYhYOlP: --dhchap-ctrl-secret DHHC-1:02:MDAzYzhlYTVlNDRhZjJhOGY4YjFjMzdjMjM2ZTBjYjI4NzUyNWU1YzQ3ZmQ3MDM2ej39yA==: 00:14:33.067 11:45:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:14:33.067 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:14:33.067 11:45:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:f820f793-c892-4aa4-a8a4-5ed3fda41d6c 00:14:33.067 11:45:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:33.067 11:45:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:33.067 11:45:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:33.067 11:45:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:14:33.067 11:45:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:14:33.067 11:45:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:14:33.067 11:45:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe3072 2 00:14:33.067 11:45:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:14:33.067 11:45:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:14:33.067 11:45:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:14:33.067 11:45:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:14:33.067 11:45:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:14:33.067 11:45:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:f820f793-c892-4aa4-a8a4-5ed3fda41d6c --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:14:33.067 11:45:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:33.067 11:45:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:33.067 11:45:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:33.067 11:45:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:14:33.067 11:45:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:f820f793-c892-4aa4-a8a4-5ed3fda41d6c -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:14:33.067 11:45:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 
-q nqn.2014-08.org.nvmexpress:uuid:f820f793-c892-4aa4-a8a4-5ed3fda41d6c -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:14:33.635 00:14:33.635 11:45:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:14:33.635 11:45:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:14:33.635 11:45:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:14:33.894 11:45:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:14:33.894 11:45:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:14:33.894 11:45:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:33.894 11:45:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:33.894 11:45:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:33.894 11:45:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:14:33.894 { 00:14:33.894 "cntlid": 117, 00:14:33.894 "qid": 0, 00:14:33.894 "state": "enabled", 00:14:33.894 "thread": "nvmf_tgt_poll_group_000", 00:14:33.894 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:f820f793-c892-4aa4-a8a4-5ed3fda41d6c", 00:14:33.894 "listen_address": { 00:14:33.894 "trtype": "TCP", 00:14:33.894 "adrfam": "IPv4", 00:14:33.894 "traddr": "10.0.0.3", 00:14:33.894 "trsvcid": "4420" 00:14:33.894 }, 00:14:33.894 "peer_address": { 00:14:33.894 "trtype": "TCP", 00:14:33.894 "adrfam": "IPv4", 00:14:33.894 "traddr": "10.0.0.1", 00:14:33.894 "trsvcid": "36354" 00:14:33.894 }, 00:14:33.894 "auth": { 00:14:33.894 "state": "completed", 00:14:33.894 "digest": "sha512", 00:14:33.894 "dhgroup": "ffdhe3072" 00:14:33.894 } 00:14:33.894 } 00:14:33.894 ]' 00:14:33.894 11:45:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:14:33.894 11:45:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:14:33.894 11:45:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:14:34.153 11:45:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:14:34.153 11:45:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:14:34.153 11:45:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:14:34.153 11:45:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:14:34.153 11:45:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:14:34.412 11:45:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:ZDc0MDI5OTk2MzZlOGRhODljM2I0OGVhMDEwOTIwNjNjY2VkZmYxODI0YjViNDBi899zgg==: --dhchap-ctrl-secret DHHC-1:01:Y2Q0NDk1OGM5NThhODQ5OWRiN2MzYjhlYTczN2QxMGZ97e95: 00:14:34.412 11:45:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n 
nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:f820f793-c892-4aa4-a8a4-5ed3fda41d6c --hostid f820f793-c892-4aa4-a8a4-5ed3fda41d6c -l 0 --dhchap-secret DHHC-1:02:ZDc0MDI5OTk2MzZlOGRhODljM2I0OGVhMDEwOTIwNjNjY2VkZmYxODI0YjViNDBi899zgg==: --dhchap-ctrl-secret DHHC-1:01:Y2Q0NDk1OGM5NThhODQ5OWRiN2MzYjhlYTczN2QxMGZ97e95: 00:14:34.980 11:45:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:14:34.980 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:14:34.980 11:45:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:f820f793-c892-4aa4-a8a4-5ed3fda41d6c 00:14:34.980 11:45:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:34.980 11:45:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:34.980 11:45:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:34.980 11:45:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:14:34.980 11:45:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:14:34.980 11:45:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:14:35.545 11:45:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe3072 3 00:14:35.545 11:45:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:14:35.545 11:45:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:14:35.545 11:45:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:14:35.545 11:45:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:14:35.545 11:45:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:14:35.545 11:45:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:f820f793-c892-4aa4-a8a4-5ed3fda41d6c --dhchap-key key3 00:14:35.545 11:45:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:35.545 11:45:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:35.545 11:45:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:35.545 11:45:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:14:35.545 11:45:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:f820f793-c892-4aa4-a8a4-5ed3fda41d6c -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:14:35.545 11:45:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock 
bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:f820f793-c892-4aa4-a8a4-5ed3fda41d6c -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:14:35.804 00:14:35.804 11:45:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:14:35.804 11:45:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:14:35.804 11:45:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:14:36.064 11:45:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:14:36.064 11:45:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:14:36.064 11:45:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:36.064 11:45:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:36.064 11:45:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:36.064 11:45:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:14:36.064 { 00:14:36.064 "cntlid": 119, 00:14:36.064 "qid": 0, 00:14:36.064 "state": "enabled", 00:14:36.064 "thread": "nvmf_tgt_poll_group_000", 00:14:36.064 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:f820f793-c892-4aa4-a8a4-5ed3fda41d6c", 00:14:36.064 "listen_address": { 00:14:36.064 "trtype": "TCP", 00:14:36.064 "adrfam": "IPv4", 00:14:36.064 "traddr": "10.0.0.3", 00:14:36.064 "trsvcid": "4420" 00:14:36.064 }, 00:14:36.064 "peer_address": { 00:14:36.064 "trtype": "TCP", 00:14:36.064 "adrfam": "IPv4", 00:14:36.064 "traddr": "10.0.0.1", 00:14:36.064 "trsvcid": "36372" 00:14:36.064 }, 00:14:36.064 "auth": { 00:14:36.064 "state": "completed", 00:14:36.064 "digest": "sha512", 00:14:36.064 "dhgroup": "ffdhe3072" 00:14:36.064 } 00:14:36.064 } 00:14:36.064 ]' 00:14:36.064 11:45:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:14:36.064 11:45:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:14:36.064 11:45:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:14:36.064 11:45:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:14:36.064 11:45:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:14:36.323 11:45:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:14:36.323 11:45:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:14:36.323 11:45:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:14:36.582 11:45:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:ODlhMjcwNTk5OTE2MzM1NzA0MzY3MGI2Mzc1MzI1N2EyODZlZDZjYTJmODllODRhNTE3MzIzYTk3MDU4ODhhNgTRqT0=: 00:14:36.582 11:45:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 
-q nqn.2014-08.org.nvmexpress:uuid:f820f793-c892-4aa4-a8a4-5ed3fda41d6c --hostid f820f793-c892-4aa4-a8a4-5ed3fda41d6c -l 0 --dhchap-secret DHHC-1:03:ODlhMjcwNTk5OTE2MzM1NzA0MzY3MGI2Mzc1MzI1N2EyODZlZDZjYTJmODllODRhNTE3MzIzYTk3MDU4ODhhNgTRqT0=: 00:14:37.151 11:45:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:14:37.151 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:14:37.151 11:45:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:f820f793-c892-4aa4-a8a4-5ed3fda41d6c 00:14:37.151 11:45:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:37.151 11:45:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:37.151 11:45:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:37.151 11:45:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:14:37.151 11:45:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:14:37.151 11:45:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:14:37.151 11:45:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:14:37.409 11:45:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe4096 0 00:14:37.409 11:45:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:14:37.409 11:45:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:14:37.409 11:45:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096 00:14:37.409 11:45:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:14:37.409 11:45:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:14:37.409 11:45:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:f820f793-c892-4aa4-a8a4-5ed3fda41d6c --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:14:37.409 11:45:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:37.409 11:45:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:37.409 11:45:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:37.409 11:45:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:14:37.409 11:45:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:f820f793-c892-4aa4-a8a4-5ed3fda41d6c -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:14:37.409 11:45:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:f820f793-c892-4aa4-a8a4-5ed3fda41d6c -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:14:37.667 00:14:37.667 11:45:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:14:37.667 11:45:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:14:37.667 11:45:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:14:38.233 11:45:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:14:38.233 11:45:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:14:38.233 11:45:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:38.233 11:45:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:38.233 11:45:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:38.233 11:45:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:14:38.233 { 00:14:38.233 "cntlid": 121, 00:14:38.233 "qid": 0, 00:14:38.233 "state": "enabled", 00:14:38.233 "thread": "nvmf_tgt_poll_group_000", 00:14:38.233 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:f820f793-c892-4aa4-a8a4-5ed3fda41d6c", 00:14:38.233 "listen_address": { 00:14:38.233 "trtype": "TCP", 00:14:38.233 "adrfam": "IPv4", 00:14:38.233 "traddr": "10.0.0.3", 00:14:38.233 "trsvcid": "4420" 00:14:38.233 }, 00:14:38.233 "peer_address": { 00:14:38.233 "trtype": "TCP", 00:14:38.233 "adrfam": "IPv4", 00:14:38.233 "traddr": "10.0.0.1", 00:14:38.233 "trsvcid": "36406" 00:14:38.233 }, 00:14:38.233 "auth": { 00:14:38.233 "state": "completed", 00:14:38.233 "digest": "sha512", 00:14:38.233 "dhgroup": "ffdhe4096" 00:14:38.233 } 00:14:38.233 } 00:14:38.233 ]' 00:14:38.234 11:45:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:14:38.234 11:45:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:14:38.234 11:45:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:14:38.234 11:45:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:14:38.234 11:45:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:14:38.234 11:45:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:14:38.234 11:45:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:14:38.234 11:45:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:14:38.492 11:45:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:MGUzOTZhM2U3Y2FkNmNjYjU1YmU5MWQ0ZjQzZWRiM2M3MWY2YzcwY2M3MzMwNjdiPeXDPw==: --dhchap-ctrl-secret 
DHHC-1:03:NGM0MGM1ZGU2ODA1ZWRjYjdlYzA4YmRiMTk1MTBmMWRmYmY3Y2VhMGYxMzJlOWZhMzM5ZDdmNTM1NzkzYTA3Ze5j/1s=: 00:14:38.492 11:45:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:f820f793-c892-4aa4-a8a4-5ed3fda41d6c --hostid f820f793-c892-4aa4-a8a4-5ed3fda41d6c -l 0 --dhchap-secret DHHC-1:00:MGUzOTZhM2U3Y2FkNmNjYjU1YmU5MWQ0ZjQzZWRiM2M3MWY2YzcwY2M3MzMwNjdiPeXDPw==: --dhchap-ctrl-secret DHHC-1:03:NGM0MGM1ZGU2ODA1ZWRjYjdlYzA4YmRiMTk1MTBmMWRmYmY3Y2VhMGYxMzJlOWZhMzM5ZDdmNTM1NzkzYTA3Ze5j/1s=: 00:14:39.429 11:45:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:14:39.429 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:14:39.429 11:45:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:f820f793-c892-4aa4-a8a4-5ed3fda41d6c 00:14:39.429 11:45:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:39.429 11:45:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:39.429 11:45:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:39.429 11:45:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:14:39.429 11:45:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:14:39.429 11:45:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:14:39.429 11:45:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe4096 1 00:14:39.429 11:45:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:14:39.429 11:45:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:14:39.429 11:45:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096 00:14:39.429 11:45:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:14:39.429 11:45:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:14:39.429 11:45:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:f820f793-c892-4aa4-a8a4-5ed3fda41d6c --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:14:39.429 11:45:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:39.430 11:45:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:39.430 11:45:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:39.430 11:45:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:14:39.430 11:45:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 
10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:f820f793-c892-4aa4-a8a4-5ed3fda41d6c -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:14:39.430 11:45:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:f820f793-c892-4aa4-a8a4-5ed3fda41d6c -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:14:39.998 00:14:39.998 11:45:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:14:39.998 11:45:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:14:39.998 11:45:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:14:40.257 11:45:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:14:40.257 11:45:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:14:40.257 11:45:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:40.257 11:45:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:40.257 11:45:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:40.257 11:45:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:14:40.257 { 00:14:40.257 "cntlid": 123, 00:14:40.257 "qid": 0, 00:14:40.257 "state": "enabled", 00:14:40.257 "thread": "nvmf_tgt_poll_group_000", 00:14:40.257 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:f820f793-c892-4aa4-a8a4-5ed3fda41d6c", 00:14:40.257 "listen_address": { 00:14:40.257 "trtype": "TCP", 00:14:40.257 "adrfam": "IPv4", 00:14:40.257 "traddr": "10.0.0.3", 00:14:40.257 "trsvcid": "4420" 00:14:40.257 }, 00:14:40.257 "peer_address": { 00:14:40.257 "trtype": "TCP", 00:14:40.258 "adrfam": "IPv4", 00:14:40.258 "traddr": "10.0.0.1", 00:14:40.258 "trsvcid": "36432" 00:14:40.258 }, 00:14:40.258 "auth": { 00:14:40.258 "state": "completed", 00:14:40.258 "digest": "sha512", 00:14:40.258 "dhgroup": "ffdhe4096" 00:14:40.258 } 00:14:40.258 } 00:14:40.258 ]' 00:14:40.258 11:45:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:14:40.258 11:45:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:14:40.258 11:45:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:14:40.258 11:45:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:14:40.258 11:45:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:14:40.258 11:45:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:14:40.258 11:45:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:14:40.258 11:45:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:14:40.517 11:45:10 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:MGMwMWVjMzNiYTU1NmZmODEzYmM3YjMzYWExMWQzMGYhYOlP: --dhchap-ctrl-secret DHHC-1:02:MDAzYzhlYTVlNDRhZjJhOGY4YjFjMzdjMjM2ZTBjYjI4NzUyNWU1YzQ3ZmQ3MDM2ej39yA==: 00:14:40.517 11:45:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:f820f793-c892-4aa4-a8a4-5ed3fda41d6c --hostid f820f793-c892-4aa4-a8a4-5ed3fda41d6c -l 0 --dhchap-secret DHHC-1:01:MGMwMWVjMzNiYTU1NmZmODEzYmM3YjMzYWExMWQzMGYhYOlP: --dhchap-ctrl-secret DHHC-1:02:MDAzYzhlYTVlNDRhZjJhOGY4YjFjMzdjMjM2ZTBjYjI4NzUyNWU1YzQ3ZmQ3MDM2ej39yA==: 00:14:41.453 11:45:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:14:41.453 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:14:41.453 11:45:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:f820f793-c892-4aa4-a8a4-5ed3fda41d6c 00:14:41.453 11:45:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:41.453 11:45:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:41.453 11:45:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:41.453 11:45:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:14:41.453 11:45:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:14:41.453 11:45:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:14:41.453 11:45:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe4096 2 00:14:41.453 11:45:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:14:41.453 11:45:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:14:41.453 11:45:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096 00:14:41.453 11:45:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:14:41.453 11:45:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:14:41.453 11:45:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:f820f793-c892-4aa4-a8a4-5ed3fda41d6c --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:14:41.453 11:45:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:41.453 11:45:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:41.453 11:45:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:41.453 11:45:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:14:41.453 11:45:11 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:f820f793-c892-4aa4-a8a4-5ed3fda41d6c -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:14:41.454 11:45:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:f820f793-c892-4aa4-a8a4-5ed3fda41d6c -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:14:42.022 00:14:42.022 11:45:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:14:42.022 11:45:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:14:42.022 11:45:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:14:42.281 11:45:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:14:42.281 11:45:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:14:42.281 11:45:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:42.281 11:45:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:42.281 11:45:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:42.281 11:45:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:14:42.281 { 00:14:42.281 "cntlid": 125, 00:14:42.281 "qid": 0, 00:14:42.281 "state": "enabled", 00:14:42.281 "thread": "nvmf_tgt_poll_group_000", 00:14:42.281 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:f820f793-c892-4aa4-a8a4-5ed3fda41d6c", 00:14:42.281 "listen_address": { 00:14:42.281 "trtype": "TCP", 00:14:42.281 "adrfam": "IPv4", 00:14:42.281 "traddr": "10.0.0.3", 00:14:42.281 "trsvcid": "4420" 00:14:42.281 }, 00:14:42.281 "peer_address": { 00:14:42.281 "trtype": "TCP", 00:14:42.281 "adrfam": "IPv4", 00:14:42.281 "traddr": "10.0.0.1", 00:14:42.281 "trsvcid": "56862" 00:14:42.281 }, 00:14:42.281 "auth": { 00:14:42.281 "state": "completed", 00:14:42.281 "digest": "sha512", 00:14:42.281 "dhgroup": "ffdhe4096" 00:14:42.281 } 00:14:42.281 } 00:14:42.281 ]' 00:14:42.281 11:45:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:14:42.281 11:45:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:14:42.281 11:45:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:14:42.281 11:45:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:14:42.281 11:45:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:14:42.540 11:45:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:14:42.540 11:45:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:14:42.540 11:45:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:14:42.799 11:45:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:ZDc0MDI5OTk2MzZlOGRhODljM2I0OGVhMDEwOTIwNjNjY2VkZmYxODI0YjViNDBi899zgg==: --dhchap-ctrl-secret DHHC-1:01:Y2Q0NDk1OGM5NThhODQ5OWRiN2MzYjhlYTczN2QxMGZ97e95: 00:14:42.799 11:45:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:f820f793-c892-4aa4-a8a4-5ed3fda41d6c --hostid f820f793-c892-4aa4-a8a4-5ed3fda41d6c -l 0 --dhchap-secret DHHC-1:02:ZDc0MDI5OTk2MzZlOGRhODljM2I0OGVhMDEwOTIwNjNjY2VkZmYxODI0YjViNDBi899zgg==: --dhchap-ctrl-secret DHHC-1:01:Y2Q0NDk1OGM5NThhODQ5OWRiN2MzYjhlYTczN2QxMGZ97e95: 00:14:43.367 11:45:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:14:43.367 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:14:43.367 11:45:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:f820f793-c892-4aa4-a8a4-5ed3fda41d6c 00:14:43.367 11:45:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:43.367 11:45:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:43.368 11:45:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:43.368 11:45:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:14:43.368 11:45:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:14:43.368 11:45:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:14:43.627 11:45:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe4096 3 00:14:43.627 11:45:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:14:43.627 11:45:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:14:43.627 11:45:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096 00:14:43.627 11:45:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:14:43.627 11:45:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:14:43.627 11:45:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:f820f793-c892-4aa4-a8a4-5ed3fda41d6c --dhchap-key key3 00:14:43.627 11:45:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:43.627 11:45:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:43.627 11:45:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:43.627 11:45:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # 
bdev_connect -b nvme0 --dhchap-key key3 00:14:43.627 11:45:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:f820f793-c892-4aa4-a8a4-5ed3fda41d6c -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:14:43.627 11:45:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:f820f793-c892-4aa4-a8a4-5ed3fda41d6c -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:14:44.196 00:14:44.196 11:45:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:14:44.196 11:45:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:14:44.196 11:45:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:14:44.454 11:45:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:14:44.454 11:45:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:14:44.454 11:45:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:44.454 11:45:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:44.454 11:45:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:44.454 11:45:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:14:44.454 { 00:14:44.454 "cntlid": 127, 00:14:44.454 "qid": 0, 00:14:44.454 "state": "enabled", 00:14:44.454 "thread": "nvmf_tgt_poll_group_000", 00:14:44.454 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:f820f793-c892-4aa4-a8a4-5ed3fda41d6c", 00:14:44.454 "listen_address": { 00:14:44.454 "trtype": "TCP", 00:14:44.454 "adrfam": "IPv4", 00:14:44.454 "traddr": "10.0.0.3", 00:14:44.454 "trsvcid": "4420" 00:14:44.454 }, 00:14:44.454 "peer_address": { 00:14:44.454 "trtype": "TCP", 00:14:44.454 "adrfam": "IPv4", 00:14:44.454 "traddr": "10.0.0.1", 00:14:44.454 "trsvcid": "56888" 00:14:44.454 }, 00:14:44.454 "auth": { 00:14:44.454 "state": "completed", 00:14:44.454 "digest": "sha512", 00:14:44.454 "dhgroup": "ffdhe4096" 00:14:44.454 } 00:14:44.454 } 00:14:44.454 ]' 00:14:44.454 11:45:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:14:44.454 11:45:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:14:44.454 11:45:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:14:44.454 11:45:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:14:44.454 11:45:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:14:44.713 11:45:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:14:44.713 11:45:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:14:44.713 11:45:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 
-- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:14:44.971 11:45:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:ODlhMjcwNTk5OTE2MzM1NzA0MzY3MGI2Mzc1MzI1N2EyODZlZDZjYTJmODllODRhNTE3MzIzYTk3MDU4ODhhNgTRqT0=: 00:14:44.971 11:45:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:f820f793-c892-4aa4-a8a4-5ed3fda41d6c --hostid f820f793-c892-4aa4-a8a4-5ed3fda41d6c -l 0 --dhchap-secret DHHC-1:03:ODlhMjcwNTk5OTE2MzM1NzA0MzY3MGI2Mzc1MzI1N2EyODZlZDZjYTJmODllODRhNTE3MzIzYTk3MDU4ODhhNgTRqT0=: 00:14:45.537 11:45:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:14:45.537 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:14:45.537 11:45:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:f820f793-c892-4aa4-a8a4-5ed3fda41d6c 00:14:45.537 11:45:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:45.537 11:45:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:45.537 11:45:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:45.537 11:45:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:14:45.537 11:45:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:14:45.537 11:45:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:14:45.537 11:45:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:14:45.796 11:45:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe6144 0 00:14:45.796 11:45:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:14:45.796 11:45:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:14:45.796 11:45:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144 00:14:45.796 11:45:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:14:45.796 11:45:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:14:45.796 11:45:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:f820f793-c892-4aa4-a8a4-5ed3fda41d6c --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:14:45.796 11:45:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:45.796 11:45:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:45.796 11:45:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:45.796 11:45:15 
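At this point the run has moved on to the next DH group (ffdhe6144) and restarted the key loop at key0. The structure implied by the trace markers (auth.sh@119-@123) is an outer loop over DH groups and an inner loop over key indices, with bdev_nvme_set_options re-applied before every connect_authenticate call. A rough sketch of that driver loop, where hostrpc and connect_authenticate stand for the helpers defined in the traced auth.sh and the array contents are illustrative stand-ins for the keys generated earlier in the run:

    # Illustrative driver loop; sha512 is the digest being exercised in this part of the log.
    dhgroups=(ffdhe4096 ffdhe6144 ffdhe8192)
    keys=([0]=key0 [1]=key1 [2]=key2 [3]=key3)
    # connect_authenticate only adds --dhchap-ctrlr-key when ckeys[keyid] is set;
    # in this pass key3 is used without a controller key.
    ckeys=([0]=ckey0 [1]=ckey1 [2]=ckey2)

    for dhgroup in "${dhgroups[@]}"; do
        for keyid in "${!keys[@]}"; do
            hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups "$dhgroup"
            connect_authenticate sha512 "$dhgroup" "$keyid"
        done
    done
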
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:14:45.796 11:45:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:f820f793-c892-4aa4-a8a4-5ed3fda41d6c -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:14:45.796 11:45:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:f820f793-c892-4aa4-a8a4-5ed3fda41d6c -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:14:46.366 00:14:46.366 11:45:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:14:46.366 11:45:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:14:46.366 11:45:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:14:46.625 11:45:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:14:46.625 11:45:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:14:46.625 11:45:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:46.625 11:45:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:46.625 11:45:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:46.625 11:45:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:14:46.625 { 00:14:46.625 "cntlid": 129, 00:14:46.625 "qid": 0, 00:14:46.625 "state": "enabled", 00:14:46.625 "thread": "nvmf_tgt_poll_group_000", 00:14:46.625 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:f820f793-c892-4aa4-a8a4-5ed3fda41d6c", 00:14:46.625 "listen_address": { 00:14:46.625 "trtype": "TCP", 00:14:46.625 "adrfam": "IPv4", 00:14:46.625 "traddr": "10.0.0.3", 00:14:46.625 "trsvcid": "4420" 00:14:46.625 }, 00:14:46.625 "peer_address": { 00:14:46.625 "trtype": "TCP", 00:14:46.625 "adrfam": "IPv4", 00:14:46.625 "traddr": "10.0.0.1", 00:14:46.625 "trsvcid": "56910" 00:14:46.625 }, 00:14:46.625 "auth": { 00:14:46.625 "state": "completed", 00:14:46.625 "digest": "sha512", 00:14:46.625 "dhgroup": "ffdhe6144" 00:14:46.625 } 00:14:46.625 } 00:14:46.625 ]' 00:14:46.625 11:45:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:14:46.625 11:45:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:14:46.625 11:45:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:14:46.625 11:45:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:14:46.625 11:45:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:14:46.625 11:45:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:14:46.625 11:45:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:14:46.625 11:45:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:14:46.884 11:45:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:MGUzOTZhM2U3Y2FkNmNjYjU1YmU5MWQ0ZjQzZWRiM2M3MWY2YzcwY2M3MzMwNjdiPeXDPw==: --dhchap-ctrl-secret DHHC-1:03:NGM0MGM1ZGU2ODA1ZWRjYjdlYzA4YmRiMTk1MTBmMWRmYmY3Y2VhMGYxMzJlOWZhMzM5ZDdmNTM1NzkzYTA3Ze5j/1s=: 00:14:46.884 11:45:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:f820f793-c892-4aa4-a8a4-5ed3fda41d6c --hostid f820f793-c892-4aa4-a8a4-5ed3fda41d6c -l 0 --dhchap-secret DHHC-1:00:MGUzOTZhM2U3Y2FkNmNjYjU1YmU5MWQ0ZjQzZWRiM2M3MWY2YzcwY2M3MzMwNjdiPeXDPw==: --dhchap-ctrl-secret DHHC-1:03:NGM0MGM1ZGU2ODA1ZWRjYjdlYzA4YmRiMTk1MTBmMWRmYmY3Y2VhMGYxMzJlOWZhMzM5ZDdmNTM1NzkzYTA3Ze5j/1s=: 00:14:47.450 11:45:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:14:47.450 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:14:47.450 11:45:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:f820f793-c892-4aa4-a8a4-5ed3fda41d6c 00:14:47.450 11:45:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:47.450 11:45:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:47.450 11:45:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:47.450 11:45:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:14:47.450 11:45:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:14:47.450 11:45:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:14:47.709 11:45:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe6144 1 00:14:47.709 11:45:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:14:47.709 11:45:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:14:47.709 11:45:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144 00:14:47.709 11:45:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:14:47.709 11:45:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:14:47.709 11:45:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:f820f793-c892-4aa4-a8a4-5ed3fda41d6c --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:14:47.709 11:45:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:47.709 11:45:17 
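Each pass also exercises the kernel initiator: the nvme_connect/nvme disconnect pairs seen throughout this trace hand the same printable DHHC-1 secrets to nvme-cli instead of going through the SPDK bdev layer. Isolated, that step looks roughly like the following (secret values copied from the trace; the generated /dev/nvme* device and any I/O against it are omitted):

    hostnqn=nqn.2014-08.org.nvmexpress:uuid:f820f793-c892-4aa4-a8a4-5ed3fda41d6c

    # Connect with the host secret and (for bidirectional auth) the controller secret.
    nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 \
        -q "$hostnqn" --hostid "${hostnqn#*uuid:}" -l 0 \
        --dhchap-secret 'DHHC-1:00:MGUzOTZhM2U3Y2FkNmNjYjU1YmU5MWQ0ZjQzZWRiM2M3MWY2YzcwY2M3MzMwNjdiPeXDPw==:' \
        --dhchap-ctrl-secret 'DHHC-1:03:NGM0MGM1ZGU2ODA1ZWRjYjdlYzA4YmRiMTk1MTBmMWRmYmY3Y2VhMGYxMzJlOWZhMzM5ZDdmNTM1NzkzYTA3Ze5j/1s=:'

    # Disconnect by subsystem NQN once the controller shows up.
    nvme disconnect -n nqn.2024-03.io.spdk:cnode0
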
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:47.968 11:45:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:47.968 11:45:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:14:47.968 11:45:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:f820f793-c892-4aa4-a8a4-5ed3fda41d6c -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:14:47.968 11:45:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:f820f793-c892-4aa4-a8a4-5ed3fda41d6c -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:14:48.227 00:14:48.227 11:45:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:14:48.227 11:45:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:14:48.227 11:45:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:14:48.487 11:45:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:14:48.487 11:45:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:14:48.487 11:45:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:48.487 11:45:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:48.487 11:45:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:48.487 11:45:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:14:48.487 { 00:14:48.487 "cntlid": 131, 00:14:48.487 "qid": 0, 00:14:48.487 "state": "enabled", 00:14:48.487 "thread": "nvmf_tgt_poll_group_000", 00:14:48.487 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:f820f793-c892-4aa4-a8a4-5ed3fda41d6c", 00:14:48.487 "listen_address": { 00:14:48.487 "trtype": "TCP", 00:14:48.487 "adrfam": "IPv4", 00:14:48.487 "traddr": "10.0.0.3", 00:14:48.487 "trsvcid": "4420" 00:14:48.487 }, 00:14:48.487 "peer_address": { 00:14:48.487 "trtype": "TCP", 00:14:48.487 "adrfam": "IPv4", 00:14:48.487 "traddr": "10.0.0.1", 00:14:48.487 "trsvcid": "56924" 00:14:48.487 }, 00:14:48.487 "auth": { 00:14:48.487 "state": "completed", 00:14:48.487 "digest": "sha512", 00:14:48.487 "dhgroup": "ffdhe6144" 00:14:48.487 } 00:14:48.487 } 00:14:48.487 ]' 00:14:48.487 11:45:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:14:48.746 11:45:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:14:48.746 11:45:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:14:48.746 11:45:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:14:48.746 11:45:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq 
-r '.[0].auth.state' 00:14:48.746 11:45:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:14:48.746 11:45:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:14:48.746 11:45:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:14:49.004 11:45:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:MGMwMWVjMzNiYTU1NmZmODEzYmM3YjMzYWExMWQzMGYhYOlP: --dhchap-ctrl-secret DHHC-1:02:MDAzYzhlYTVlNDRhZjJhOGY4YjFjMzdjMjM2ZTBjYjI4NzUyNWU1YzQ3ZmQ3MDM2ej39yA==: 00:14:49.005 11:45:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:f820f793-c892-4aa4-a8a4-5ed3fda41d6c --hostid f820f793-c892-4aa4-a8a4-5ed3fda41d6c -l 0 --dhchap-secret DHHC-1:01:MGMwMWVjMzNiYTU1NmZmODEzYmM3YjMzYWExMWQzMGYhYOlP: --dhchap-ctrl-secret DHHC-1:02:MDAzYzhlYTVlNDRhZjJhOGY4YjFjMzdjMjM2ZTBjYjI4NzUyNWU1YzQ3ZmQ3MDM2ej39yA==: 00:14:49.942 11:45:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:14:49.942 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:14:49.942 11:45:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:f820f793-c892-4aa4-a8a4-5ed3fda41d6c 00:14:49.942 11:45:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:49.942 11:45:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:49.942 11:45:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:49.942 11:45:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:14:49.942 11:45:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:14:49.942 11:45:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:14:49.942 11:45:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe6144 2 00:14:49.942 11:45:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:14:49.942 11:45:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:14:49.942 11:45:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144 00:14:49.942 11:45:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:14:49.942 11:45:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:14:49.942 11:45:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:f820f793-c892-4aa4-a8a4-5ed3fda41d6c --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:14:49.942 11:45:20 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:49.942 11:45:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:49.942 11:45:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:49.942 11:45:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:14:49.942 11:45:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:f820f793-c892-4aa4-a8a4-5ed3fda41d6c -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:14:49.942 11:45:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:f820f793-c892-4aa4-a8a4-5ed3fda41d6c -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:14:50.511 00:14:50.511 11:45:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:14:50.511 11:45:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:14:50.511 11:45:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:14:50.770 11:45:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:14:50.770 11:45:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:14:50.770 11:45:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:50.770 11:45:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:50.770 11:45:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:50.770 11:45:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:14:50.770 { 00:14:50.770 "cntlid": 133, 00:14:50.770 "qid": 0, 00:14:50.770 "state": "enabled", 00:14:50.770 "thread": "nvmf_tgt_poll_group_000", 00:14:50.770 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:f820f793-c892-4aa4-a8a4-5ed3fda41d6c", 00:14:50.770 "listen_address": { 00:14:50.770 "trtype": "TCP", 00:14:50.770 "adrfam": "IPv4", 00:14:50.770 "traddr": "10.0.0.3", 00:14:50.770 "trsvcid": "4420" 00:14:50.770 }, 00:14:50.770 "peer_address": { 00:14:50.770 "trtype": "TCP", 00:14:50.770 "adrfam": "IPv4", 00:14:50.770 "traddr": "10.0.0.1", 00:14:50.770 "trsvcid": "56950" 00:14:50.770 }, 00:14:50.770 "auth": { 00:14:50.770 "state": "completed", 00:14:50.770 "digest": "sha512", 00:14:50.770 "dhgroup": "ffdhe6144" 00:14:50.770 } 00:14:50.770 } 00:14:50.770 ]' 00:14:50.770 11:45:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:14:50.770 11:45:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:14:50.770 11:45:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:14:50.770 11:45:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 
== \f\f\d\h\e\6\1\4\4 ]] 00:14:50.770 11:45:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:14:51.029 11:45:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:14:51.029 11:45:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:14:51.029 11:45:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:14:51.288 11:45:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:ZDc0MDI5OTk2MzZlOGRhODljM2I0OGVhMDEwOTIwNjNjY2VkZmYxODI0YjViNDBi899zgg==: --dhchap-ctrl-secret DHHC-1:01:Y2Q0NDk1OGM5NThhODQ5OWRiN2MzYjhlYTczN2QxMGZ97e95: 00:14:51.289 11:45:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:f820f793-c892-4aa4-a8a4-5ed3fda41d6c --hostid f820f793-c892-4aa4-a8a4-5ed3fda41d6c -l 0 --dhchap-secret DHHC-1:02:ZDc0MDI5OTk2MzZlOGRhODljM2I0OGVhMDEwOTIwNjNjY2VkZmYxODI0YjViNDBi899zgg==: --dhchap-ctrl-secret DHHC-1:01:Y2Q0NDk1OGM5NThhODQ5OWRiN2MzYjhlYTczN2QxMGZ97e95: 00:14:51.857 11:45:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:14:51.857 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:14:51.857 11:45:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:f820f793-c892-4aa4-a8a4-5ed3fda41d6c 00:14:51.857 11:45:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:51.857 11:45:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:51.857 11:45:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:51.857 11:45:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:14:51.857 11:45:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:14:51.857 11:45:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:14:52.116 11:45:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe6144 3 00:14:52.116 11:45:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:14:52.116 11:45:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:14:52.116 11:45:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144 00:14:52.116 11:45:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:14:52.116 11:45:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:14:52.116 11:45:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 
nqn.2014-08.org.nvmexpress:uuid:f820f793-c892-4aa4-a8a4-5ed3fda41d6c --dhchap-key key3 00:14:52.116 11:45:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:52.116 11:45:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:52.116 11:45:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:52.116 11:45:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:14:52.117 11:45:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:f820f793-c892-4aa4-a8a4-5ed3fda41d6c -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:14:52.117 11:45:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:f820f793-c892-4aa4-a8a4-5ed3fda41d6c -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:14:52.684 00:14:52.684 11:45:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:14:52.684 11:45:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:14:52.684 11:45:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:14:52.943 11:45:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:14:52.943 11:45:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:14:52.943 11:45:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:52.943 11:45:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:52.943 11:45:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:52.943 11:45:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:14:52.943 { 00:14:52.943 "cntlid": 135, 00:14:52.943 "qid": 0, 00:14:52.943 "state": "enabled", 00:14:52.943 "thread": "nvmf_tgt_poll_group_000", 00:14:52.943 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:f820f793-c892-4aa4-a8a4-5ed3fda41d6c", 00:14:52.943 "listen_address": { 00:14:52.943 "trtype": "TCP", 00:14:52.943 "adrfam": "IPv4", 00:14:52.943 "traddr": "10.0.0.3", 00:14:52.943 "trsvcid": "4420" 00:14:52.943 }, 00:14:52.943 "peer_address": { 00:14:52.943 "trtype": "TCP", 00:14:52.943 "adrfam": "IPv4", 00:14:52.943 "traddr": "10.0.0.1", 00:14:52.943 "trsvcid": "37260" 00:14:52.943 }, 00:14:52.943 "auth": { 00:14:52.943 "state": "completed", 00:14:52.943 "digest": "sha512", 00:14:52.943 "dhgroup": "ffdhe6144" 00:14:52.943 } 00:14:52.943 } 00:14:52.943 ]' 00:14:52.943 11:45:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:14:52.943 11:45:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:14:52.943 11:45:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:14:52.943 11:45:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:14:52.943 11:45:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:14:53.203 11:45:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:14:53.203 11:45:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:14:53.203 11:45:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:14:53.462 11:45:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:ODlhMjcwNTk5OTE2MzM1NzA0MzY3MGI2Mzc1MzI1N2EyODZlZDZjYTJmODllODRhNTE3MzIzYTk3MDU4ODhhNgTRqT0=: 00:14:53.462 11:45:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:f820f793-c892-4aa4-a8a4-5ed3fda41d6c --hostid f820f793-c892-4aa4-a8a4-5ed3fda41d6c -l 0 --dhchap-secret DHHC-1:03:ODlhMjcwNTk5OTE2MzM1NzA0MzY3MGI2Mzc1MzI1N2EyODZlZDZjYTJmODllODRhNTE3MzIzYTk3MDU4ODhhNgTRqT0=: 00:14:54.035 11:45:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:14:54.035 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:14:54.035 11:45:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:f820f793-c892-4aa4-a8a4-5ed3fda41d6c 00:14:54.035 11:45:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:54.035 11:45:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:54.035 11:45:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:54.035 11:45:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:14:54.035 11:45:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:14:54.035 11:45:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:14:54.035 11:45:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:14:54.295 11:45:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe8192 0 00:14:54.295 11:45:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:14:54.295 11:45:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:14:54.295 11:45:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:14:54.295 11:45:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:14:54.295 11:45:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:14:54.295 11:45:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host 
nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:f820f793-c892-4aa4-a8a4-5ed3fda41d6c --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:14:54.295 11:45:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:54.295 11:45:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:54.295 11:45:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:54.295 11:45:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:14:54.295 11:45:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:f820f793-c892-4aa4-a8a4-5ed3fda41d6c -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:14:54.295 11:45:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:f820f793-c892-4aa4-a8a4-5ed3fda41d6c -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:14:54.865 00:14:54.865 11:45:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:14:54.865 11:45:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:14:54.865 11:45:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:14:55.430 11:45:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:14:55.430 11:45:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:14:55.430 11:45:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:55.430 11:45:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:55.430 11:45:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:55.430 11:45:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:14:55.430 { 00:14:55.430 "cntlid": 137, 00:14:55.430 "qid": 0, 00:14:55.430 "state": "enabled", 00:14:55.430 "thread": "nvmf_tgt_poll_group_000", 00:14:55.430 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:f820f793-c892-4aa4-a8a4-5ed3fda41d6c", 00:14:55.430 "listen_address": { 00:14:55.430 "trtype": "TCP", 00:14:55.430 "adrfam": "IPv4", 00:14:55.430 "traddr": "10.0.0.3", 00:14:55.430 "trsvcid": "4420" 00:14:55.430 }, 00:14:55.430 "peer_address": { 00:14:55.430 "trtype": "TCP", 00:14:55.430 "adrfam": "IPv4", 00:14:55.430 "traddr": "10.0.0.1", 00:14:55.430 "trsvcid": "37278" 00:14:55.430 }, 00:14:55.430 "auth": { 00:14:55.430 "state": "completed", 00:14:55.430 "digest": "sha512", 00:14:55.430 "dhgroup": "ffdhe8192" 00:14:55.430 } 00:14:55.430 } 00:14:55.430 ]' 00:14:55.430 11:45:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:14:55.430 11:45:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:14:55.430 11:45:25 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:14:55.430 11:45:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:14:55.430 11:45:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:14:55.430 11:45:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:14:55.430 11:45:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:14:55.430 11:45:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:14:55.689 11:45:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:MGUzOTZhM2U3Y2FkNmNjYjU1YmU5MWQ0ZjQzZWRiM2M3MWY2YzcwY2M3MzMwNjdiPeXDPw==: --dhchap-ctrl-secret DHHC-1:03:NGM0MGM1ZGU2ODA1ZWRjYjdlYzA4YmRiMTk1MTBmMWRmYmY3Y2VhMGYxMzJlOWZhMzM5ZDdmNTM1NzkzYTA3Ze5j/1s=: 00:14:55.689 11:45:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:f820f793-c892-4aa4-a8a4-5ed3fda41d6c --hostid f820f793-c892-4aa4-a8a4-5ed3fda41d6c -l 0 --dhchap-secret DHHC-1:00:MGUzOTZhM2U3Y2FkNmNjYjU1YmU5MWQ0ZjQzZWRiM2M3MWY2YzcwY2M3MzMwNjdiPeXDPw==: --dhchap-ctrl-secret DHHC-1:03:NGM0MGM1ZGU2ODA1ZWRjYjdlYzA4YmRiMTk1MTBmMWRmYmY3Y2VhMGYxMzJlOWZhMzM5ZDdmNTM1NzkzYTA3Ze5j/1s=: 00:14:56.627 11:45:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:14:56.627 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:14:56.627 11:45:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:f820f793-c892-4aa4-a8a4-5ed3fda41d6c 00:14:56.627 11:45:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:56.628 11:45:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:56.628 11:45:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:56.628 11:45:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:14:56.628 11:45:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:14:56.628 11:45:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:14:56.887 11:45:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe8192 1 00:14:56.887 11:45:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:14:56.887 11:45:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:14:56.887 11:45:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:14:56.887 11:45:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:14:56.887 11:45:26 
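A side note on the secret strings themselves, which is background knowledge rather than something stated in this trace: the printable form used here is DHHC-1:<hash id>:<base64 key material>:, where the middle field records how the key was generated (00 for an unhashed secret, 01/02/03 for SHA-256/384/512), which is why the four test keys key0..key3 carry ids 00..03. A trivial way to split one of the trace's secrets into those fields:

    # Splitting a DHHC-1 secret from the trace into its three colon-separated fields.
    secret='DHHC-1:03:ODlhMjcwNTk5OTE2MzM1NzA0MzY3MGI2Mzc1MzI1N2EyODZlZDZjYTJmODllODRhNTE3MzIzYTk3MDU4ODhhNgTRqT0=:'
    IFS=: read -r prefix hash_id keymat _ <<< "$secret"
    # decoded_bytes is simply the length of the base64 payload once decoded.
    printf 'format=%s hash_id=%s decoded_bytes=%s\n' \
        "$prefix" "$hash_id" "$(printf '%s' "$keymat" | base64 -d | wc -c)"
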
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:14:56.887 11:45:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:f820f793-c892-4aa4-a8a4-5ed3fda41d6c --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:14:56.887 11:45:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:56.887 11:45:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:56.887 11:45:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:56.887 11:45:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:14:56.887 11:45:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:f820f793-c892-4aa4-a8a4-5ed3fda41d6c -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:14:56.887 11:45:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:f820f793-c892-4aa4-a8a4-5ed3fda41d6c -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:14:57.456 00:14:57.456 11:45:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:14:57.456 11:45:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:14:57.456 11:45:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:14:57.715 11:45:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:14:57.715 11:45:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:14:57.715 11:45:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:57.715 11:45:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:57.715 11:45:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:57.715 11:45:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:14:57.715 { 00:14:57.715 "cntlid": 139, 00:14:57.715 "qid": 0, 00:14:57.716 "state": "enabled", 00:14:57.716 "thread": "nvmf_tgt_poll_group_000", 00:14:57.716 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:f820f793-c892-4aa4-a8a4-5ed3fda41d6c", 00:14:57.716 "listen_address": { 00:14:57.716 "trtype": "TCP", 00:14:57.716 "adrfam": "IPv4", 00:14:57.716 "traddr": "10.0.0.3", 00:14:57.716 "trsvcid": "4420" 00:14:57.716 }, 00:14:57.716 "peer_address": { 00:14:57.716 "trtype": "TCP", 00:14:57.716 "adrfam": "IPv4", 00:14:57.716 "traddr": "10.0.0.1", 00:14:57.716 "trsvcid": "37306" 00:14:57.716 }, 00:14:57.716 "auth": { 00:14:57.716 "state": "completed", 00:14:57.716 "digest": "sha512", 00:14:57.716 "dhgroup": "ffdhe8192" 00:14:57.716 } 00:14:57.716 } 00:14:57.716 ]' 00:14:57.716 11:45:27 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:14:57.716 11:45:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:14:57.716 11:45:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:14:57.974 11:45:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:14:57.974 11:45:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:14:57.974 11:45:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:14:57.974 11:45:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:14:57.974 11:45:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:14:58.233 11:45:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:MGMwMWVjMzNiYTU1NmZmODEzYmM3YjMzYWExMWQzMGYhYOlP: --dhchap-ctrl-secret DHHC-1:02:MDAzYzhlYTVlNDRhZjJhOGY4YjFjMzdjMjM2ZTBjYjI4NzUyNWU1YzQ3ZmQ3MDM2ej39yA==: 00:14:58.233 11:45:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:f820f793-c892-4aa4-a8a4-5ed3fda41d6c --hostid f820f793-c892-4aa4-a8a4-5ed3fda41d6c -l 0 --dhchap-secret DHHC-1:01:MGMwMWVjMzNiYTU1NmZmODEzYmM3YjMzYWExMWQzMGYhYOlP: --dhchap-ctrl-secret DHHC-1:02:MDAzYzhlYTVlNDRhZjJhOGY4YjFjMzdjMjM2ZTBjYjI4NzUyNWU1YzQ3ZmQ3MDM2ej39yA==: 00:14:58.800 11:45:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:14:58.800 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:14:58.800 11:45:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:f820f793-c892-4aa4-a8a4-5ed3fda41d6c 00:14:58.800 11:45:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:58.800 11:45:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:58.800 11:45:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:58.800 11:45:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:14:58.800 11:45:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:14:58.800 11:45:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:14:59.059 11:45:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe8192 2 00:14:59.059 11:45:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:14:59.059 11:45:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:14:59.059 11:45:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # 
dhgroup=ffdhe8192 00:14:59.059 11:45:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:14:59.059 11:45:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:14:59.059 11:45:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:f820f793-c892-4aa4-a8a4-5ed3fda41d6c --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:14:59.059 11:45:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:59.059 11:45:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:59.059 11:45:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:59.059 11:45:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:14:59.059 11:45:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:f820f793-c892-4aa4-a8a4-5ed3fda41d6c -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:14:59.059 11:45:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:f820f793-c892-4aa4-a8a4-5ed3fda41d6c -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:14:59.625 00:14:59.625 11:45:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:14:59.625 11:45:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:14:59.625 11:45:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:15:00.193 11:45:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:00.193 11:45:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:15:00.193 11:45:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:00.193 11:45:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:00.193 11:45:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:00.193 11:45:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:15:00.193 { 00:15:00.193 "cntlid": 141, 00:15:00.193 "qid": 0, 00:15:00.193 "state": "enabled", 00:15:00.193 "thread": "nvmf_tgt_poll_group_000", 00:15:00.193 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:f820f793-c892-4aa4-a8a4-5ed3fda41d6c", 00:15:00.193 "listen_address": { 00:15:00.193 "trtype": "TCP", 00:15:00.193 "adrfam": "IPv4", 00:15:00.193 "traddr": "10.0.0.3", 00:15:00.193 "trsvcid": "4420" 00:15:00.193 }, 00:15:00.193 "peer_address": { 00:15:00.193 "trtype": "TCP", 00:15:00.193 "adrfam": "IPv4", 00:15:00.193 "traddr": "10.0.0.1", 00:15:00.193 "trsvcid": "37348" 00:15:00.193 }, 00:15:00.193 "auth": { 00:15:00.193 "state": "completed", 00:15:00.193 "digest": 
"sha512", 00:15:00.193 "dhgroup": "ffdhe8192" 00:15:00.193 } 00:15:00.193 } 00:15:00.193 ]' 00:15:00.193 11:45:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:15:00.193 11:45:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:15:00.193 11:45:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:15:00.193 11:45:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:15:00.193 11:45:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:15:00.193 11:45:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:15:00.193 11:45:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:15:00.193 11:45:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:15:00.452 11:45:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:ZDc0MDI5OTk2MzZlOGRhODljM2I0OGVhMDEwOTIwNjNjY2VkZmYxODI0YjViNDBi899zgg==: --dhchap-ctrl-secret DHHC-1:01:Y2Q0NDk1OGM5NThhODQ5OWRiN2MzYjhlYTczN2QxMGZ97e95: 00:15:00.452 11:45:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:f820f793-c892-4aa4-a8a4-5ed3fda41d6c --hostid f820f793-c892-4aa4-a8a4-5ed3fda41d6c -l 0 --dhchap-secret DHHC-1:02:ZDc0MDI5OTk2MzZlOGRhODljM2I0OGVhMDEwOTIwNjNjY2VkZmYxODI0YjViNDBi899zgg==: --dhchap-ctrl-secret DHHC-1:01:Y2Q0NDk1OGM5NThhODQ5OWRiN2MzYjhlYTczN2QxMGZ97e95: 00:15:01.388 11:45:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:15:01.388 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:15:01.388 11:45:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:f820f793-c892-4aa4-a8a4-5ed3fda41d6c 00:15:01.388 11:45:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:01.388 11:45:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:01.388 11:45:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:01.388 11:45:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:15:01.388 11:45:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:15:01.388 11:45:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:15:01.659 11:45:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe8192 3 00:15:01.659 11:45:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:15:01.659 11:45:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 
-- # digest=sha512 00:15:01.659 11:45:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:15:01.659 11:45:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:15:01.659 11:45:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:15:01.659 11:45:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:f820f793-c892-4aa4-a8a4-5ed3fda41d6c --dhchap-key key3 00:15:01.659 11:45:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:01.659 11:45:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:01.659 11:45:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:01.659 11:45:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:15:01.659 11:45:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:f820f793-c892-4aa4-a8a4-5ed3fda41d6c -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:15:01.659 11:45:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:f820f793-c892-4aa4-a8a4-5ed3fda41d6c -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:15:02.238 00:15:02.238 11:45:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:15:02.238 11:45:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:15:02.238 11:45:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:15:02.498 11:45:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:02.498 11:45:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:15:02.498 11:45:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:02.498 11:45:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:02.498 11:45:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:02.498 11:45:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:15:02.498 { 00:15:02.498 "cntlid": 143, 00:15:02.498 "qid": 0, 00:15:02.498 "state": "enabled", 00:15:02.498 "thread": "nvmf_tgt_poll_group_000", 00:15:02.498 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:f820f793-c892-4aa4-a8a4-5ed3fda41d6c", 00:15:02.498 "listen_address": { 00:15:02.498 "trtype": "TCP", 00:15:02.498 "adrfam": "IPv4", 00:15:02.498 "traddr": "10.0.0.3", 00:15:02.498 "trsvcid": "4420" 00:15:02.498 }, 00:15:02.498 "peer_address": { 00:15:02.498 "trtype": "TCP", 00:15:02.498 "adrfam": "IPv4", 00:15:02.498 "traddr": "10.0.0.1", 00:15:02.498 "trsvcid": "60062" 00:15:02.498 }, 00:15:02.498 "auth": { 00:15:02.498 "state": "completed", 00:15:02.498 
"digest": "sha512", 00:15:02.498 "dhgroup": "ffdhe8192" 00:15:02.498 } 00:15:02.498 } 00:15:02.498 ]' 00:15:02.498 11:45:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:15:02.498 11:45:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:15:02.498 11:45:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:15:02.498 11:45:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:15:02.498 11:45:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:15:02.757 11:45:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:15:02.757 11:45:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:15:02.757 11:45:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:15:03.016 11:45:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:ODlhMjcwNTk5OTE2MzM1NzA0MzY3MGI2Mzc1MzI1N2EyODZlZDZjYTJmODllODRhNTE3MzIzYTk3MDU4ODhhNgTRqT0=: 00:15:03.016 11:45:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:f820f793-c892-4aa4-a8a4-5ed3fda41d6c --hostid f820f793-c892-4aa4-a8a4-5ed3fda41d6c -l 0 --dhchap-secret DHHC-1:03:ODlhMjcwNTk5OTE2MzM1NzA0MzY3MGI2Mzc1MzI1N2EyODZlZDZjYTJmODllODRhNTE3MzIzYTk3MDU4ODhhNgTRqT0=: 00:15:03.584 11:45:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:15:03.584 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:15:03.584 11:45:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:f820f793-c892-4aa4-a8a4-5ed3fda41d6c 00:15:03.584 11:45:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:03.584 11:45:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:03.584 11:45:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:03.584 11:45:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@129 -- # IFS=, 00:15:03.584 11:45:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@130 -- # printf %s sha256,sha384,sha512 00:15:03.584 11:45:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@129 -- # IFS=, 00:15:03.584 11:45:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@130 -- # printf %s null,ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:15:03.584 11:45:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@129 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256,sha384,sha512 --dhchap-dhgroups null,ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:15:03.584 11:45:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256,sha384,sha512 --dhchap-dhgroups 
null,ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:15:03.843 11:45:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@141 -- # connect_authenticate sha512 ffdhe8192 0 00:15:03.843 11:45:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:15:03.843 11:45:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:15:03.843 11:45:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:15:03.843 11:45:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:15:03.843 11:45:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:15:03.843 11:45:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:f820f793-c892-4aa4-a8a4-5ed3fda41d6c --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:15:03.843 11:45:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:03.843 11:45:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:03.843 11:45:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:03.843 11:45:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:15:03.843 11:45:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:f820f793-c892-4aa4-a8a4-5ed3fda41d6c -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:15:03.843 11:45:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:f820f793-c892-4aa4-a8a4-5ed3fda41d6c -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:15:04.411 00:15:04.411 11:45:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:15:04.411 11:45:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:15:04.411 11:45:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:15:04.980 11:45:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:04.980 11:45:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:15:04.980 11:45:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:04.980 11:45:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:04.980 11:45:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:04.980 11:45:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:15:04.980 { 00:15:04.980 "cntlid": 145, 00:15:04.980 "qid": 0, 00:15:04.980 "state": "enabled", 00:15:04.980 "thread": "nvmf_tgt_poll_group_000", 00:15:04.980 
"hostnqn": "nqn.2014-08.org.nvmexpress:uuid:f820f793-c892-4aa4-a8a4-5ed3fda41d6c", 00:15:04.980 "listen_address": { 00:15:04.980 "trtype": "TCP", 00:15:04.980 "adrfam": "IPv4", 00:15:04.980 "traddr": "10.0.0.3", 00:15:04.980 "trsvcid": "4420" 00:15:04.980 }, 00:15:04.980 "peer_address": { 00:15:04.980 "trtype": "TCP", 00:15:04.980 "adrfam": "IPv4", 00:15:04.980 "traddr": "10.0.0.1", 00:15:04.980 "trsvcid": "60072" 00:15:04.980 }, 00:15:04.980 "auth": { 00:15:04.980 "state": "completed", 00:15:04.980 "digest": "sha512", 00:15:04.980 "dhgroup": "ffdhe8192" 00:15:04.980 } 00:15:04.980 } 00:15:04.980 ]' 00:15:04.980 11:45:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:15:04.980 11:45:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:15:04.980 11:45:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:15:04.980 11:45:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:15:04.980 11:45:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:15:04.980 11:45:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:15:04.980 11:45:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:15:04.980 11:45:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:15:05.238 11:45:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:MGUzOTZhM2U3Y2FkNmNjYjU1YmU5MWQ0ZjQzZWRiM2M3MWY2YzcwY2M3MzMwNjdiPeXDPw==: --dhchap-ctrl-secret DHHC-1:03:NGM0MGM1ZGU2ODA1ZWRjYjdlYzA4YmRiMTk1MTBmMWRmYmY3Y2VhMGYxMzJlOWZhMzM5ZDdmNTM1NzkzYTA3Ze5j/1s=: 00:15:05.238 11:45:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:f820f793-c892-4aa4-a8a4-5ed3fda41d6c --hostid f820f793-c892-4aa4-a8a4-5ed3fda41d6c -l 0 --dhchap-secret DHHC-1:00:MGUzOTZhM2U3Y2FkNmNjYjU1YmU5MWQ0ZjQzZWRiM2M3MWY2YzcwY2M3MzMwNjdiPeXDPw==: --dhchap-ctrl-secret DHHC-1:03:NGM0MGM1ZGU2ODA1ZWRjYjdlYzA4YmRiMTk1MTBmMWRmYmY3Y2VhMGYxMzJlOWZhMzM5ZDdmNTM1NzkzYTA3Ze5j/1s=: 00:15:06.174 11:45:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:15:06.174 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:15:06.174 11:45:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:f820f793-c892-4aa4-a8a4-5ed3fda41d6c 00:15:06.174 11:45:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:06.174 11:45:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:06.174 11:45:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:06.174 11:45:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@144 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:f820f793-c892-4aa4-a8a4-5ed3fda41d6c --dhchap-key key1 00:15:06.174 11:45:36 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:06.174 11:45:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:06.174 11:45:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:06.174 11:45:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@145 -- # NOT bdev_connect -b nvme0 --dhchap-key key2 00:15:06.174 11:45:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@652 -- # local es=0 00:15:06.174 11:45:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@654 -- # valid_exec_arg bdev_connect -b nvme0 --dhchap-key key2 00:15:06.174 11:45:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@640 -- # local arg=bdev_connect 00:15:06.174 11:45:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:15:06.174 11:45:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # type -t bdev_connect 00:15:06.174 11:45:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:15:06.174 11:45:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # bdev_connect -b nvme0 --dhchap-key key2 00:15:06.174 11:45:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:f820f793-c892-4aa4-a8a4-5ed3fda41d6c -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 00:15:06.174 11:45:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:f820f793-c892-4aa4-a8a4-5ed3fda41d6c -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 00:15:06.740 request: 00:15:06.740 { 00:15:06.740 "name": "nvme0", 00:15:06.740 "trtype": "tcp", 00:15:06.740 "traddr": "10.0.0.3", 00:15:06.740 "adrfam": "ipv4", 00:15:06.740 "trsvcid": "4420", 00:15:06.740 "subnqn": "nqn.2024-03.io.spdk:cnode0", 00:15:06.740 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:f820f793-c892-4aa4-a8a4-5ed3fda41d6c", 00:15:06.740 "prchk_reftag": false, 00:15:06.740 "prchk_guard": false, 00:15:06.740 "hdgst": false, 00:15:06.740 "ddgst": false, 00:15:06.740 "dhchap_key": "key2", 00:15:06.740 "allow_unrecognized_csi": false, 00:15:06.740 "method": "bdev_nvme_attach_controller", 00:15:06.740 "req_id": 1 00:15:06.740 } 00:15:06.740 Got JSON-RPC error response 00:15:06.740 response: 00:15:06.740 { 00:15:06.740 "code": -5, 00:15:06.740 "message": "Input/output error" 00:15:06.740 } 00:15:06.740 11:45:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # es=1 00:15:06.740 11:45:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:15:06.740 11:45:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:15:06.740 11:45:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:15:06.740 11:45:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@146 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:f820f793-c892-4aa4-a8a4-5ed3fda41d6c 00:15:06.740 
11:45:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:06.740 11:45:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:06.740 11:45:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:06.740 11:45:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@149 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:f820f793-c892-4aa4-a8a4-5ed3fda41d6c --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:15:06.740 11:45:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:06.740 11:45:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:06.740 11:45:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:06.740 11:45:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@150 -- # NOT bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:15:06.740 11:45:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@652 -- # local es=0 00:15:06.740 11:45:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@654 -- # valid_exec_arg bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:15:06.740 11:45:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@640 -- # local arg=bdev_connect 00:15:06.740 11:45:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:15:06.740 11:45:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # type -t bdev_connect 00:15:06.740 11:45:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:15:06.740 11:45:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:15:06.740 11:45:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:f820f793-c892-4aa4-a8a4-5ed3fda41d6c -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:15:06.740 11:45:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:f820f793-c892-4aa4-a8a4-5ed3fda41d6c -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:15:07.305 request: 00:15:07.305 { 00:15:07.305 "name": "nvme0", 00:15:07.305 "trtype": "tcp", 00:15:07.305 "traddr": "10.0.0.3", 00:15:07.305 "adrfam": "ipv4", 00:15:07.305 "trsvcid": "4420", 00:15:07.305 "subnqn": "nqn.2024-03.io.spdk:cnode0", 00:15:07.305 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:f820f793-c892-4aa4-a8a4-5ed3fda41d6c", 00:15:07.305 "prchk_reftag": false, 00:15:07.305 "prchk_guard": false, 00:15:07.305 "hdgst": false, 00:15:07.305 "ddgst": false, 00:15:07.305 "dhchap_key": "key1", 00:15:07.305 "dhchap_ctrlr_key": "ckey2", 00:15:07.305 "allow_unrecognized_csi": false, 00:15:07.305 "method": "bdev_nvme_attach_controller", 00:15:07.305 "req_id": 1 00:15:07.305 } 00:15:07.305 Got JSON-RPC error response 00:15:07.305 response: 00:15:07.305 { 
00:15:07.305 "code": -5, 00:15:07.305 "message": "Input/output error" 00:15:07.305 } 00:15:07.305 11:45:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # es=1 00:15:07.305 11:45:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:15:07.305 11:45:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:15:07.305 11:45:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:15:07.305 11:45:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@151 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:f820f793-c892-4aa4-a8a4-5ed3fda41d6c 00:15:07.305 11:45:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:07.305 11:45:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:07.305 11:45:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:07.305 11:45:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@154 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:f820f793-c892-4aa4-a8a4-5ed3fda41d6c --dhchap-key key1 00:15:07.305 11:45:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:07.305 11:45:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:07.305 11:45:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:07.305 11:45:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@155 -- # NOT bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:15:07.305 11:45:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@652 -- # local es=0 00:15:07.305 11:45:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@654 -- # valid_exec_arg bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:15:07.305 11:45:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@640 -- # local arg=bdev_connect 00:15:07.305 11:45:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:15:07.305 11:45:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # type -t bdev_connect 00:15:07.305 11:45:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:15:07.305 11:45:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:15:07.305 11:45:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:f820f793-c892-4aa4-a8a4-5ed3fda41d6c -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:15:07.305 11:45:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:f820f793-c892-4aa4-a8a4-5ed3fda41d6c -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:15:08.242 
request: 00:15:08.242 { 00:15:08.242 "name": "nvme0", 00:15:08.242 "trtype": "tcp", 00:15:08.242 "traddr": "10.0.0.3", 00:15:08.242 "adrfam": "ipv4", 00:15:08.242 "trsvcid": "4420", 00:15:08.242 "subnqn": "nqn.2024-03.io.spdk:cnode0", 00:15:08.242 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:f820f793-c892-4aa4-a8a4-5ed3fda41d6c", 00:15:08.242 "prchk_reftag": false, 00:15:08.242 "prchk_guard": false, 00:15:08.242 "hdgst": false, 00:15:08.242 "ddgst": false, 00:15:08.242 "dhchap_key": "key1", 00:15:08.242 "dhchap_ctrlr_key": "ckey1", 00:15:08.242 "allow_unrecognized_csi": false, 00:15:08.242 "method": "bdev_nvme_attach_controller", 00:15:08.242 "req_id": 1 00:15:08.242 } 00:15:08.242 Got JSON-RPC error response 00:15:08.242 response: 00:15:08.242 { 00:15:08.242 "code": -5, 00:15:08.242 "message": "Input/output error" 00:15:08.242 } 00:15:08.242 11:45:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # es=1 00:15:08.242 11:45:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:15:08.242 11:45:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:15:08.242 11:45:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:15:08.242 11:45:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@156 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:f820f793-c892-4aa4-a8a4-5ed3fda41d6c 00:15:08.242 11:45:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:08.242 11:45:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:08.242 11:45:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:08.242 11:45:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@159 -- # killprocess 81346 00:15:08.242 11:45:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@954 -- # '[' -z 81346 ']' 00:15:08.242 11:45:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@958 -- # kill -0 81346 00:15:08.242 11:45:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@959 -- # uname 00:15:08.242 11:45:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:15:08.242 11:45:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 81346 00:15:08.242 killing process with pid 81346 00:15:08.242 11:45:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:15:08.242 11:45:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:15:08.242 11:45:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@972 -- # echo 'killing process with pid 81346' 00:15:08.242 11:45:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@973 -- # kill 81346 00:15:08.242 11:45:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@978 -- # wait 81346 00:15:08.242 11:45:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@160 -- # nvmfappstart --wait-for-rpc -L nvmf_auth 00:15:08.242 11:45:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:15:08.242 11:45:38 
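The NOT bdev_connect checks above are the negative half of the test: once the subsystem host entry is configured for key1, attaches that present key2, key1 with ckey2, or key1 with an unexpected ckey1 must all be rejected, and the trace shows each rejection surfacing as the JSON-RPC error with code -5, "Input/output error". A minimal way to assert that behaviour with the same RPC and sockets as above would be the sketch below; the shorthand variables are illustrative, not part of the script.

# Expected-failure check mirroring the NOT bdev_connect steps in the trace:
# presenting a key the subsystem host entry was not configured with must fail,
# and rpc.py exits non-zero after the Input/output error (-5) shown above.
rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
subnqn=nqn.2024-03.io.spdk:cnode0
hostnqn=nqn.2014-08.org.nvmexpress:uuid:f820f793-c892-4aa4-a8a4-5ed3fda41d6c

if "$rpc" -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 \
        -a 10.0.0.3 -s 4420 -q "$hostnqn" -n "$subnqn" -b nvme0 --dhchap-key key2; then
    echo "FAIL: attach with the wrong DH-HMAC-CHAP key was accepted" >&2
    exit 1
fi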
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@726 -- # xtrace_disable 00:15:08.242 11:45:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:08.242 11:45:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@509 -- # nvmfpid=84457 00:15:08.242 11:45:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@510 -- # waitforlisten 84457 00:15:08.242 11:45:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@835 -- # '[' -z 84457 ']' 00:15:08.242 11:45:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:15:08.242 11:45:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@840 -- # local max_retries=100 00:15:08.242 11:45:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@508 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --wait-for-rpc -L nvmf_auth 00:15:08.242 11:45:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:15:08.242 11:45:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@844 -- # xtrace_disable 00:15:08.242 11:45:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:08.811 11:45:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:15:08.811 11:45:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@868 -- # return 0 00:15:08.811 11:45:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:15:08.811 11:45:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@732 -- # xtrace_disable 00:15:08.811 11:45:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:08.811 11:45:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:15:08.811 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:15:08.811 11:45:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@161 -- # trap 'dumplogs; cleanup' SIGINT SIGTERM EXIT 00:15:08.811 11:45:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@163 -- # waitforlisten 84457 00:15:08.811 11:45:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@835 -- # '[' -z 84457 ']' 00:15:08.811 11:45:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:15:08.811 11:45:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@840 -- # local max_retries=100 00:15:08.811 11:45:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
00:15:08.811 11:45:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@844 -- # xtrace_disable 00:15:08.811 11:45:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:09.070 11:45:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:15:09.070 11:45:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@868 -- # return 0 00:15:09.070 11:45:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@164 -- # rpc_cmd 00:15:09.070 11:45:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:09.070 11:45:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:09.070 null0 00:15:09.333 11:45:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:09.333 11:45:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@174 -- # for i in "${!keys[@]}" 00:15:09.333 11:45:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@175 -- # rpc_cmd keyring_file_add_key key0 /tmp/spdk.key-null.fbr 00:15:09.333 11:45:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:09.333 11:45:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:09.333 11:45:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:09.333 11:45:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@176 -- # [[ -n /tmp/spdk.key-sha512.lkB ]] 00:15:09.333 11:45:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@176 -- # rpc_cmd keyring_file_add_key ckey0 /tmp/spdk.key-sha512.lkB 00:15:09.333 11:45:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:09.333 11:45:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:09.333 11:45:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:09.333 11:45:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@174 -- # for i in "${!keys[@]}" 00:15:09.333 11:45:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@175 -- # rpc_cmd keyring_file_add_key key1 /tmp/spdk.key-sha256.tXx 00:15:09.333 11:45:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:09.333 11:45:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:09.333 11:45:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:09.333 11:45:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@176 -- # [[ -n /tmp/spdk.key-sha384.wn7 ]] 00:15:09.333 11:45:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@176 -- # rpc_cmd keyring_file_add_key ckey1 /tmp/spdk.key-sha384.wn7 00:15:09.333 11:45:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:09.333 11:45:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:09.333 11:45:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:09.333 11:45:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@174 -- # for i in "${!keys[@]}" 00:15:09.333 11:45:39 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@175 -- # rpc_cmd keyring_file_add_key key2 /tmp/spdk.key-sha384.wJK 00:15:09.333 11:45:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:09.333 11:45:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:09.333 11:45:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:09.333 11:45:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@176 -- # [[ -n /tmp/spdk.key-sha256.Aw7 ]] 00:15:09.333 11:45:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@176 -- # rpc_cmd keyring_file_add_key ckey2 /tmp/spdk.key-sha256.Aw7 00:15:09.333 11:45:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:09.333 11:45:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:09.333 11:45:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:09.333 11:45:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@174 -- # for i in "${!keys[@]}" 00:15:09.333 11:45:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@175 -- # rpc_cmd keyring_file_add_key key3 /tmp/spdk.key-sha512.cKe 00:15:09.333 11:45:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:09.333 11:45:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:09.333 11:45:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:09.333 11:45:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@176 -- # [[ -n '' ]] 00:15:09.333 11:45:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@179 -- # connect_authenticate sha512 ffdhe8192 3 00:15:09.333 11:45:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:15:09.333 11:45:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:15:09.333 11:45:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:15:09.333 11:45:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:15:09.333 11:45:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:15:09.333 11:45:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:f820f793-c892-4aa4-a8a4-5ed3fda41d6c --dhchap-key key3 00:15:09.333 11:45:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:09.333 11:45:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:09.333 11:45:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:09.333 11:45:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:15:09.333 11:45:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:f820f793-c892-4aa4-a8a4-5ed3fda41d6c -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 
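After the target is restarted with -L nvmf_auth, the key material is registered again before any host entries are created: each /tmp/spdk.key-* file generated earlier in the job is loaded into the keyring under the name (key0..key3, ckey0..ckey2) that the later nvmf_subsystem_add_host and attach calls refer to. Condensed from the keyring_file_add_key calls above, with the file names taken from this run; this is a summary of the trace, not the script's exact loop.

rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
# target side: register key files by name in the keyring
"$rpc" keyring_file_add_key key0  /tmp/spdk.key-null.fbr
"$rpc" keyring_file_add_key ckey0 /tmp/spdk.key-sha512.lkB
"$rpc" keyring_file_add_key key1  /tmp/spdk.key-sha256.tXx
"$rpc" keyring_file_add_key ckey1 /tmp/spdk.key-sha384.wn7
"$rpc" keyring_file_add_key key2  /tmp/spdk.key-sha384.wJK
"$rpc" keyring_file_add_key ckey2 /tmp/spdk.key-sha256.Aw7
"$rpc" keyring_file_add_key key3  /tmp/spdk.key-sha512.cKe   # no ckey3 in this run

# host entries then reference the keyring names, e.g. for key3:
"$rpc" nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 \
        nqn.2014-08.org.nvmexpress:uuid:f820f793-c892-4aa4-a8a4-5ed3fda41d6c \
        --dhchap-key key3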
00:15:09.333 11:45:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:f820f793-c892-4aa4-a8a4-5ed3fda41d6c -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:15:10.271 nvme0n1 00:15:10.271 11:45:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:15:10.271 11:45:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:15:10.271 11:45:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:15:10.529 11:45:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:10.529 11:45:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:15:10.529 11:45:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:10.529 11:45:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:10.529 11:45:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:10.529 11:45:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:15:10.529 { 00:15:10.529 "cntlid": 1, 00:15:10.529 "qid": 0, 00:15:10.529 "state": "enabled", 00:15:10.529 "thread": "nvmf_tgt_poll_group_000", 00:15:10.529 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:f820f793-c892-4aa4-a8a4-5ed3fda41d6c", 00:15:10.529 "listen_address": { 00:15:10.529 "trtype": "TCP", 00:15:10.530 "adrfam": "IPv4", 00:15:10.530 "traddr": "10.0.0.3", 00:15:10.530 "trsvcid": "4420" 00:15:10.530 }, 00:15:10.530 "peer_address": { 00:15:10.530 "trtype": "TCP", 00:15:10.530 "adrfam": "IPv4", 00:15:10.530 "traddr": "10.0.0.1", 00:15:10.530 "trsvcid": "60126" 00:15:10.530 }, 00:15:10.530 "auth": { 00:15:10.530 "state": "completed", 00:15:10.530 "digest": "sha512", 00:15:10.530 "dhgroup": "ffdhe8192" 00:15:10.530 } 00:15:10.530 } 00:15:10.530 ]' 00:15:10.530 11:45:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:15:10.530 11:45:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:15:10.530 11:45:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:15:10.788 11:45:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:15:10.788 11:45:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:15:10.788 11:45:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:15:10.788 11:45:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:15:10.788 11:45:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:15:11.058 11:45:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret 
DHHC-1:03:ODlhMjcwNTk5OTE2MzM1NzA0MzY3MGI2Mzc1MzI1N2EyODZlZDZjYTJmODllODRhNTE3MzIzYTk3MDU4ODhhNgTRqT0=: 00:15:11.058 11:45:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:f820f793-c892-4aa4-a8a4-5ed3fda41d6c --hostid f820f793-c892-4aa4-a8a4-5ed3fda41d6c -l 0 --dhchap-secret DHHC-1:03:ODlhMjcwNTk5OTE2MzM1NzA0MzY3MGI2Mzc1MzI1N2EyODZlZDZjYTJmODllODRhNTE3MzIzYTk3MDU4ODhhNgTRqT0=: 00:15:11.625 11:45:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:15:11.883 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:15:11.883 11:45:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:f820f793-c892-4aa4-a8a4-5ed3fda41d6c 00:15:11.883 11:45:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:11.883 11:45:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:11.883 11:45:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:11.883 11:45:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@182 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:f820f793-c892-4aa4-a8a4-5ed3fda41d6c --dhchap-key key3 00:15:11.883 11:45:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:11.883 11:45:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:11.883 11:45:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:11.883 11:45:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@183 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 00:15:11.883 11:45:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 00:15:12.143 11:45:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@184 -- # NOT bdev_connect -b nvme0 --dhchap-key key3 00:15:12.143 11:45:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@652 -- # local es=0 00:15:12.143 11:45:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@654 -- # valid_exec_arg bdev_connect -b nvme0 --dhchap-key key3 00:15:12.143 11:45:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@640 -- # local arg=bdev_connect 00:15:12.143 11:45:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:15:12.143 11:45:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # type -t bdev_connect 00:15:12.143 11:45:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:15:12.143 11:45:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # bdev_connect -b nvme0 --dhchap-key key3 00:15:12.143 11:45:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:f820f793-c892-4aa4-a8a4-5ed3fda41d6c -n 
nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:15:12.143 11:45:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:f820f793-c892-4aa4-a8a4-5ed3fda41d6c -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:15:12.402 request: 00:15:12.402 { 00:15:12.402 "name": "nvme0", 00:15:12.402 "trtype": "tcp", 00:15:12.402 "traddr": "10.0.0.3", 00:15:12.402 "adrfam": "ipv4", 00:15:12.402 "trsvcid": "4420", 00:15:12.402 "subnqn": "nqn.2024-03.io.spdk:cnode0", 00:15:12.402 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:f820f793-c892-4aa4-a8a4-5ed3fda41d6c", 00:15:12.402 "prchk_reftag": false, 00:15:12.402 "prchk_guard": false, 00:15:12.402 "hdgst": false, 00:15:12.402 "ddgst": false, 00:15:12.402 "dhchap_key": "key3", 00:15:12.402 "allow_unrecognized_csi": false, 00:15:12.402 "method": "bdev_nvme_attach_controller", 00:15:12.402 "req_id": 1 00:15:12.402 } 00:15:12.402 Got JSON-RPC error response 00:15:12.402 response: 00:15:12.402 { 00:15:12.402 "code": -5, 00:15:12.402 "message": "Input/output error" 00:15:12.402 } 00:15:12.402 11:45:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # es=1 00:15:12.402 11:45:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:15:12.402 11:45:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:15:12.402 11:45:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:15:12.402 11:45:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@187 -- # IFS=, 00:15:12.402 11:45:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@188 -- # printf %s sha256,sha384,sha512 00:15:12.402 11:45:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@187 -- # hostrpc bdev_nvme_set_options --dhchap-dhgroups ffdhe2048 --dhchap-digests sha256,sha384,sha512 00:15:12.402 11:45:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-dhgroups ffdhe2048 --dhchap-digests sha256,sha384,sha512 00:15:12.661 11:45:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@193 -- # NOT bdev_connect -b nvme0 --dhchap-key key3 00:15:12.661 11:45:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@652 -- # local es=0 00:15:12.661 11:45:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@654 -- # valid_exec_arg bdev_connect -b nvme0 --dhchap-key key3 00:15:12.661 11:45:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@640 -- # local arg=bdev_connect 00:15:12.661 11:45:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:15:12.661 11:45:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # type -t bdev_connect 00:15:12.661 11:45:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:15:12.661 11:45:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # bdev_connect -b nvme0 --dhchap-key key3 00:15:12.661 11:45:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 
10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:f820f793-c892-4aa4-a8a4-5ed3fda41d6c -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:15:12.661 11:45:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:f820f793-c892-4aa4-a8a4-5ed3fda41d6c -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:15:12.920 request: 00:15:12.920 { 00:15:12.920 "name": "nvme0", 00:15:12.920 "trtype": "tcp", 00:15:12.920 "traddr": "10.0.0.3", 00:15:12.920 "adrfam": "ipv4", 00:15:12.920 "trsvcid": "4420", 00:15:12.920 "subnqn": "nqn.2024-03.io.spdk:cnode0", 00:15:12.920 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:f820f793-c892-4aa4-a8a4-5ed3fda41d6c", 00:15:12.920 "prchk_reftag": false, 00:15:12.920 "prchk_guard": false, 00:15:12.920 "hdgst": false, 00:15:12.920 "ddgst": false, 00:15:12.920 "dhchap_key": "key3", 00:15:12.920 "allow_unrecognized_csi": false, 00:15:12.920 "method": "bdev_nvme_attach_controller", 00:15:12.920 "req_id": 1 00:15:12.920 } 00:15:12.920 Got JSON-RPC error response 00:15:12.920 response: 00:15:12.920 { 00:15:12.920 "code": -5, 00:15:12.920 "message": "Input/output error" 00:15:12.920 } 00:15:12.920 11:45:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # es=1 00:15:12.920 11:45:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:15:12.920 11:45:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:15:12.920 11:45:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:15:12.920 11:45:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@197 -- # IFS=, 00:15:12.920 11:45:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@198 -- # printf %s sha256,sha384,sha512 00:15:12.920 11:45:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@197 -- # IFS=, 00:15:12.920 11:45:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@198 -- # printf %s null,ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:15:12.920 11:45:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@197 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256,sha384,sha512 --dhchap-dhgroups null,ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:15:12.920 11:45:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256,sha384,sha512 --dhchap-dhgroups null,ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:15:13.489 11:45:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@208 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:f820f793-c892-4aa4-a8a4-5ed3fda41d6c 00:15:13.489 11:45:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:13.489 11:45:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:13.489 11:45:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:13.489 11:45:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@209 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 
nqn.2014-08.org.nvmexpress:uuid:f820f793-c892-4aa4-a8a4-5ed3fda41d6c 00:15:13.489 11:45:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:13.489 11:45:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:13.489 11:45:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:13.489 11:45:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@210 -- # NOT bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1 00:15:13.489 11:45:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@652 -- # local es=0 00:15:13.489 11:45:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@654 -- # valid_exec_arg bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1 00:15:13.489 11:45:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@640 -- # local arg=bdev_connect 00:15:13.489 11:45:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:15:13.489 11:45:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # type -t bdev_connect 00:15:13.489 11:45:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:15:13.489 11:45:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1 00:15:13.489 11:45:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:f820f793-c892-4aa4-a8a4-5ed3fda41d6c -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1 00:15:13.489 11:45:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:f820f793-c892-4aa4-a8a4-5ed3fda41d6c -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1 00:15:13.748 request: 00:15:13.748 { 00:15:13.748 "name": "nvme0", 00:15:13.748 "trtype": "tcp", 00:15:13.748 "traddr": "10.0.0.3", 00:15:13.748 "adrfam": "ipv4", 00:15:13.748 "trsvcid": "4420", 00:15:13.748 "subnqn": "nqn.2024-03.io.spdk:cnode0", 00:15:13.748 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:f820f793-c892-4aa4-a8a4-5ed3fda41d6c", 00:15:13.748 "prchk_reftag": false, 00:15:13.748 "prchk_guard": false, 00:15:13.748 "hdgst": false, 00:15:13.748 "ddgst": false, 00:15:13.748 "dhchap_key": "key0", 00:15:13.748 "dhchap_ctrlr_key": "key1", 00:15:13.748 "allow_unrecognized_csi": false, 00:15:13.748 "method": "bdev_nvme_attach_controller", 00:15:13.748 "req_id": 1 00:15:13.748 } 00:15:13.748 Got JSON-RPC error response 00:15:13.748 response: 00:15:13.748 { 00:15:13.748 "code": -5, 00:15:13.748 "message": "Input/output error" 00:15:13.748 } 00:15:13.748 11:45:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # es=1 00:15:13.748 11:45:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:15:13.748 11:45:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:15:13.748 11:45:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@679 -- # (( 
!es == 0 )) 00:15:13.748 11:45:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@213 -- # bdev_connect -b nvme0 --dhchap-key key0 00:15:13.748 11:45:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:f820f793-c892-4aa4-a8a4-5ed3fda41d6c -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 00:15:13.748 11:45:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:f820f793-c892-4aa4-a8a4-5ed3fda41d6c -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 00:15:14.007 nvme0n1 00:15:14.266 11:45:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@214 -- # hostrpc bdev_nvme_get_controllers 00:15:14.266 11:45:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:15:14.266 11:45:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@214 -- # jq -r '.[].name' 00:15:14.266 11:45:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@214 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:14.266 11:45:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@215 -- # hostrpc bdev_nvme_detach_controller nvme0 00:15:14.266 11:45:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:15:14.834 11:45:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@218 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:f820f793-c892-4aa4-a8a4-5ed3fda41d6c --dhchap-key key1 00:15:14.834 11:45:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:14.834 11:45:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:14.834 11:45:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:14.834 11:45:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@219 -- # bdev_connect -b nvme0 --dhchap-key key1 00:15:14.834 11:45:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:f820f793-c892-4aa4-a8a4-5ed3fda41d6c -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 00:15:14.834 11:45:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:f820f793-c892-4aa4-a8a4-5ed3fda41d6c -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 00:15:15.788 nvme0n1 00:15:15.788 11:45:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@220 -- # hostrpc bdev_nvme_get_controllers 00:15:15.788 11:45:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:15:15.788 11:45:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@220 -- # jq -r '.[].name' 00:15:16.047 11:45:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@220 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:16.048 11:45:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@222 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:f820f793-c892-4aa4-a8a4-5ed3fda41d6c --dhchap-key key2 --dhchap-ctrlr-key key3 00:15:16.048 11:45:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:16.048 11:45:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:16.048 11:45:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:16.048 11:45:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@223 -- # hostrpc bdev_nvme_get_controllers 00:15:16.048 11:45:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@223 -- # jq -r '.[].name' 00:15:16.048 11:45:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:15:16.306 11:45:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@223 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:16.306 11:45:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@225 -- # nvme_connect --dhchap-secret DHHC-1:02:ZDc0MDI5OTk2MzZlOGRhODljM2I0OGVhMDEwOTIwNjNjY2VkZmYxODI0YjViNDBi899zgg==: --dhchap-ctrl-secret DHHC-1:03:ODlhMjcwNTk5OTE2MzM1NzA0MzY3MGI2Mzc1MzI1N2EyODZlZDZjYTJmODllODRhNTE3MzIzYTk3MDU4ODhhNgTRqT0=: 00:15:16.306 11:45:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:f820f793-c892-4aa4-a8a4-5ed3fda41d6c --hostid f820f793-c892-4aa4-a8a4-5ed3fda41d6c -l 0 --dhchap-secret DHHC-1:02:ZDc0MDI5OTk2MzZlOGRhODljM2I0OGVhMDEwOTIwNjNjY2VkZmYxODI0YjViNDBi899zgg==: --dhchap-ctrl-secret DHHC-1:03:ODlhMjcwNTk5OTE2MzM1NzA0MzY3MGI2Mzc1MzI1N2EyODZlZDZjYTJmODllODRhNTE3MzIzYTk3MDU4ODhhNgTRqT0=: 00:15:16.874 11:45:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@226 -- # nvme_get_ctrlr 00:15:16.874 11:45:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@41 -- # local dev 00:15:16.874 11:45:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@43 -- # for dev in /sys/devices/virtual/nvme-fabrics/ctl/nvme* 00:15:16.874 11:45:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nqn.2024-03.io.spdk:cnode0 == \n\q\n\.\2\0\2\4\-\0\3\.\i\o\.\s\p\d\k\:\c\n\o\d\e\0 ]] 00:15:16.874 11:45:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # echo nvme0 00:15:16.874 11:45:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # break 00:15:16.874 11:45:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@226 -- # nctrlr=nvme0 00:15:16.874 11:45:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@227 -- # hostrpc bdev_nvme_detach_controller nvme0 00:15:16.874 11:45:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:15:17.133 11:45:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@228 -- # NOT bdev_connect -b nvme0 --dhchap-key key1 00:15:17.133 11:45:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@652 -- # local es=0 00:15:17.133 11:45:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
common/autotest_common.sh@654 -- # valid_exec_arg bdev_connect -b nvme0 --dhchap-key key1 00:15:17.133 11:45:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@640 -- # local arg=bdev_connect 00:15:17.133 11:45:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:15:17.133 11:45:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # type -t bdev_connect 00:15:17.133 11:45:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:15:17.133 11:45:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # bdev_connect -b nvme0 --dhchap-key key1 00:15:17.133 11:45:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:f820f793-c892-4aa4-a8a4-5ed3fda41d6c -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 00:15:17.133 11:45:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:f820f793-c892-4aa4-a8a4-5ed3fda41d6c -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 00:15:17.700 request: 00:15:17.700 { 00:15:17.700 "name": "nvme0", 00:15:17.700 "trtype": "tcp", 00:15:17.700 "traddr": "10.0.0.3", 00:15:17.700 "adrfam": "ipv4", 00:15:17.700 "trsvcid": "4420", 00:15:17.700 "subnqn": "nqn.2024-03.io.spdk:cnode0", 00:15:17.700 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:f820f793-c892-4aa4-a8a4-5ed3fda41d6c", 00:15:17.700 "prchk_reftag": false, 00:15:17.700 "prchk_guard": false, 00:15:17.700 "hdgst": false, 00:15:17.700 "ddgst": false, 00:15:17.700 "dhchap_key": "key1", 00:15:17.700 "allow_unrecognized_csi": false, 00:15:17.700 "method": "bdev_nvme_attach_controller", 00:15:17.700 "req_id": 1 00:15:17.700 } 00:15:17.700 Got JSON-RPC error response 00:15:17.700 response: 00:15:17.700 { 00:15:17.700 "code": -5, 00:15:17.700 "message": "Input/output error" 00:15:17.700 } 00:15:17.700 11:45:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # es=1 00:15:17.700 11:45:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:15:17.700 11:45:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:15:17.700 11:45:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:15:17.700 11:45:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@229 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key key3 00:15:17.700 11:45:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:f820f793-c892-4aa4-a8a4-5ed3fda41d6c -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key key3 00:15:17.700 11:45:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:f820f793-c892-4aa4-a8a4-5ed3fda41d6c -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key key3 00:15:18.636 nvme0n1 00:15:18.636 
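The records above exercise a DH-CHAP key rotation end to end: the target first narrows the keys it will accept for this host with nvmf_subsystem_set_keys, the host's attempt to re-attach with the now-disallowed key1 fails with JSON-RPC error -5 (Input/output error), and a re-attach with the matching key2/key3 pair succeeds and creates nvme0n1. A minimal sketch of the positive path, reusing the subsystem NQN, host NQN, address and key names from this run (rpc.py is the SPDK repo script; key2/key3 are assumed to be keyfile names already registered on both sides):

  # Target side: allow only key2 (host key) and key3 (controller key) for this host.
  scripts/rpc.py nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 \
      nqn.2014-08.org.nvmexpress:uuid:f820f793-c892-4aa4-a8a4-5ed3fda41d6c \
      --dhchap-key key2 --dhchap-ctrlr-key key3

  # Host side: re-attach over TCP with the same pair; on success the RPC prints the
  # bdev it created (nvme0n1 above), while a mismatched key fails with error -5.
  scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 \
      -a 10.0.0.3 -s 4420 \
      -q nqn.2014-08.org.nvmexpress:uuid:f820f793-c892-4aa4-a8a4-5ed3fda41d6c \
      -n nqn.2024-03.io.spdk:cnode0 -b nvme0 \
      --dhchap-key key2 --dhchap-ctrlr-key key3

The same credentials can also be handed to the kernel initiator as literal DHHC-1 secrets via nvme connect --dhchap-secret/--dhchap-ctrl-secret, as seen a few records earlier in this log.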
11:45:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@230 -- # hostrpc bdev_nvme_get_controllers 00:15:18.636 11:45:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@230 -- # jq -r '.[].name' 00:15:18.636 11:45:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:15:18.895 11:45:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@230 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:18.895 11:45:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@231 -- # hostrpc bdev_nvme_detach_controller nvme0 00:15:18.895 11:45:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:15:19.154 11:45:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@233 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:f820f793-c892-4aa4-a8a4-5ed3fda41d6c 00:15:19.154 11:45:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:19.154 11:45:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:19.154 11:45:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:19.154 11:45:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@234 -- # bdev_connect -b nvme0 00:15:19.154 11:45:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:f820f793-c892-4aa4-a8a4-5ed3fda41d6c -n nqn.2024-03.io.spdk:cnode0 -b nvme0 00:15:19.154 11:45:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:f820f793-c892-4aa4-a8a4-5ed3fda41d6c -n nqn.2024-03.io.spdk:cnode0 -b nvme0 00:15:19.723 nvme0n1 00:15:19.723 11:45:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@235 -- # hostrpc bdev_nvme_get_controllers 00:15:19.723 11:45:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:15:19.723 11:45:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@235 -- # jq -r '.[].name' 00:15:19.982 11:45:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@235 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:19.982 11:45:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@236 -- # hostrpc bdev_nvme_detach_controller nvme0 00:15:19.982 11:45:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:15:20.241 11:45:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@239 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:f820f793-c892-4aa4-a8a4-5ed3fda41d6c --dhchap-key key1 --dhchap-ctrlr-key key3 00:15:20.241 11:45:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:20.241 11:45:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:20.241 11:45:50 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:20.241 11:45:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@240 -- # nvme_set_keys nvme0 DHHC-1:01:MGMwMWVjMzNiYTU1NmZmODEzYmM3YjMzYWExMWQzMGYhYOlP: '' 2s 00:15:20.241 11:45:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # local ctl key ckey dev timeout 00:15:20.241 11:45:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@51 -- # ctl=nvme0 00:15:20.241 11:45:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@51 -- # key=DHHC-1:01:MGMwMWVjMzNiYTU1NmZmODEzYmM3YjMzYWExMWQzMGYhYOlP: 00:15:20.241 11:45:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@51 -- # ckey= 00:15:20.241 11:45:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@51 -- # timeout=2s 00:15:20.241 11:45:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # dev=/sys/devices/virtual/nvme-fabrics/ctl/nvme0 00:15:20.241 11:45:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@54 -- # [[ -z DHHC-1:01:MGMwMWVjMzNiYTU1NmZmODEzYmM3YjMzYWExMWQzMGYhYOlP: ]] 00:15:20.241 11:45:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@54 -- # echo DHHC-1:01:MGMwMWVjMzNiYTU1NmZmODEzYmM3YjMzYWExMWQzMGYhYOlP: 00:15:20.241 11:45:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # [[ -z '' ]] 00:15:20.241 11:45:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # [[ -z 2s ]] 00:15:20.241 11:45:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # sleep 2s 00:15:22.145 11:45:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@241 -- # waitforblk nvme0n1 00:15:22.145 11:45:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1239 -- # local i=0 00:15:22.145 11:45:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1240 -- # grep -q -w nvme0n1 00:15:22.145 11:45:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1240 -- # lsblk -l -o NAME 00:15:22.145 11:45:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1246 -- # lsblk -l -o NAME 00:15:22.145 11:45:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1246 -- # grep -q -w nvme0n1 00:15:22.145 11:45:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1250 -- # return 0 00:15:22.145 11:45:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@243 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:f820f793-c892-4aa4-a8a4-5ed3fda41d6c --dhchap-key key1 --dhchap-ctrlr-key key2 00:15:22.145 11:45:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:22.146 11:45:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:22.146 11:45:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:22.146 11:45:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@244 -- # nvme_set_keys nvme0 '' DHHC-1:02:ZDc0MDI5OTk2MzZlOGRhODljM2I0OGVhMDEwOTIwNjNjY2VkZmYxODI0YjViNDBi899zgg==: 2s 00:15:22.146 11:45:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # local ctl key ckey dev timeout 00:15:22.146 11:45:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@51 -- # ctl=nvme0 00:15:22.146 11:45:52 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@51 -- # key= 00:15:22.146 11:45:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@51 -- # ckey=DHHC-1:02:ZDc0MDI5OTk2MzZlOGRhODljM2I0OGVhMDEwOTIwNjNjY2VkZmYxODI0YjViNDBi899zgg==: 00:15:22.146 11:45:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@51 -- # timeout=2s 00:15:22.146 11:45:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # dev=/sys/devices/virtual/nvme-fabrics/ctl/nvme0 00:15:22.146 11:45:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@54 -- # [[ -z '' ]] 00:15:22.146 11:45:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # [[ -z DHHC-1:02:ZDc0MDI5OTk2MzZlOGRhODljM2I0OGVhMDEwOTIwNjNjY2VkZmYxODI0YjViNDBi899zgg==: ]] 00:15:22.146 11:45:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # echo DHHC-1:02:ZDc0MDI5OTk2MzZlOGRhODljM2I0OGVhMDEwOTIwNjNjY2VkZmYxODI0YjViNDBi899zgg==: 00:15:22.146 11:45:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # [[ -z 2s ]] 00:15:22.146 11:45:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # sleep 2s 00:15:24.710 11:45:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@245 -- # waitforblk nvme0n1 00:15:24.710 11:45:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1239 -- # local i=0 00:15:24.710 11:45:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1240 -- # lsblk -l -o NAME 00:15:24.710 11:45:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1240 -- # grep -q -w nvme0n1 00:15:24.710 11:45:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1246 -- # lsblk -l -o NAME 00:15:24.710 11:45:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1246 -- # grep -q -w nvme0n1 00:15:24.710 11:45:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1250 -- # return 0 00:15:24.710 11:45:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@246 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:15:24.710 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:15:24.710 11:45:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@249 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:f820f793-c892-4aa4-a8a4-5ed3fda41d6c --dhchap-key key0 --dhchap-ctrlr-key key1 00:15:24.710 11:45:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:24.710 11:45:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:24.710 11:45:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:24.710 11:45:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@250 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1 --ctrlr-loss-timeout-sec 1 --reconnect-delay-sec 1 00:15:24.710 11:45:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:f820f793-c892-4aa4-a8a4-5ed3fda41d6c -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1 --ctrlr-loss-timeout-sec 1 --reconnect-delay-sec 1 00:15:24.710 11:45:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s 
/var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:f820f793-c892-4aa4-a8a4-5ed3fda41d6c -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1 --ctrlr-loss-timeout-sec 1 --reconnect-delay-sec 1 00:15:25.277 nvme0n1 00:15:25.277 11:45:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@252 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:f820f793-c892-4aa4-a8a4-5ed3fda41d6c --dhchap-key key2 --dhchap-ctrlr-key key3 00:15:25.278 11:45:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:25.278 11:45:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:25.278 11:45:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:25.278 11:45:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@253 -- # hostrpc bdev_nvme_set_keys nvme0 --dhchap-key key2 --dhchap-ctrlr-key key3 00:15:25.278 11:45:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_keys nvme0 --dhchap-key key2 --dhchap-ctrlr-key key3 00:15:26.281 11:45:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@254 -- # hostrpc bdev_nvme_get_controllers 00:15:26.281 11:45:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:15:26.281 11:45:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@254 -- # jq -r '.[].name' 00:15:26.281 11:45:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@254 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:26.281 11:45:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@256 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:f820f793-c892-4aa4-a8a4-5ed3fda41d6c 00:15:26.281 11:45:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:26.281 11:45:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:26.539 11:45:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:26.539 11:45:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@257 -- # hostrpc bdev_nvme_set_keys nvme0 00:15:26.539 11:45:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_keys nvme0 00:15:26.798 11:45:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@258 -- # jq -r '.[].name' 00:15:26.798 11:45:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@258 -- # hostrpc bdev_nvme_get_controllers 00:15:26.798 11:45:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:15:27.058 11:45:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@258 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:27.058 11:45:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@260 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:f820f793-c892-4aa4-a8a4-5ed3fda41d6c --dhchap-key key2 --dhchap-ctrlr-key key3 00:15:27.058 11:45:57 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:27.058 11:45:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:27.058 11:45:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:27.058 11:45:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@261 -- # NOT hostrpc bdev_nvme_set_keys nvme0 --dhchap-key key1 --dhchap-ctrlr-key key3 00:15:27.058 11:45:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@652 -- # local es=0 00:15:27.058 11:45:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@654 -- # valid_exec_arg hostrpc bdev_nvme_set_keys nvme0 --dhchap-key key1 --dhchap-ctrlr-key key3 00:15:27.058 11:45:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@640 -- # local arg=hostrpc 00:15:27.058 11:45:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:15:27.058 11:45:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # type -t hostrpc 00:15:27.058 11:45:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:15:27.058 11:45:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # hostrpc bdev_nvme_set_keys nvme0 --dhchap-key key1 --dhchap-ctrlr-key key3 00:15:27.058 11:45:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_keys nvme0 --dhchap-key key1 --dhchap-ctrlr-key key3 00:15:27.626 request: 00:15:27.626 { 00:15:27.626 "name": "nvme0", 00:15:27.626 "dhchap_key": "key1", 00:15:27.626 "dhchap_ctrlr_key": "key3", 00:15:27.626 "method": "bdev_nvme_set_keys", 00:15:27.626 "req_id": 1 00:15:27.626 } 00:15:27.626 Got JSON-RPC error response 00:15:27.626 response: 00:15:27.626 { 00:15:27.626 "code": -13, 00:15:27.626 "message": "Permission denied" 00:15:27.626 } 00:15:27.626 11:45:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # es=1 00:15:27.626 11:45:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:15:27.626 11:45:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:15:27.626 11:45:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:15:27.626 11:45:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@262 -- # hostrpc bdev_nvme_get_controllers 00:15:27.626 11:45:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@262 -- # jq length 00:15:27.626 11:45:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:15:28.194 11:45:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@262 -- # (( 1 != 0 )) 00:15:28.194 11:45:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@263 -- # sleep 1s 00:15:29.131 11:45:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@262 -- # hostrpc bdev_nvme_get_controllers 00:15:29.131 11:45:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@262 -- # jq length 00:15:29.131 11:45:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:15:29.390 11:45:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@262 -- # (( 0 != 0 )) 00:15:29.390 11:45:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@267 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:f820f793-c892-4aa4-a8a4-5ed3fda41d6c --dhchap-key key0 --dhchap-ctrlr-key key1 00:15:29.390 11:45:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:29.390 11:45:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:29.390 11:45:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:29.390 11:45:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@268 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1 --ctrlr-loss-timeout-sec 1 --reconnect-delay-sec 1 00:15:29.390 11:45:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:f820f793-c892-4aa4-a8a4-5ed3fda41d6c -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1 --ctrlr-loss-timeout-sec 1 --reconnect-delay-sec 1 00:15:29.390 11:45:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:f820f793-c892-4aa4-a8a4-5ed3fda41d6c -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1 --ctrlr-loss-timeout-sec 1 --reconnect-delay-sec 1 00:15:30.328 nvme0n1 00:15:30.328 11:46:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@270 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:f820f793-c892-4aa4-a8a4-5ed3fda41d6c --dhchap-key key2 --dhchap-ctrlr-key key3 00:15:30.328 11:46:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:30.328 11:46:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:30.328 11:46:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:30.328 11:46:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@271 -- # NOT hostrpc bdev_nvme_set_keys nvme0 --dhchap-key key2 --dhchap-ctrlr-key key0 00:15:30.328 11:46:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@652 -- # local es=0 00:15:30.328 11:46:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@654 -- # valid_exec_arg hostrpc bdev_nvme_set_keys nvme0 --dhchap-key key2 --dhchap-ctrlr-key key0 00:15:30.328 11:46:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@640 -- # local arg=hostrpc 00:15:30.328 11:46:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:15:30.328 11:46:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # type -t hostrpc 00:15:30.328 11:46:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:15:30.328 11:46:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # hostrpc bdev_nvme_set_keys 
nvme0 --dhchap-key key2 --dhchap-ctrlr-key key0 00:15:30.328 11:46:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_keys nvme0 --dhchap-key key2 --dhchap-ctrlr-key key0 00:15:30.896 request: 00:15:30.896 { 00:15:30.896 "name": "nvme0", 00:15:30.896 "dhchap_key": "key2", 00:15:30.896 "dhchap_ctrlr_key": "key0", 00:15:30.896 "method": "bdev_nvme_set_keys", 00:15:30.896 "req_id": 1 00:15:30.896 } 00:15:30.896 Got JSON-RPC error response 00:15:30.896 response: 00:15:30.896 { 00:15:30.896 "code": -13, 00:15:30.896 "message": "Permission denied" 00:15:30.896 } 00:15:30.896 11:46:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # es=1 00:15:30.896 11:46:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:15:30.896 11:46:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:15:30.896 11:46:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:15:30.896 11:46:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@272 -- # hostrpc bdev_nvme_get_controllers 00:15:30.896 11:46:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@272 -- # jq length 00:15:30.896 11:46:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:15:31.154 11:46:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@272 -- # (( 1 != 0 )) 00:15:31.154 11:46:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@273 -- # sleep 1s 00:15:32.532 11:46:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@272 -- # hostrpc bdev_nvme_get_controllers 00:15:32.532 11:46:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@272 -- # jq length 00:15:32.532 11:46:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:15:32.532 11:46:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@272 -- # (( 0 != 0 )) 00:15:32.532 11:46:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@276 -- # trap - SIGINT SIGTERM EXIT 00:15:32.532 11:46:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@277 -- # cleanup 00:15:32.532 11:46:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@21 -- # killprocess 81378 00:15:32.532 11:46:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@954 -- # '[' -z 81378 ']' 00:15:32.532 11:46:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@958 -- # kill -0 81378 00:15:32.532 11:46:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@959 -- # uname 00:15:32.532 11:46:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:15:32.532 11:46:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 81378 00:15:32.532 killing process with pid 81378 00:15:32.532 11:46:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:15:32.532 11:46:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:15:32.532 11:46:02 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@972 -- # echo 'killing process with pid 81378' 00:15:32.532 11:46:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@973 -- # kill 81378 00:15:32.532 11:46:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@978 -- # wait 81378 00:15:33.101 11:46:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@22 -- # nvmftestfini 00:15:33.101 11:46:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@516 -- # nvmfcleanup 00:15:33.101 11:46:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@121 -- # sync 00:15:33.101 11:46:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:15:33.101 11:46:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@124 -- # set +e 00:15:33.101 11:46:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@125 -- # for i in {1..20} 00:15:33.101 11:46:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:15:33.101 rmmod nvme_tcp 00:15:33.101 rmmod nvme_fabrics 00:15:33.101 rmmod nvme_keyring 00:15:33.101 11:46:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:15:33.101 11:46:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@128 -- # set -e 00:15:33.101 11:46:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@129 -- # return 0 00:15:33.101 11:46:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@517 -- # '[' -n 84457 ']' 00:15:33.101 11:46:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@518 -- # killprocess 84457 00:15:33.101 11:46:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@954 -- # '[' -z 84457 ']' 00:15:33.101 11:46:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@958 -- # kill -0 84457 00:15:33.101 11:46:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@959 -- # uname 00:15:33.360 11:46:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:15:33.360 11:46:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 84457 00:15:33.361 killing process with pid 84457 00:15:33.361 11:46:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:15:33.361 11:46:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:15:33.361 11:46:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@972 -- # echo 'killing process with pid 84457' 00:15:33.361 11:46:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@973 -- # kill 84457 00:15:33.361 11:46:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@978 -- # wait 84457 00:15:33.619 11:46:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:15:33.619 11:46:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:15:33.620 11:46:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:15:33.620 11:46:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@297 -- # iptr 00:15:33.620 11:46:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@791 -- # iptables-save 
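With the authentication cases done, nvmftestfini unwinds the host: the target process (pid 84457 in this run) is killed, the nvme-tcp, nvme-fabrics and nvme-keyring modules are unloaded, and the firewall rules and veth topology set up for the test are removed. The iptables step, condensed from the surrounding records, is a filter-and-restore so that only the SPDK_NVMF-tagged rules are dropped and unrelated rules survive:

  # Rewrite the ruleset without the SPDK_NVMF entries added by the test harness.
  iptables-save | grep -v SPDK_NVMF | iptables-restore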
00:15:33.620 11:46:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:15:33.620 11:46:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@791 -- # iptables-restore 00:15:33.620 11:46:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@298 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:15:33.620 11:46:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@299 -- # nvmf_veth_fini 00:15:33.620 11:46:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@233 -- # ip link set nvmf_init_br nomaster 00:15:33.620 11:46:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@234 -- # ip link set nvmf_init_br2 nomaster 00:15:33.620 11:46:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@235 -- # ip link set nvmf_tgt_br nomaster 00:15:33.620 11:46:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@236 -- # ip link set nvmf_tgt_br2 nomaster 00:15:33.620 11:46:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@237 -- # ip link set nvmf_init_br down 00:15:33.620 11:46:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@238 -- # ip link set nvmf_init_br2 down 00:15:33.620 11:46:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@239 -- # ip link set nvmf_tgt_br down 00:15:33.620 11:46:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@240 -- # ip link set nvmf_tgt_br2 down 00:15:33.620 11:46:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@241 -- # ip link delete nvmf_br type bridge 00:15:33.620 11:46:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@242 -- # ip link delete nvmf_init_if 00:15:33.620 11:46:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@243 -- # ip link delete nvmf_init_if2 00:15:33.620 11:46:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@244 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:15:33.620 11:46:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@245 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:15:33.620 11:46:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@246 -- # remove_spdk_ns 00:15:33.620 11:46:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:15:33.620 11:46:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:15:33.620 11:46:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:15:33.879 11:46:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@300 -- # return 0 00:15:33.879 11:46:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@23 -- # rm -f /tmp/spdk.key-null.fbr /tmp/spdk.key-sha256.tXx /tmp/spdk.key-sha384.wJK /tmp/spdk.key-sha512.cKe /tmp/spdk.key-sha512.lkB /tmp/spdk.key-sha384.wn7 /tmp/spdk.key-sha256.Aw7 '' /home/vagrant/spdk_repo/spdk/../output/nvme-auth.log /home/vagrant/spdk_repo/spdk/../output/nvmf-auth.log 00:15:33.879 ************************************ 00:15:33.879 END TEST nvmf_auth_target 00:15:33.879 ************************************ 00:15:33.879 00:15:33.879 real 3m15.372s 00:15:33.879 user 7m46.263s 00:15:33.879 sys 0m31.543s 00:15:33.879 11:46:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1130 -- # xtrace_disable 00:15:33.879 11:46:03 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:33.879 11:46:03 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@39 -- # '[' tcp = tcp ']' 00:15:33.879 11:46:03 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@40 -- # run_test nvmf_bdevio_no_huge /home/vagrant/spdk_repo/spdk/test/nvmf/target/bdevio.sh --transport=tcp --no-hugepages 00:15:33.879 11:46:03 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:15:33.879 11:46:03 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:15:33.879 11:46:03 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:15:33.879 ************************************ 00:15:33.879 START TEST nvmf_bdevio_no_huge 00:15:33.879 ************************************ 00:15:33.879 11:46:03 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/bdevio.sh --transport=tcp --no-hugepages 00:15:33.879 * Looking for test storage... 00:15:33.879 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:15:33.879 11:46:03 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:15:33.879 11:46:03 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@1693 -- # lcov --version 00:15:33.879 11:46:03 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:15:34.139 11:46:04 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:15:34.139 11:46:04 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:15:34.139 11:46:04 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@333 -- # local ver1 ver1_l 00:15:34.139 11:46:04 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@334 -- # local ver2 ver2_l 00:15:34.139 11:46:04 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@336 -- # IFS=.-: 00:15:34.139 11:46:04 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@336 -- # read -ra ver1 00:15:34.139 11:46:04 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@337 -- # IFS=.-: 00:15:34.139 11:46:04 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@337 -- # read -ra ver2 00:15:34.139 11:46:04 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@338 -- # local 'op=<' 00:15:34.139 11:46:04 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@340 -- # ver1_l=2 00:15:34.139 11:46:04 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@341 -- # ver2_l=1 00:15:34.139 11:46:04 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:15:34.139 11:46:04 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@344 -- # case "$op" in 00:15:34.139 11:46:04 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@345 -- # : 1 00:15:34.139 11:46:04 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@364 -- # (( v = 0 )) 00:15:34.139 11:46:04 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:15:34.139 11:46:04 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@365 -- # decimal 1 00:15:34.139 11:46:04 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@353 -- # local d=1 00:15:34.139 11:46:04 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:15:34.139 11:46:04 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@355 -- # echo 1 00:15:34.139 11:46:04 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@365 -- # ver1[v]=1 00:15:34.139 11:46:04 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@366 -- # decimal 2 00:15:34.139 11:46:04 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@353 -- # local d=2 00:15:34.139 11:46:04 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:15:34.139 11:46:04 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@355 -- # echo 2 00:15:34.139 11:46:04 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@366 -- # ver2[v]=2 00:15:34.139 11:46:04 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:15:34.139 11:46:04 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:15:34.139 11:46:04 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@368 -- # return 0 00:15:34.139 11:46:04 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:15:34.139 11:46:04 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:15:34.139 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:34.139 --rc genhtml_branch_coverage=1 00:15:34.139 --rc genhtml_function_coverage=1 00:15:34.139 --rc genhtml_legend=1 00:15:34.139 --rc geninfo_all_blocks=1 00:15:34.139 --rc geninfo_unexecuted_blocks=1 00:15:34.139 00:15:34.139 ' 00:15:34.139 11:46:04 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:15:34.139 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:34.139 --rc genhtml_branch_coverage=1 00:15:34.139 --rc genhtml_function_coverage=1 00:15:34.139 --rc genhtml_legend=1 00:15:34.139 --rc geninfo_all_blocks=1 00:15:34.139 --rc geninfo_unexecuted_blocks=1 00:15:34.139 00:15:34.139 ' 00:15:34.139 11:46:04 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:15:34.140 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:34.140 --rc genhtml_branch_coverage=1 00:15:34.140 --rc genhtml_function_coverage=1 00:15:34.140 --rc genhtml_legend=1 00:15:34.140 --rc geninfo_all_blocks=1 00:15:34.140 --rc geninfo_unexecuted_blocks=1 00:15:34.140 00:15:34.140 ' 00:15:34.140 11:46:04 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:15:34.140 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:34.140 --rc genhtml_branch_coverage=1 00:15:34.140 --rc genhtml_function_coverage=1 00:15:34.140 --rc genhtml_legend=1 00:15:34.140 --rc geninfo_all_blocks=1 00:15:34.140 --rc geninfo_unexecuted_blocks=1 00:15:34.140 00:15:34.140 ' 00:15:34.140 11:46:04 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:15:34.140 
11:46:04 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@7 -- # uname -s 00:15:34.140 11:46:04 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:15:34.140 11:46:04 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:15:34.140 11:46:04 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:15:34.140 11:46:04 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:15:34.140 11:46:04 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:15:34.140 11:46:04 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:15:34.140 11:46:04 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:15:34.140 11:46:04 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:15:34.140 11:46:04 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:15:34.140 11:46:04 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:15:34.140 11:46:04 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:f820f793-c892-4aa4-a8a4-5ed3fda41d6c 00:15:34.140 11:46:04 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@18 -- # NVME_HOSTID=f820f793-c892-4aa4-a8a4-5ed3fda41d6c 00:15:34.140 11:46:04 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:15:34.140 11:46:04 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:15:34.140 11:46:04 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:15:34.140 11:46:04 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:15:34.140 11:46:04 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:15:34.140 11:46:04 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@15 -- # shopt -s extglob 00:15:34.140 11:46:04 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:15:34.140 11:46:04 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:15:34.140 11:46:04 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:15:34.140 11:46:04 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:34.140 11:46:04 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:34.140 11:46:04 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:34.140 11:46:04 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- paths/export.sh@5 -- # export PATH 00:15:34.140 11:46:04 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:34.140 11:46:04 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@51 -- # : 0 00:15:34.140 11:46:04 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:15:34.140 11:46:04 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:15:34.140 11:46:04 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:15:34.140 11:46:04 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:15:34.140 11:46:04 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:15:34.140 11:46:04 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:15:34.140 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:15:34.140 11:46:04 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:15:34.140 11:46:04 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:15:34.140 11:46:04 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@55 -- # have_pci_nics=0 00:15:34.140 11:46:04 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@11 -- # MALLOC_BDEV_SIZE=64 00:15:34.140 11:46:04 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- 
target/bdevio.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:15:34.140 11:46:04 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@14 -- # nvmftestinit 00:15:34.140 11:46:04 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:15:34.140 11:46:04 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:15:34.140 11:46:04 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@476 -- # prepare_net_devs 00:15:34.140 11:46:04 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@438 -- # local -g is_hw=no 00:15:34.140 11:46:04 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@440 -- # remove_spdk_ns 00:15:34.140 11:46:04 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:15:34.140 11:46:04 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:15:34.140 11:46:04 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:15:34.140 11:46:04 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@442 -- # [[ virt != virt ]] 00:15:34.140 11:46:04 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@444 -- # [[ no == yes ]] 00:15:34.140 11:46:04 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@451 -- # [[ virt == phy ]] 00:15:34.140 11:46:04 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@454 -- # [[ virt == phy-fallback ]] 00:15:34.140 11:46:04 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@459 -- # [[ tcp == tcp ]] 00:15:34.140 11:46:04 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@460 -- # nvmf_veth_init 00:15:34.140 11:46:04 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@145 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:15:34.140 11:46:04 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@146 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.2 00:15:34.140 11:46:04 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@147 -- # NVMF_FIRST_TARGET_IP=10.0.0.3 00:15:34.140 11:46:04 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@148 -- # NVMF_SECOND_TARGET_IP=10.0.0.4 00:15:34.140 11:46:04 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@149 -- # NVMF_INITIATOR_IP=10.0.0.1 00:15:34.140 11:46:04 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@150 -- # NVMF_BRIDGE=nvmf_br 00:15:34.140 11:46:04 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@151 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:15:34.140 11:46:04 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@152 -- # NVMF_INITIATOR_INTERFACE2=nvmf_init_if2 00:15:34.140 11:46:04 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@153 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:15:34.140 11:46:04 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@154 -- # NVMF_INITIATOR_BRIDGE2=nvmf_init_br2 00:15:34.140 11:46:04 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@155 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:15:34.140 11:46:04 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@156 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:15:34.140 11:46:04 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@157 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:15:34.140 
11:46:04 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@158 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:15:34.140 11:46:04 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@159 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:15:34.140 11:46:04 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@160 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:15:34.140 11:46:04 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@162 -- # ip link set nvmf_init_br nomaster 00:15:34.140 Cannot find device "nvmf_init_br" 00:15:34.140 11:46:04 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@162 -- # true 00:15:34.140 11:46:04 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@163 -- # ip link set nvmf_init_br2 nomaster 00:15:34.140 Cannot find device "nvmf_init_br2" 00:15:34.140 11:46:04 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@163 -- # true 00:15:34.140 11:46:04 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@164 -- # ip link set nvmf_tgt_br nomaster 00:15:34.140 Cannot find device "nvmf_tgt_br" 00:15:34.140 11:46:04 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@164 -- # true 00:15:34.140 11:46:04 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@165 -- # ip link set nvmf_tgt_br2 nomaster 00:15:34.140 Cannot find device "nvmf_tgt_br2" 00:15:34.140 11:46:04 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@165 -- # true 00:15:34.141 11:46:04 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@166 -- # ip link set nvmf_init_br down 00:15:34.141 Cannot find device "nvmf_init_br" 00:15:34.141 11:46:04 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@166 -- # true 00:15:34.141 11:46:04 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@167 -- # ip link set nvmf_init_br2 down 00:15:34.141 Cannot find device "nvmf_init_br2" 00:15:34.141 11:46:04 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@167 -- # true 00:15:34.141 11:46:04 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@168 -- # ip link set nvmf_tgt_br down 00:15:34.141 Cannot find device "nvmf_tgt_br" 00:15:34.141 11:46:04 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@168 -- # true 00:15:34.141 11:46:04 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@169 -- # ip link set nvmf_tgt_br2 down 00:15:34.141 Cannot find device "nvmf_tgt_br2" 00:15:34.141 11:46:04 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@169 -- # true 00:15:34.141 11:46:04 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@170 -- # ip link delete nvmf_br type bridge 00:15:34.141 Cannot find device "nvmf_br" 00:15:34.141 11:46:04 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@170 -- # true 00:15:34.141 11:46:04 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@171 -- # ip link delete nvmf_init_if 00:15:34.141 Cannot find device "nvmf_init_if" 00:15:34.141 11:46:04 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@171 -- # true 00:15:34.141 11:46:04 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@172 -- # ip link delete nvmf_init_if2 00:15:34.141 Cannot find device "nvmf_init_if2" 00:15:34.141 11:46:04 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@172 -- # true 00:15:34.141 11:46:04 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@173 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete 
nvmf_tgt_if 00:15:34.141 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:15:34.141 11:46:04 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@173 -- # true 00:15:34.141 11:46:04 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@174 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:15:34.141 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:15:34.141 11:46:04 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@174 -- # true 00:15:34.141 11:46:04 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@177 -- # ip netns add nvmf_tgt_ns_spdk 00:15:34.141 11:46:04 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@180 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:15:34.141 11:46:04 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@181 -- # ip link add nvmf_init_if2 type veth peer name nvmf_init_br2 00:15:34.141 11:46:04 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@182 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:15:34.141 11:46:04 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@183 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:15:34.141 11:46:04 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:15:34.401 11:46:04 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@187 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:15:34.401 11:46:04 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@190 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:15:34.401 11:46:04 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@191 -- # ip addr add 10.0.0.2/24 dev nvmf_init_if2 00:15:34.401 11:46:04 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@192 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if 00:15:34.401 11:46:04 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@193 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2 00:15:34.401 11:46:04 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@196 -- # ip link set nvmf_init_if up 00:15:34.401 11:46:04 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@197 -- # ip link set nvmf_init_if2 up 00:15:34.401 11:46:04 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@198 -- # ip link set nvmf_init_br up 00:15:34.401 11:46:04 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@199 -- # ip link set nvmf_init_br2 up 00:15:34.401 11:46:04 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@200 -- # ip link set nvmf_tgt_br up 00:15:34.401 11:46:04 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@201 -- # ip link set nvmf_tgt_br2 up 00:15:34.401 11:46:04 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@202 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:15:34.401 11:46:04 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@203 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:15:34.401 11:46:04 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@204 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:15:34.401 11:46:04 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@207 -- # ip link add nvmf_br type bridge 00:15:34.401 11:46:04 
nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@208 -- # ip link set nvmf_br up 00:15:34.401 11:46:04 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@211 -- # ip link set nvmf_init_br master nvmf_br 00:15:34.401 11:46:04 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@212 -- # ip link set nvmf_init_br2 master nvmf_br 00:15:34.401 11:46:04 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@213 -- # ip link set nvmf_tgt_br master nvmf_br 00:15:34.401 11:46:04 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@214 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:15:34.401 11:46:04 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@217 -- # ipts -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:15:34.401 11:46:04 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT' 00:15:34.401 11:46:04 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@218 -- # ipts -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT 00:15:34.401 11:46:04 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT' 00:15:34.401 11:46:04 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@219 -- # ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:15:34.401 11:46:04 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@790 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT' 00:15:34.401 11:46:04 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@222 -- # ping -c 1 10.0.0.3 00:15:34.401 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:15:34.401 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.065 ms 00:15:34.401 00:15:34.401 --- 10.0.0.3 ping statistics --- 00:15:34.401 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:15:34.401 rtt min/avg/max/mdev = 0.065/0.065/0.065/0.000 ms 00:15:34.401 11:46:04 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@223 -- # ping -c 1 10.0.0.4 00:15:34.401 PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data. 00:15:34.401 64 bytes from 10.0.0.4: icmp_seq=1 ttl=64 time=0.103 ms 00:15:34.401 00:15:34.401 --- 10.0.0.4 ping statistics --- 00:15:34.401 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:15:34.401 rtt min/avg/max/mdev = 0.103/0.103/0.103/0.000 ms 00:15:34.401 11:46:04 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@224 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:15:34.401 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:15:34.401 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.026 ms 00:15:34.401 00:15:34.401 --- 10.0.0.1 ping statistics --- 00:15:34.401 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:15:34.401 rtt min/avg/max/mdev = 0.026/0.026/0.026/0.000 ms 00:15:34.401 11:46:04 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@225 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.2 00:15:34.401 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:15:34.401 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.051 ms 00:15:34.401 00:15:34.401 --- 10.0.0.2 ping statistics --- 00:15:34.401 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:15:34.401 rtt min/avg/max/mdev = 0.051/0.051/0.051/0.000 ms 00:15:34.401 11:46:04 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@227 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:15:34.401 11:46:04 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@461 -- # return 0 00:15:34.401 11:46:04 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:15:34.401 11:46:04 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:15:34.401 11:46:04 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:15:34.401 11:46:04 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:15:34.401 11:46:04 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:15:34.401 11:46:04 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:15:34.401 11:46:04 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:15:34.401 11:46:04 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@16 -- # nvmfappstart -m 0x78 00:15:34.401 11:46:04 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:15:34.401 11:46:04 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@726 -- # xtrace_disable 00:15:34.401 11:46:04 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:15:34.401 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:15:34.401 11:46:04 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@509 -- # nvmfpid=85104 00:15:34.401 11:46:04 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@508 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --no-huge -s 1024 -m 0x78 00:15:34.401 11:46:04 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@510 -- # waitforlisten 85104 00:15:34.401 11:46:04 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@835 -- # '[' -z 85104 ']' 00:15:34.401 11:46:04 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:15:34.401 11:46:04 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@840 -- # local max_retries=100 00:15:34.401 11:46:04 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:15:34.401 11:46:04 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@844 -- # xtrace_disable 00:15:34.401 11:46:04 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:15:34.401 [2024-11-28 11:46:04.522333] Starting SPDK v25.01-pre git sha1 35cd3e84d / DPDK 24.11.0-rc4 initialization... 
00:15:34.401 [2024-11-28 11:46:04.523333] [ DPDK EAL parameters: nvmf -c 0x78 -m 1024 --no-huge --iova-mode=va --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --file-prefix=spdk0 --proc-type=auto ] 00:15:34.660 [2024-11-28 11:46:04.673231] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.11.0-rc4 is used. There is no support for it in SPDK. Enabled only for validation. 00:15:34.660 [2024-11-28 11:46:04.692565] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:15:34.660 [2024-11-28 11:46:04.742156] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:15:34.661 [2024-11-28 11:46:04.742213] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:15:34.661 [2024-11-28 11:46:04.742241] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:15:34.661 [2024-11-28 11:46:04.742249] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:15:34.661 [2024-11-28 11:46:04.742256] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:15:34.661 [2024-11-28 11:46:04.743137] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 4 00:15:34.661 [2024-11-28 11:46:04.743620] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 5 00:15:34.661 [2024-11-28 11:46:04.743786] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 6 00:15:34.661 [2024-11-28 11:46:04.743791] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:15:34.661 [2024-11-28 11:46:04.749580] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:15:34.920 11:46:04 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:15:34.920 11:46:04 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@868 -- # return 0 00:15:34.920 11:46:04 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:15:34.920 11:46:04 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@732 -- # xtrace_disable 00:15:34.920 11:46:04 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:15:34.920 11:46:04 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:15:34.920 11:46:04 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@18 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:15:34.920 11:46:04 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:34.920 11:46:04 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:15:34.920 [2024-11-28 11:46:04.925639] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:15:34.920 11:46:04 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:34.920 11:46:04 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@19 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:15:34.920 11:46:04 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:34.920 11:46:04 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 
-- # set +x 00:15:34.920 Malloc0 00:15:34.920 11:46:04 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:34.920 11:46:04 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@20 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:15:34.920 11:46:04 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:34.920 11:46:04 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:15:34.920 11:46:04 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:34.920 11:46:04 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@21 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:15:34.920 11:46:04 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:34.920 11:46:04 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:15:34.920 11:46:04 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:34.920 11:46:04 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@22 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 00:15:34.920 11:46:04 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:34.920 11:46:04 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:15:34.920 [2024-11-28 11:46:04.967848] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:15:34.920 11:46:04 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:34.920 11:46:04 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@24 -- # /home/vagrant/spdk_repo/spdk/test/bdev/bdevio/bdevio --json /dev/fd/62 --no-huge -s 1024 00:15:34.920 11:46:04 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@24 -- # gen_nvmf_target_json 00:15:34.920 11:46:04 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@560 -- # config=() 00:15:34.920 11:46:04 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@560 -- # local subsystem config 00:15:34.920 11:46:04 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:15:34.920 11:46:04 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:15:34.920 { 00:15:34.920 "params": { 00:15:34.920 "name": "Nvme$subsystem", 00:15:34.920 "trtype": "$TEST_TRANSPORT", 00:15:34.920 "traddr": "$NVMF_FIRST_TARGET_IP", 00:15:34.920 "adrfam": "ipv4", 00:15:34.920 "trsvcid": "$NVMF_PORT", 00:15:34.920 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:15:34.920 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:15:34.920 "hdgst": ${hdgst:-false}, 00:15:34.920 "ddgst": ${ddgst:-false} 00:15:34.920 }, 00:15:34.920 "method": "bdev_nvme_attach_controller" 00:15:34.920 } 00:15:34.920 EOF 00:15:34.920 )") 00:15:34.920 11:46:04 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@582 -- # cat 00:15:34.920 11:46:04 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@584 -- # jq . 
00:15:34.920 11:46:04 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@585 -- # IFS=, 00:15:34.920 11:46:04 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:15:34.920 "params": { 00:15:34.920 "name": "Nvme1", 00:15:34.920 "trtype": "tcp", 00:15:34.920 "traddr": "10.0.0.3", 00:15:34.921 "adrfam": "ipv4", 00:15:34.921 "trsvcid": "4420", 00:15:34.921 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:15:34.921 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:15:34.921 "hdgst": false, 00:15:34.921 "ddgst": false 00:15:34.921 }, 00:15:34.921 "method": "bdev_nvme_attach_controller" 00:15:34.921 }' 00:15:34.921 [2024-11-28 11:46:05.022209] Starting SPDK v25.01-pre git sha1 35cd3e84d / DPDK 24.11.0-rc4 initialization... 00:15:34.921 [2024-11-28 11:46:05.022286] [ DPDK EAL parameters: bdevio --no-shconf -c 0x7 -m 1024 --no-huge --iova-mode=va --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --file-prefix=spdk_pid85132 ] 00:15:35.204 [2024-11-28 11:46:05.160726] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.11.0-rc4 is used. There is no support for it in SPDK. Enabled only for validation. 00:15:35.204 [2024-11-28 11:46:05.177698] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:15:35.204 [2024-11-28 11:46:05.261251] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:15:35.204 [2024-11-28 11:46:05.261393] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:15:35.204 [2024-11-28 11:46:05.261372] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:15:35.204 [2024-11-28 11:46:05.277730] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:15:35.461 I/O targets: 00:15:35.461 Nvme1n1: 131072 blocks of 512 bytes (64 MiB) 00:15:35.461 00:15:35.461 00:15:35.461 CUnit - A unit testing framework for C - Version 2.1-3 00:15:35.461 http://cunit.sourceforge.net/ 00:15:35.461 00:15:35.461 00:15:35.461 Suite: bdevio tests on: Nvme1n1 00:15:35.461 Test: blockdev write read block ...passed 00:15:35.461 Test: blockdev write zeroes read block ...passed 00:15:35.461 Test: blockdev write zeroes read no split ...passed 00:15:35.461 Test: blockdev write zeroes read split ...passed 00:15:35.461 Test: blockdev write zeroes read split partial ...passed 00:15:35.461 Test: blockdev reset ...[2024-11-28 11:46:05.554919] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 1] resetting controller 00:15:35.461 [2024-11-28 11:46:05.555381] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1c19e60 (9): Bad file descriptor 00:15:35.461 [2024-11-28 11:46:05.568035] bdev_nvme.c:2282:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller successful. 
00:15:35.461 passed 00:15:35.461 Test: blockdev write read 8 blocks ...passed 00:15:35.461 Test: blockdev write read size > 128k ...passed 00:15:35.461 Test: blockdev write read invalid size ...passed 00:15:35.461 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:15:35.461 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:15:35.461 Test: blockdev write read max offset ...passed 00:15:35.461 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:15:35.461 Test: blockdev writev readv 8 blocks ...passed 00:15:35.461 Test: blockdev writev readv 30 x 1block ...passed 00:15:35.461 Test: blockdev writev readv block ...passed 00:15:35.461 Test: blockdev writev readv size > 128k ...passed 00:15:35.461 Test: blockdev writev readv size > 128k in two iovs ...passed 00:15:35.461 Test: blockdev comparev and writev ...[2024-11-28 11:46:05.577166] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:15:35.461 [2024-11-28 11:46:05.577411] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:15:35.461 [2024-11-28 11:46:05.577447] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:15:35.461 [2024-11-28 11:46:05.577463] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:15:35.461 [2024-11-28 11:46:05.577843] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:15:35.461 [2024-11-28 11:46:05.577865] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:1 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:15:35.461 [2024-11-28 11:46:05.577887] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:15:35.461 [2024-11-28 11:46:05.577901] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:0 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:15:35.461 [2024-11-28 11:46:05.578208] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:15:35.461 [2024-11-28 11:46:05.578236] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:0 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:15:35.461 [2024-11-28 11:46:05.578259] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:15:35.461 [2024-11-28 11:46:05.578272] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:1 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:15:35.461 [2024-11-28 11:46:05.578708] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:15:35.461 [2024-11-28 11:46:05.578746] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:1 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:15:35.461 [2024-11-28 11:46:05.578770] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:15:35.461 [2024-11-28 11:46:05.578783] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - 
FAILED FUSED (00/09) qid:1 cid:0 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:15:35.461 passed 00:15:35.461 Test: blockdev nvme passthru rw ...passed 00:15:35.461 Test: blockdev nvme passthru vendor specific ...[2024-11-28 11:46:05.579642] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:15:35.461 [2024-11-28 11:46:05.579680] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:15:35.461 [2024-11-28 11:46:05.579823] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:15:35.461 [2024-11-28 11:46:05.579844] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:15:35.461 [2024-11-28 11:46:05.579971] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:15:35.461 [2024-11-28 11:46:05.579991] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:15:35.461 [2024-11-28 11:46:05.580111] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:15:35.461 [2024-11-28 11:46:05.580137] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:15:35.461 passed 00:15:35.719 Test: blockdev nvme admin passthru ...passed 00:15:35.719 Test: blockdev copy ...passed 00:15:35.719 00:15:35.719 Run Summary: Type Total Ran Passed Failed Inactive 00:15:35.719 suites 1 1 n/a 0 0 00:15:35.719 tests 23 23 23 0 0 00:15:35.719 asserts 152 152 152 0 n/a 00:15:35.719 00:15:35.719 Elapsed time = 0.176 seconds 00:15:35.978 11:46:05 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@26 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:15:35.978 11:46:05 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:35.978 11:46:05 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:15:35.978 11:46:05 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:35.978 11:46:05 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@28 -- # trap - SIGINT SIGTERM EXIT 00:15:35.978 11:46:05 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@30 -- # nvmftestfini 00:15:35.978 11:46:05 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@516 -- # nvmfcleanup 00:15:35.978 11:46:05 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@121 -- # sync 00:15:35.978 11:46:06 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:15:35.978 11:46:06 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@124 -- # set +e 00:15:35.978 11:46:06 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@125 -- # for i in {1..20} 00:15:35.978 11:46:06 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:15:35.978 rmmod nvme_tcp 00:15:35.978 rmmod nvme_fabrics 00:15:35.978 rmmod nvme_keyring 00:15:35.978 11:46:06 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:15:35.978 11:46:06 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- 
nvmf/common.sh@128 -- # set -e 00:15:35.978 11:46:06 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@129 -- # return 0 00:15:35.978 11:46:06 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@517 -- # '[' -n 85104 ']' 00:15:35.978 11:46:06 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@518 -- # killprocess 85104 00:15:35.978 11:46:06 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@954 -- # '[' -z 85104 ']' 00:15:35.978 11:46:06 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@958 -- # kill -0 85104 00:15:35.978 11:46:06 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@959 -- # uname 00:15:35.978 11:46:06 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:15:35.978 11:46:06 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 85104 00:15:35.978 killing process with pid 85104 00:15:35.978 11:46:06 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@960 -- # process_name=reactor_3 00:15:35.978 11:46:06 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@964 -- # '[' reactor_3 = sudo ']' 00:15:35.978 11:46:06 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@972 -- # echo 'killing process with pid 85104' 00:15:35.978 11:46:06 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@973 -- # kill 85104 00:15:35.978 11:46:06 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@978 -- # wait 85104 00:15:36.546 11:46:06 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:15:36.546 11:46:06 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:15:36.546 11:46:06 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:15:36.546 11:46:06 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@297 -- # iptr 00:15:36.546 11:46:06 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@791 -- # iptables-save 00:15:36.546 11:46:06 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:15:36.547 11:46:06 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@791 -- # iptables-restore 00:15:36.547 11:46:06 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@298 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:15:36.547 11:46:06 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@299 -- # nvmf_veth_fini 00:15:36.547 11:46:06 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@233 -- # ip link set nvmf_init_br nomaster 00:15:36.547 11:46:06 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@234 -- # ip link set nvmf_init_br2 nomaster 00:15:36.547 11:46:06 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@235 -- # ip link set nvmf_tgt_br nomaster 00:15:36.547 11:46:06 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@236 -- # ip link set nvmf_tgt_br2 nomaster 00:15:36.547 11:46:06 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@237 -- # ip link set nvmf_init_br down 00:15:36.547 11:46:06 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@238 -- # ip link set nvmf_init_br2 down 00:15:36.547 11:46:06 
nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@239 -- # ip link set nvmf_tgt_br down 00:15:36.547 11:46:06 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@240 -- # ip link set nvmf_tgt_br2 down 00:15:36.547 11:46:06 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@241 -- # ip link delete nvmf_br type bridge 00:15:36.547 11:46:06 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@242 -- # ip link delete nvmf_init_if 00:15:36.547 11:46:06 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@243 -- # ip link delete nvmf_init_if2 00:15:36.806 11:46:06 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@244 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:15:36.806 11:46:06 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@245 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:15:36.806 11:46:06 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@246 -- # remove_spdk_ns 00:15:36.806 11:46:06 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:15:36.806 11:46:06 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:15:36.806 11:46:06 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:15:36.806 11:46:06 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@300 -- # return 0 00:15:36.806 00:15:36.806 real 0m2.907s 00:15:36.806 user 0m8.283s 00:15:36.806 sys 0m1.469s 00:15:36.806 11:46:06 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@1130 -- # xtrace_disable 00:15:36.806 11:46:06 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:15:36.806 ************************************ 00:15:36.806 END TEST nvmf_bdevio_no_huge 00:15:36.806 ************************************ 00:15:36.806 11:46:06 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@41 -- # run_test nvmf_tls /home/vagrant/spdk_repo/spdk/test/nvmf/target/tls.sh --transport=tcp 00:15:36.806 11:46:06 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:15:36.806 11:46:06 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:15:36.806 11:46:06 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:15:36.806 ************************************ 00:15:36.806 START TEST nvmf_tls 00:15:36.806 ************************************ 00:15:36.806 11:46:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/tls.sh --transport=tcp 00:15:36.806 * Looking for test storage... 
00:15:36.806 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:15:36.806 11:46:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:15:36.806 11:46:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:15:36.806 11:46:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@1693 -- # lcov --version 00:15:37.066 11:46:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:15:37.066 11:46:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:15:37.066 11:46:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@333 -- # local ver1 ver1_l 00:15:37.066 11:46:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@334 -- # local ver2 ver2_l 00:15:37.066 11:46:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@336 -- # IFS=.-: 00:15:37.066 11:46:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@336 -- # read -ra ver1 00:15:37.066 11:46:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@337 -- # IFS=.-: 00:15:37.066 11:46:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@337 -- # read -ra ver2 00:15:37.066 11:46:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@338 -- # local 'op=<' 00:15:37.066 11:46:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@340 -- # ver1_l=2 00:15:37.066 11:46:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@341 -- # ver2_l=1 00:15:37.066 11:46:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:15:37.066 11:46:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@344 -- # case "$op" in 00:15:37.066 11:46:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@345 -- # : 1 00:15:37.066 11:46:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@364 -- # (( v = 0 )) 00:15:37.066 11:46:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:15:37.066 11:46:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@365 -- # decimal 1 00:15:37.066 11:46:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@353 -- # local d=1 00:15:37.066 11:46:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:15:37.066 11:46:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@355 -- # echo 1 00:15:37.066 11:46:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@365 -- # ver1[v]=1 00:15:37.066 11:46:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@366 -- # decimal 2 00:15:37.066 11:46:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@353 -- # local d=2 00:15:37.066 11:46:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:15:37.066 11:46:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@355 -- # echo 2 00:15:37.066 11:46:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@366 -- # ver2[v]=2 00:15:37.066 11:46:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:15:37.066 11:46:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:15:37.066 11:46:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@368 -- # return 0 00:15:37.066 11:46:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:15:37.066 11:46:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:15:37.066 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:37.066 --rc genhtml_branch_coverage=1 00:15:37.066 --rc genhtml_function_coverage=1 00:15:37.066 --rc genhtml_legend=1 00:15:37.066 --rc geninfo_all_blocks=1 00:15:37.066 --rc geninfo_unexecuted_blocks=1 00:15:37.066 00:15:37.066 ' 00:15:37.066 11:46:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:15:37.066 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:37.066 --rc genhtml_branch_coverage=1 00:15:37.066 --rc genhtml_function_coverage=1 00:15:37.066 --rc genhtml_legend=1 00:15:37.066 --rc geninfo_all_blocks=1 00:15:37.066 --rc geninfo_unexecuted_blocks=1 00:15:37.066 00:15:37.066 ' 00:15:37.066 11:46:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:15:37.066 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:37.066 --rc genhtml_branch_coverage=1 00:15:37.066 --rc genhtml_function_coverage=1 00:15:37.066 --rc genhtml_legend=1 00:15:37.067 --rc geninfo_all_blocks=1 00:15:37.067 --rc geninfo_unexecuted_blocks=1 00:15:37.067 00:15:37.067 ' 00:15:37.067 11:46:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:15:37.067 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:37.067 --rc genhtml_branch_coverage=1 00:15:37.067 --rc genhtml_function_coverage=1 00:15:37.067 --rc genhtml_legend=1 00:15:37.067 --rc geninfo_all_blocks=1 00:15:37.067 --rc geninfo_unexecuted_blocks=1 00:15:37.067 00:15:37.067 ' 00:15:37.067 11:46:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:15:37.067 11:46:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@7 -- # uname -s 00:15:37.067 11:46:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:15:37.067 11:46:06 
nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:15:37.067 11:46:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:15:37.067 11:46:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:15:37.067 11:46:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:15:37.067 11:46:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:15:37.067 11:46:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:15:37.067 11:46:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:15:37.067 11:46:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:15:37.067 11:46:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:15:37.067 11:46:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:f820f793-c892-4aa4-a8a4-5ed3fda41d6c 00:15:37.067 11:46:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@18 -- # NVME_HOSTID=f820f793-c892-4aa4-a8a4-5ed3fda41d6c 00:15:37.067 11:46:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:15:37.067 11:46:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:15:37.067 11:46:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:15:37.067 11:46:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:15:37.067 11:46:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:15:37.067 11:46:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@15 -- # shopt -s extglob 00:15:37.067 11:46:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:15:37.067 11:46:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:15:37.067 11:46:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:15:37.067 11:46:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:37.067 11:46:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:37.067 11:46:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:37.067 11:46:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- paths/export.sh@5 -- # export PATH 00:15:37.067 11:46:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:37.067 11:46:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@51 -- # : 0 00:15:37.067 11:46:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:15:37.067 11:46:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:15:37.067 11:46:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:15:37.067 11:46:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:15:37.067 11:46:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:15:37.067 11:46:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:15:37.067 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:15:37.067 11:46:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:15:37.067 11:46:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:15:37.067 11:46:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@55 -- # have_pci_nics=0 00:15:37.067 11:46:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@12 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:15:37.067 11:46:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@63 -- # nvmftestinit 00:15:37.067 11:46:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:15:37.067 
11:46:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:15:37.067 11:46:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@476 -- # prepare_net_devs 00:15:37.067 11:46:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@438 -- # local -g is_hw=no 00:15:37.067 11:46:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@440 -- # remove_spdk_ns 00:15:37.067 11:46:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:15:37.067 11:46:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:15:37.067 11:46:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:15:37.067 11:46:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@442 -- # [[ virt != virt ]] 00:15:37.067 11:46:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@444 -- # [[ no == yes ]] 00:15:37.067 11:46:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@451 -- # [[ virt == phy ]] 00:15:37.067 11:46:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@454 -- # [[ virt == phy-fallback ]] 00:15:37.067 11:46:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@459 -- # [[ tcp == tcp ]] 00:15:37.067 11:46:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@460 -- # nvmf_veth_init 00:15:37.067 11:46:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@145 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:15:37.067 11:46:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@146 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.2 00:15:37.067 11:46:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@147 -- # NVMF_FIRST_TARGET_IP=10.0.0.3 00:15:37.067 11:46:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@148 -- # NVMF_SECOND_TARGET_IP=10.0.0.4 00:15:37.067 11:46:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@149 -- # NVMF_INITIATOR_IP=10.0.0.1 00:15:37.067 11:46:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@150 -- # NVMF_BRIDGE=nvmf_br 00:15:37.067 11:46:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@151 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:15:37.067 11:46:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@152 -- # NVMF_INITIATOR_INTERFACE2=nvmf_init_if2 00:15:37.067 11:46:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@153 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:15:37.067 11:46:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@154 -- # NVMF_INITIATOR_BRIDGE2=nvmf_init_br2 00:15:37.067 11:46:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@155 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:15:37.067 11:46:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@156 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:15:37.067 11:46:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@157 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:15:37.067 11:46:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@158 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:15:37.067 11:46:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@159 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:15:37.067 11:46:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@160 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:15:37.067 11:46:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@162 -- # ip link set nvmf_init_br nomaster 00:15:37.067 Cannot find device "nvmf_init_br" 00:15:37.067 11:46:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- 
nvmf/common.sh@162 -- # true 00:15:37.067 11:46:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@163 -- # ip link set nvmf_init_br2 nomaster 00:15:37.067 Cannot find device "nvmf_init_br2" 00:15:37.067 11:46:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@163 -- # true 00:15:37.067 11:46:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@164 -- # ip link set nvmf_tgt_br nomaster 00:15:37.067 Cannot find device "nvmf_tgt_br" 00:15:37.067 11:46:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@164 -- # true 00:15:37.067 11:46:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@165 -- # ip link set nvmf_tgt_br2 nomaster 00:15:37.067 Cannot find device "nvmf_tgt_br2" 00:15:37.067 11:46:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@165 -- # true 00:15:37.067 11:46:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@166 -- # ip link set nvmf_init_br down 00:15:37.067 Cannot find device "nvmf_init_br" 00:15:37.067 11:46:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@166 -- # true 00:15:37.067 11:46:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@167 -- # ip link set nvmf_init_br2 down 00:15:37.067 Cannot find device "nvmf_init_br2" 00:15:37.067 11:46:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@167 -- # true 00:15:37.067 11:46:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@168 -- # ip link set nvmf_tgt_br down 00:15:37.067 Cannot find device "nvmf_tgt_br" 00:15:37.067 11:46:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@168 -- # true 00:15:37.067 11:46:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@169 -- # ip link set nvmf_tgt_br2 down 00:15:37.068 Cannot find device "nvmf_tgt_br2" 00:15:37.068 11:46:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@169 -- # true 00:15:37.068 11:46:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@170 -- # ip link delete nvmf_br type bridge 00:15:37.068 Cannot find device "nvmf_br" 00:15:37.068 11:46:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@170 -- # true 00:15:37.068 11:46:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@171 -- # ip link delete nvmf_init_if 00:15:37.068 Cannot find device "nvmf_init_if" 00:15:37.068 11:46:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@171 -- # true 00:15:37.068 11:46:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@172 -- # ip link delete nvmf_init_if2 00:15:37.068 Cannot find device "nvmf_init_if2" 00:15:37.068 11:46:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@172 -- # true 00:15:37.068 11:46:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@173 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:15:37.068 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:15:37.068 11:46:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@173 -- # true 00:15:37.068 11:46:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@174 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:15:37.068 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:15:37.068 11:46:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@174 -- # true 00:15:37.068 11:46:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@177 -- # ip netns add nvmf_tgt_ns_spdk 00:15:37.068 11:46:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@180 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:15:37.068 11:46:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@181 -- # ip link 
add nvmf_init_if2 type veth peer name nvmf_init_br2 00:15:37.068 11:46:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@182 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:15:37.068 11:46:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@183 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:15:37.327 11:46:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:15:37.327 11:46:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@187 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:15:37.327 11:46:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@190 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:15:37.327 11:46:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@191 -- # ip addr add 10.0.0.2/24 dev nvmf_init_if2 00:15:37.327 11:46:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@192 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if 00:15:37.327 11:46:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@193 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2 00:15:37.327 11:46:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@196 -- # ip link set nvmf_init_if up 00:15:37.327 11:46:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@197 -- # ip link set nvmf_init_if2 up 00:15:37.327 11:46:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@198 -- # ip link set nvmf_init_br up 00:15:37.327 11:46:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@199 -- # ip link set nvmf_init_br2 up 00:15:37.327 11:46:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@200 -- # ip link set nvmf_tgt_br up 00:15:37.328 11:46:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@201 -- # ip link set nvmf_tgt_br2 up 00:15:37.328 11:46:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@202 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:15:37.328 11:46:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@203 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:15:37.328 11:46:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@204 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:15:37.328 11:46:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@207 -- # ip link add nvmf_br type bridge 00:15:37.328 11:46:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@208 -- # ip link set nvmf_br up 00:15:37.328 11:46:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@211 -- # ip link set nvmf_init_br master nvmf_br 00:15:37.328 11:46:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@212 -- # ip link set nvmf_init_br2 master nvmf_br 00:15:37.328 11:46:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@213 -- # ip link set nvmf_tgt_br master nvmf_br 00:15:37.328 11:46:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@214 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:15:37.328 11:46:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@217 -- # ipts -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:15:37.328 11:46:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT' 00:15:37.328 11:46:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@218 -- # ipts -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT 00:15:37.328 11:46:07 
nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT' 00:15:37.328 11:46:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@219 -- # ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:15:37.328 11:46:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@790 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT' 00:15:37.328 11:46:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@222 -- # ping -c 1 10.0.0.3 00:15:37.328 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:15:37.328 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.066 ms 00:15:37.328 00:15:37.328 --- 10.0.0.3 ping statistics --- 00:15:37.328 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:15:37.328 rtt min/avg/max/mdev = 0.066/0.066/0.066/0.000 ms 00:15:37.328 11:46:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@223 -- # ping -c 1 10.0.0.4 00:15:37.328 PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data. 00:15:37.328 64 bytes from 10.0.0.4: icmp_seq=1 ttl=64 time=0.060 ms 00:15:37.328 00:15:37.328 --- 10.0.0.4 ping statistics --- 00:15:37.328 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:15:37.328 rtt min/avg/max/mdev = 0.060/0.060/0.060/0.000 ms 00:15:37.328 11:46:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@224 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:15:37.328 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:15:37.328 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.025 ms 00:15:37.328 00:15:37.328 --- 10.0.0.1 ping statistics --- 00:15:37.328 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:15:37.328 rtt min/avg/max/mdev = 0.025/0.025/0.025/0.000 ms 00:15:37.328 11:46:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@225 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.2 00:15:37.328 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
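A note on the setup traced here: the nvmf/common.sh helpers first tear down any stale interfaces (the "Cannot find device" messages are expected on a clean host, each tolerated via the "# true" that follows) and then rebuild the fixture's test network: two veth pairs for the initiator side (nvmf_init_if/if2 peered with nvmf_init_br/br2), two for the target side (nvmf_tgt_if/if2 peered with nvmf_tgt_br/br2), the target ends moved into the nvmf_tgt_ns_spdk namespace with 10.0.0.3/24 and 10.0.0.4/24, all bridge-side peers enslaved to the nvmf_br bridge, iptables ACCEPT rules for TCP port 4420, and pings in both directions to confirm reachability. A minimal single-pair sketch of the same idea, with interface names and addresses chosen here purely for illustration (run as root), is:

    ip netns add demo_tgt_ns                                   # target-side namespace
    ip link add demo_init type veth peer name demo_init_br     # initiator-side veth pair
    ip link add demo_tgt type veth peer name demo_tgt_br       # target-side veth pair
    ip link set demo_tgt netns demo_tgt_ns
    ip addr add 10.0.0.1/24 dev demo_init
    ip netns exec demo_tgt_ns ip addr add 10.0.0.3/24 dev demo_tgt
    ip link set demo_init up
    ip link set demo_init_br up
    ip link set demo_tgt_br up
    ip netns exec demo_tgt_ns ip link set demo_tgt up
    ip netns exec demo_tgt_ns ip link set lo up
    ip link add demo_br type bridge                            # bridge joining both pairs
    ip link set demo_br up
    ip link set demo_init_br master demo_br
    ip link set demo_tgt_br master demo_br
    iptables -I INPUT 1 -i demo_init -p tcp --dport 4420 -j ACCEPT   # allow NVMe/TCP traffic in
    ping -c 1 10.0.0.3                                         # initiator now reaches the namespaced target
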
00:15:37.328 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.058 ms 00:15:37.328 00:15:37.328 --- 10.0.0.2 ping statistics --- 00:15:37.328 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:15:37.328 rtt min/avg/max/mdev = 0.058/0.058/0.058/0.000 ms 00:15:37.328 11:46:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@227 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:15:37.328 11:46:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@461 -- # return 0 00:15:37.328 11:46:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:15:37.328 11:46:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:15:37.328 11:46:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:15:37.328 11:46:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:15:37.328 11:46:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:15:37.328 11:46:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:15:37.328 11:46:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:15:37.328 11:46:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@64 -- # nvmfappstart -m 0x2 --wait-for-rpc 00:15:37.328 11:46:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:15:37.328 11:46:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@726 -- # xtrace_disable 00:15:37.328 11:46:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:15:37.587 11:46:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@509 -- # nvmfpid=85363 00:15:37.587 11:46:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@510 -- # waitforlisten 85363 00:15:37.587 11:46:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 85363 ']' 00:15:37.587 11:46:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:15:37.587 11:46:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@508 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 --wait-for-rpc 00:15:37.587 11:46:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:15:37.587 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:15:37.587 11:46:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:15:37.587 11:46:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:15:37.587 11:46:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:15:37.587 [2024-11-28 11:46:07.522413] Starting SPDK v25.01-pre git sha1 35cd3e84d / DPDK 24.11.0-rc4 initialization... 00:15:37.587 [2024-11-28 11:46:07.522559] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:15:37.587 [2024-11-28 11:46:07.654539] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.11.0-rc4 is used. There is no support for it in SPDK. 
Enabled only for validation. 00:15:37.587 [2024-11-28 11:46:07.676139] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:15:37.846 [2024-11-28 11:46:07.729956] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:15:37.846 [2024-11-28 11:46:07.730024] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:15:37.846 [2024-11-28 11:46:07.730035] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:15:37.846 [2024-11-28 11:46:07.730042] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:15:37.846 [2024-11-28 11:46:07.730049] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:15:37.846 [2024-11-28 11:46:07.730503] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:15:38.413 11:46:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:15:38.413 11:46:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:15:38.413 11:46:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:15:38.413 11:46:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@732 -- # xtrace_disable 00:15:38.413 11:46:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:15:38.671 11:46:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:15:38.671 11:46:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@66 -- # '[' tcp '!=' tcp ']' 00:15:38.671 11:46:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@71 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py sock_set_default_impl -i ssl 00:15:38.671 true 00:15:38.671 11:46:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@74 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:15:38.671 11:46:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@74 -- # jq -r .tls_version 00:15:39.238 11:46:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@74 -- # version=0 00:15:39.238 11:46:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@75 -- # [[ 0 != \0 ]] 00:15:39.238 11:46:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@81 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py sock_impl_set_options -i ssl --tls-version 13 00:15:39.238 11:46:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@82 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:15:39.496 11:46:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@82 -- # jq -r .tls_version 00:15:39.496 11:46:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@82 -- # version=13 00:15:39.496 11:46:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@83 -- # [[ 13 != \1\3 ]] 00:15:39.496 11:46:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@89 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py sock_impl_set_options -i ssl --tls-version 7 00:15:39.755 11:46:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@90 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:15:39.755 11:46:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@90 -- # jq -r .tls_version 00:15:40.014 11:46:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@90 -- # version=7 00:15:40.014 11:46:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- 
target/tls.sh@91 -- # [[ 7 != \7 ]] 00:15:40.014 11:46:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@97 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:15:40.014 11:46:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@97 -- # jq -r .enable_ktls 00:15:40.272 11:46:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@97 -- # ktls=false 00:15:40.272 11:46:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@98 -- # [[ false != \f\a\l\s\e ]] 00:15:40.272 11:46:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@104 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py sock_impl_set_options -i ssl --enable-ktls 00:15:40.531 11:46:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@105 -- # jq -r .enable_ktls 00:15:40.531 11:46:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@105 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:15:40.789 11:46:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@105 -- # ktls=true 00:15:40.789 11:46:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@106 -- # [[ true != \t\r\u\e ]] 00:15:40.789 11:46:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@112 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py sock_impl_set_options -i ssl --disable-ktls 00:15:41.047 11:46:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@113 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:15:41.047 11:46:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@113 -- # jq -r .enable_ktls 00:15:41.305 11:46:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@113 -- # ktls=false 00:15:41.305 11:46:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@114 -- # [[ false != \f\a\l\s\e ]] 00:15:41.305 11:46:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@119 -- # format_interchange_psk 00112233445566778899aabbccddeeff 1 00:15:41.305 11:46:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@743 -- # format_key NVMeTLSkey-1 00112233445566778899aabbccddeeff 1 00:15:41.305 11:46:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@730 -- # local prefix key digest 00:15:41.305 11:46:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@732 -- # prefix=NVMeTLSkey-1 00:15:41.305 11:46:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@732 -- # key=00112233445566778899aabbccddeeff 00:15:41.305 11:46:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@732 -- # digest=1 00:15:41.305 11:46:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@733 -- # python - 00:15:41.306 11:46:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@119 -- # key=NVMeTLSkey-1:01:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ: 00:15:41.306 11:46:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@120 -- # format_interchange_psk ffeeddccbbaa99887766554433221100 1 00:15:41.306 11:46:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@743 -- # format_key NVMeTLSkey-1 ffeeddccbbaa99887766554433221100 1 00:15:41.306 11:46:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@730 -- # local prefix key digest 00:15:41.306 11:46:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@732 -- # prefix=NVMeTLSkey-1 00:15:41.306 11:46:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@732 -- # key=ffeeddccbbaa99887766554433221100 00:15:41.306 11:46:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@732 -- # digest=1 00:15:41.306 11:46:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@733 -- # python - 00:15:41.564 
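The format_interchange_psk calls here (one for the 00112233...eeff key above, and a second whose result follows for the ffeedd...1100 key) convert a raw key string into the NVMe TLS PSK interchange form, NVMeTLSkey-1:<hash>:<base64>:, which the rest of the test writes to a file and hands to keyring_file_add_key and --psk-path. Judging from the output (48 base64 characters for a 32-character key), the helper appears to append a 4-byte little-endian CRC-32 of the key before base64-encoding, with the second field naming the retained-hash choice (01 for SHA-256, 02 for SHA-384); a stand-alone sketch written from that observation, not copied from common.sh, is:

    # Reproduce the first interchange key seen in the trace (illustrative sketch only).
    python3 -c 'import base64, zlib; key = b"00112233445566778899aabbccddeeff"; crc = zlib.crc32(key).to_bytes(4, "little"); print("NVMeTLSkey-1:%02x:%s:" % (1, base64.b64encode(key + crc).decode()))'
    # expected output: the NVMeTLSkey-1:01:MDAx...JEiQ: value captured in the trace above
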
11:46:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@120 -- # key_2=NVMeTLSkey-1:01:ZmZlZWRkY2NiYmFhOTk4ODc3NjY1NTQ0MzMyMjExMDBfBm/Y: 00:15:41.564 11:46:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@122 -- # mktemp 00:15:41.564 11:46:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@122 -- # key_path=/tmp/tmp.l1qYNNYfkl 00:15:41.564 11:46:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@123 -- # mktemp 00:15:41.564 11:46:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@123 -- # key_2_path=/tmp/tmp.jUy0pOPXYV 00:15:41.564 11:46:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@125 -- # echo -n NVMeTLSkey-1:01:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ: 00:15:41.564 11:46:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@126 -- # echo -n NVMeTLSkey-1:01:ZmZlZWRkY2NiYmFhOTk4ODc3NjY1NTQ0MzMyMjExMDBfBm/Y: 00:15:41.564 11:46:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@128 -- # chmod 0600 /tmp/tmp.l1qYNNYfkl 00:15:41.564 11:46:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@129 -- # chmod 0600 /tmp/tmp.jUy0pOPXYV 00:15:41.564 11:46:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@131 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py sock_impl_set_options -i ssl --tls-version 13 00:15:41.823 11:46:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@132 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py framework_start_init 00:15:42.082 [2024-11-28 11:46:11.992842] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:15:42.082 11:46:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@134 -- # setup_nvmf_tgt /tmp/tmp.l1qYNNYfkl 00:15:42.082 11:46:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@50 -- # local key=/tmp/tmp.l1qYNNYfkl 00:15:42.082 11:46:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@52 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:15:42.340 [2024-11-28 11:46:12.271483] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:15:42.340 11:46:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@53 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10 00:15:42.599 11:46:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 -k 00:15:42.857 [2024-11-28 11:46:12.739623] tcp.c:1031:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:15:42.857 [2024-11-28 11:46:12.739907] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:15:42.857 11:46:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@56 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0 00:15:43.116 malloc0 00:15:43.116 11:46:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:15:43.375 11:46:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@58 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py keyring_file_add_key key0 /tmp/tmp.l1qYNNYfkl 00:15:43.633 11:46:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk key0 00:15:43.633 11:46:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- 
target/tls.sh@138 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -S ssl -q 64 -o 4096 -w randrw -M 30 -t 10 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.3 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 hostnqn:nqn.2016-06.io.spdk:host1' --psk-path /tmp/tmp.l1qYNNYfkl 00:15:55.845 Initializing NVMe Controllers 00:15:55.845 Attached to NVMe over Fabrics controller at 10.0.0.3:4420: nqn.2016-06.io.spdk:cnode1 00:15:55.845 Associating TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:15:55.845 Initialization complete. Launching workers. 00:15:55.845 ======================================================== 00:15:55.845 Latency(us) 00:15:55.845 Device Information : IOPS MiB/s Average min max 00:15:55.845 TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 8991.60 35.12 7119.96 1443.35 9258.02 00:15:55.845 ======================================================== 00:15:55.845 Total : 8991.60 35.12 7119.96 1443.35 9258.02 00:15:55.845 00:15:55.845 11:46:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@144 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.l1qYNNYfkl 00:15:55.845 11:46:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:15:55.845 11:46:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:15:55.845 11:46:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:15:55.845 11:46:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # psk=/tmp/tmp.l1qYNNYfkl 00:15:55.845 11:46:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:15:55.845 11:46:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=85609 00:15:55.845 11:46:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:15:55.845 11:46:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 85609 /var/tmp/bdevperf.sock 00:15:55.845 11:46:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@27 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:15:55.845 11:46:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 85609 ']' 00:15:55.845 11:46:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:15:55.845 11:46:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:15:55.845 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:15:55.845 11:46:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:15:55.845 11:46:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:15:55.845 11:46:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:15:55.845 [2024-11-28 11:46:24.013208] Starting SPDK v25.01-pre git sha1 35cd3e84d / DPDK 24.11.0-rc4 initialization... 
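Boiled down from the trace, the happy-path target setup that precedes this perf run is: write the interchange PSK to a 0600 file, switch the ssl socket implementation to TLS 1.3, finish framework init, create a TCP transport, a subsystem, and a TLS-required listener (-k), back it with a malloc bdev, register the key file in the keyring, and allow host1 with that key. A condensed replay of those RPC calls (NQNs, addresses and sizes copied from the trace; the key path here is a fixed example in place of the mktemp name) is:

    RPC=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
    KEY_PATH=/tmp/tls_psk_key0                          # example path; the test uses mktemp
    echo -n "NVMeTLSkey-1:01:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ:" > "$KEY_PATH"
    chmod 0600 "$KEY_PATH"
    $RPC sock_set_default_impl -i ssl
    $RPC sock_impl_set_options -i ssl --tls-version 13
    $RPC framework_start_init
    $RPC nvmf_create_transport -t tcp -o
    $RPC nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10
    $RPC nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 -k
    $RPC bdev_malloc_create 32 4096 -b malloc0
    $RPC nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1
    $RPC keyring_file_add_key key0 "$KEY_PATH"
    $RPC nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk key0
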
00:15:55.845 [2024-11-28 11:46:24.013347] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid85609 ] 00:15:55.845 [2024-11-28 11:46:24.141004] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.11.0-rc4 is used. There is no support for it in SPDK. Enabled only for validation. 00:15:55.845 [2024-11-28 11:46:24.173471] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:15:55.845 [2024-11-28 11:46:24.227182] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:15:55.845 [2024-11-28 11:46:24.287286] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:15:55.845 11:46:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:15:55.845 11:46:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:15:55.845 11:46:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@33 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.l1qYNNYfkl 00:15:55.845 11:46:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@35 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk key0 00:15:55.845 [2024-11-28 11:46:24.897611] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:15:55.845 TLSTESTn1 00:15:55.845 11:46:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@42 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -t 20 -s /var/tmp/bdevperf.sock perform_tests 00:15:55.845 Running I/O for 10 seconds... 
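The initiator side mirrors this with its own bdevperf process and RPC socket: register the same PSK under a key name there, attach a controller to the TLS listener with --psk, then drive the verify workload via bdevperf.py, which yields the ~3.4k IOPS figures reported below. A condensed replay (all flags copied from the trace; KEY_PATH from the previous sketch; the sleep is a crude stand-in for the test's waitforlisten helper) is:

    RPC=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
    BDEVPERF=/home/vagrant/spdk_repo/spdk/build/examples/bdevperf
    SOCK=/var/tmp/bdevperf.sock
    $BDEVPERF -m 0x4 -z -r "$SOCK" -q 128 -o 4096 -w verify -t 10 &
    sleep 2                                             # stand-in for the waitforlisten helper
    $RPC -s "$SOCK" keyring_file_add_key key0 "$KEY_PATH"
    $RPC -s "$SOCK" bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.3 -s 4420 -f ipv4 \
        -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk key0
    /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -t 20 -s "$SOCK" perform_tests
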
00:15:57.049 3006.00 IOPS, 11.74 MiB/s [2024-11-28T11:46:28.112Z] 3038.50 IOPS, 11.87 MiB/s [2024-11-28T11:46:29.491Z] 3053.67 IOPS, 11.93 MiB/s [2024-11-28T11:46:30.429Z] 3194.50 IOPS, 12.48 MiB/s [2024-11-28T11:46:31.367Z] 3260.00 IOPS, 12.73 MiB/s [2024-11-28T11:46:32.304Z] 3322.00 IOPS, 12.98 MiB/s [2024-11-28T11:46:33.241Z] 3346.71 IOPS, 13.07 MiB/s [2024-11-28T11:46:34.179Z] 3360.25 IOPS, 13.13 MiB/s [2024-11-28T11:46:35.117Z] 3374.11 IOPS, 13.18 MiB/s [2024-11-28T11:46:35.376Z] 3385.80 IOPS, 13.23 MiB/s 00:16:05.250 Latency(us) 00:16:05.250 [2024-11-28T11:46:35.376Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:16:05.250 Job: TLSTESTn1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096) 00:16:05.250 Verification LBA range: start 0x0 length 0x2000 00:16:05.250 TLSTESTn1 : 10.02 3389.92 13.24 0.00 0.00 37682.09 9472.93 28716.68 00:16:05.250 [2024-11-28T11:46:35.376Z] =================================================================================================================== 00:16:05.250 [2024-11-28T11:46:35.376Z] Total : 3389.92 13.24 0.00 0.00 37682.09 9472.93 28716.68 00:16:05.250 { 00:16:05.250 "results": [ 00:16:05.250 { 00:16:05.250 "job": "TLSTESTn1", 00:16:05.250 "core_mask": "0x4", 00:16:05.250 "workload": "verify", 00:16:05.250 "status": "finished", 00:16:05.250 "verify_range": { 00:16:05.250 "start": 0, 00:16:05.250 "length": 8192 00:16:05.250 }, 00:16:05.250 "queue_depth": 128, 00:16:05.250 "io_size": 4096, 00:16:05.250 "runtime": 10.024719, 00:16:05.250 "iops": 3389.9204556257387, 00:16:05.250 "mibps": 13.241876779788042, 00:16:05.250 "io_failed": 0, 00:16:05.250 "io_timeout": 0, 00:16:05.250 "avg_latency_us": 37682.09286932236, 00:16:05.250 "min_latency_us": 9472.930909090908, 00:16:05.250 "max_latency_us": 28716.683636363636 00:16:05.250 } 00:16:05.250 ], 00:16:05.250 "core_count": 1 00:16:05.250 } 00:16:05.250 11:46:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@45 -- # trap 'nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:16:05.250 11:46:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@46 -- # killprocess 85609 00:16:05.250 11:46:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 85609 ']' 00:16:05.250 11:46:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 85609 00:16:05.250 11:46:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:16:05.250 11:46:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:16:05.250 11:46:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 85609 00:16:05.250 killing process with pid 85609 00:16:05.250 Received shutdown signal, test time was about 10.000000 seconds 00:16:05.250 00:16:05.250 Latency(us) 00:16:05.250 [2024-11-28T11:46:35.376Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:16:05.250 [2024-11-28T11:46:35.376Z] =================================================================================================================== 00:16:05.250 [2024-11-28T11:46:35.376Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:16:05.250 11:46:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_2 00:16:05.250 11:46:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_2 = sudo ']' 00:16:05.250 11:46:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing 
process with pid 85609' 00:16:05.250 11:46:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 85609 00:16:05.250 11:46:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 85609 00:16:05.250 11:46:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@147 -- # NOT run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.jUy0pOPXYV 00:16:05.250 11:46:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@652 -- # local es=0 00:16:05.250 11:46:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@654 -- # valid_exec_arg run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.jUy0pOPXYV 00:16:05.250 11:46:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@640 -- # local arg=run_bdevperf 00:16:05.250 11:46:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:16:05.250 11:46:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@644 -- # type -t run_bdevperf 00:16:05.250 11:46:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:16:05.250 11:46:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@655 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.jUy0pOPXYV 00:16:05.250 11:46:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:16:05.250 11:46:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:16:05.250 11:46:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:16:05.250 11:46:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # psk=/tmp/tmp.jUy0pOPXYV 00:16:05.250 11:46:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:16:05.508 11:46:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=85736 00:16:05.509 11:46:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:16:05.509 11:46:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 85736 /var/tmp/bdevperf.sock 00:16:05.509 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:16:05.509 11:46:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 85736 ']' 00:16:05.509 11:46:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:16:05.509 11:46:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@27 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:16:05.509 11:46:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:16:05.509 11:46:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:16:05.509 11:46:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:16:05.509 11:46:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:16:05.509 [2024-11-28 11:46:35.434148] Starting SPDK v25.01-pre git sha1 35cd3e84d / DPDK 24.11.0-rc4 initialization... 
00:16:05.509 [2024-11-28 11:46:35.434262] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid85736 ] 00:16:05.509 [2024-11-28 11:46:35.561571] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.11.0-rc4 is used. There is no support for it in SPDK. Enabled only for validation. 00:16:05.509 [2024-11-28 11:46:35.587496] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:16:05.509 [2024-11-28 11:46:35.632991] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:16:05.767 [2024-11-28 11:46:35.688280] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:16:05.767 11:46:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:16:05.767 11:46:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:16:05.767 11:46:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@33 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.jUy0pOPXYV 00:16:06.073 11:46:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@35 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk key0 00:16:06.331 [2024-11-28 11:46:36.350699] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:16:06.331 [2024-11-28 11:46:36.357400] /home/vagrant/spdk_repo/spdk/include/spdk_internal/nvme_tcp.h: 421:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 107: Transport endpoint is not connected 00:16:06.331 [2024-11-28 11:46:36.357623] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x13c6700 (107): Transport endpoint is not connected 00:16:06.331 [2024-11-28 11:46:36.358606] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x13c6700 (9): Bad file descriptor 00:16:06.331 [2024-11-28 11:46:36.359604] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 0] Ctrlr is in error state 00:16:06.331 [2024-11-28 11:46:36.359631] nvme.c: 709:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 10.0.0.3 00:16:06.331 [2024-11-28 11:46:36.359645] nvme.c: 895:nvme_dummy_attach_fail_cb: *ERROR*: Failed to attach nvme ctrlr: trtype=TCP adrfam=IPv4 traddr=10.0.0.3 trsvcid=4420 subnqn=nqn.2016-06.io.spdk:cnode1, Operation not permitted 00:16:06.331 [2024-11-28 11:46:36.359665] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 0] in failed state. 
00:16:06.331 request: 00:16:06.331 { 00:16:06.331 "name": "TLSTEST", 00:16:06.331 "trtype": "tcp", 00:16:06.331 "traddr": "10.0.0.3", 00:16:06.331 "adrfam": "ipv4", 00:16:06.331 "trsvcid": "4420", 00:16:06.331 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:16:06.331 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:16:06.331 "prchk_reftag": false, 00:16:06.331 "prchk_guard": false, 00:16:06.331 "hdgst": false, 00:16:06.331 "ddgst": false, 00:16:06.331 "psk": "key0", 00:16:06.331 "allow_unrecognized_csi": false, 00:16:06.331 "method": "bdev_nvme_attach_controller", 00:16:06.331 "req_id": 1 00:16:06.331 } 00:16:06.331 Got JSON-RPC error response 00:16:06.331 response: 00:16:06.331 { 00:16:06.332 "code": -5, 00:16:06.332 "message": "Input/output error" 00:16:06.332 } 00:16:06.332 11:46:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@37 -- # killprocess 85736 00:16:06.332 11:46:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 85736 ']' 00:16:06.332 11:46:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 85736 00:16:06.332 11:46:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:16:06.332 11:46:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:16:06.332 11:46:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 85736 00:16:06.332 killing process with pid 85736 00:16:06.332 Received shutdown signal, test time was about 10.000000 seconds 00:16:06.332 00:16:06.332 Latency(us) 00:16:06.332 [2024-11-28T11:46:36.458Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:16:06.332 [2024-11-28T11:46:36.458Z] =================================================================================================================== 00:16:06.332 [2024-11-28T11:46:36.458Z] Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:16:06.332 11:46:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_2 00:16:06.332 11:46:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_2 = sudo ']' 00:16:06.332 11:46:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 85736' 00:16:06.332 11:46:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 85736 00:16:06.332 11:46:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 85736 00:16:06.591 11:46:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@38 -- # return 1 00:16:06.591 11:46:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@655 -- # es=1 00:16:06.591 11:46:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:16:06.591 11:46:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:16:06.591 11:46:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:16:06.591 11:46:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@150 -- # NOT run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host2 /tmp/tmp.l1qYNNYfkl 00:16:06.591 11:46:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@652 -- # local es=0 00:16:06.591 11:46:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@654 -- # valid_exec_arg run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host2 /tmp/tmp.l1qYNNYfkl 
00:16:06.591 11:46:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@640 -- # local arg=run_bdevperf 00:16:06.591 11:46:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:16:06.591 11:46:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@644 -- # type -t run_bdevperf 00:16:06.591 11:46:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:16:06.591 11:46:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@655 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host2 /tmp/tmp.l1qYNNYfkl 00:16:06.591 11:46:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:16:06.591 11:46:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:16:06.592 11:46:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host2 00:16:06.592 11:46:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # psk=/tmp/tmp.l1qYNNYfkl 00:16:06.592 11:46:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:16:06.592 11:46:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=85757 00:16:06.592 11:46:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:16:06.592 11:46:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 85757 /var/tmp/bdevperf.sock 00:16:06.592 11:46:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@27 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:16:06.592 11:46:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 85757 ']' 00:16:06.592 11:46:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:16:06.592 11:46:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:16:06.592 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:16:06.592 11:46:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:16:06.592 11:46:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:16:06.592 11:46:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:16:06.592 [2024-11-28 11:46:36.659592] Starting SPDK v25.01-pre git sha1 35cd3e84d / DPDK 24.11.0-rc4 initialization... 00:16:06.592 [2024-11-28 11:46:36.659744] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid85757 ] 00:16:06.851 [2024-11-28 11:46:36.792247] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.11.0-rc4 is used. There is no support for it in SPDK. Enabled only for validation. 
00:16:06.851 [2024-11-28 11:46:36.811983] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:16:06.851 [2024-11-28 11:46:36.864426] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:16:06.851 [2024-11-28 11:46:36.921549] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:16:07.787 11:46:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:16:07.787 11:46:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:16:07.787 11:46:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@33 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.l1qYNNYfkl 00:16:08.047 11:46:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@35 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host2 --psk key0 00:16:08.306 [2024-11-28 11:46:38.233117] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:16:08.306 [2024-11-28 11:46:38.238165] tcp.c: 969:tcp_sock_get_key: *ERROR*: Could not find PSK for identity: NVMe0R01 nqn.2016-06.io.spdk:host2 nqn.2016-06.io.spdk:cnode1 00:16:08.306 [2024-11-28 11:46:38.238239] posix.c: 573:posix_sock_psk_find_session_server_cb: *ERROR*: Unable to find PSK for identity: NVMe0R01 nqn.2016-06.io.spdk:host2 nqn.2016-06.io.spdk:cnode1 00:16:08.306 [2024-11-28 11:46:38.238324] /home/vagrant/spdk_repo/spdk/include/spdk_internal/nvme_tcp.h: 421:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 107: Transport endpoint is not connected 00:16:08.306 [2024-11-28 11:46:38.238936] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x209e700 (107): Transport endpoint is not connected 00:16:08.306 [2024-11-28 11:46:38.239887] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x209e700 (9): Bad file descriptor 00:16:08.306 [2024-11-28 11:46:38.240883] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 0] Ctrlr is in error state 00:16:08.306 [2024-11-28 11:46:38.240903] nvme.c: 709:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 10.0.0.3 00:16:08.306 [2024-11-28 11:46:38.240922] nvme.c: 895:nvme_dummy_attach_fail_cb: *ERROR*: Failed to attach nvme ctrlr: trtype=TCP adrfam=IPv4 traddr=10.0.0.3 trsvcid=4420 subnqn=nqn.2016-06.io.spdk:cnode1, Operation not permitted 00:16:08.306 [2024-11-28 11:46:38.240936] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 0] in failed state. 
00:16:08.306 request: 00:16:08.306 { 00:16:08.306 "name": "TLSTEST", 00:16:08.306 "trtype": "tcp", 00:16:08.306 "traddr": "10.0.0.3", 00:16:08.306 "adrfam": "ipv4", 00:16:08.306 "trsvcid": "4420", 00:16:08.306 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:16:08.306 "hostnqn": "nqn.2016-06.io.spdk:host2", 00:16:08.306 "prchk_reftag": false, 00:16:08.306 "prchk_guard": false, 00:16:08.306 "hdgst": false, 00:16:08.306 "ddgst": false, 00:16:08.306 "psk": "key0", 00:16:08.306 "allow_unrecognized_csi": false, 00:16:08.306 "method": "bdev_nvme_attach_controller", 00:16:08.306 "req_id": 1 00:16:08.306 } 00:16:08.306 Got JSON-RPC error response 00:16:08.306 response: 00:16:08.306 { 00:16:08.306 "code": -5, 00:16:08.306 "message": "Input/output error" 00:16:08.306 } 00:16:08.306 11:46:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@37 -- # killprocess 85757 00:16:08.306 11:46:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 85757 ']' 00:16:08.306 11:46:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 85757 00:16:08.306 11:46:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:16:08.307 11:46:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:16:08.307 11:46:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 85757 00:16:08.307 killing process with pid 85757 00:16:08.307 Received shutdown signal, test time was about 10.000000 seconds 00:16:08.307 00:16:08.307 Latency(us) 00:16:08.307 [2024-11-28T11:46:38.433Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:16:08.307 [2024-11-28T11:46:38.433Z] =================================================================================================================== 00:16:08.307 [2024-11-28T11:46:38.433Z] Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:16:08.307 11:46:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_2 00:16:08.307 11:46:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_2 = sudo ']' 00:16:08.307 11:46:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 85757' 00:16:08.307 11:46:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 85757 00:16:08.307 11:46:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 85757 00:16:08.565 11:46:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@38 -- # return 1 00:16:08.565 11:46:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@655 -- # es=1 00:16:08.565 11:46:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:16:08.565 11:46:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:16:08.565 11:46:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:16:08.565 11:46:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@153 -- # NOT run_bdevperf nqn.2016-06.io.spdk:cnode2 nqn.2016-06.io.spdk:host1 /tmp/tmp.l1qYNNYfkl 00:16:08.565 11:46:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@652 -- # local es=0 00:16:08.565 11:46:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@654 -- # valid_exec_arg run_bdevperf nqn.2016-06.io.spdk:cnode2 nqn.2016-06.io.spdk:host1 /tmp/tmp.l1qYNNYfkl 
00:16:08.565 11:46:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@640 -- # local arg=run_bdevperf 00:16:08.565 11:46:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:16:08.565 11:46:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@644 -- # type -t run_bdevperf 00:16:08.565 11:46:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:16:08.565 11:46:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@655 -- # run_bdevperf nqn.2016-06.io.spdk:cnode2 nqn.2016-06.io.spdk:host1 /tmp/tmp.l1qYNNYfkl 00:16:08.565 11:46:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:16:08.565 11:46:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode2 00:16:08.565 11:46:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:16:08.565 11:46:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # psk=/tmp/tmp.l1qYNNYfkl 00:16:08.566 11:46:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:16:08.566 11:46:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=85788 00:16:08.566 11:46:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@27 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:16:08.566 11:46:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:16:08.566 11:46:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 85788 /var/tmp/bdevperf.sock 00:16:08.566 11:46:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 85788 ']' 00:16:08.566 11:46:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:16:08.566 11:46:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:16:08.566 11:46:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:16:08.566 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:16:08.566 11:46:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:16:08.566 11:46:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:16:08.566 [2024-11-28 11:46:38.531124] Starting SPDK v25.01-pre git sha1 35cd3e84d / DPDK 24.11.0-rc4 initialization... 00:16:08.566 [2024-11-28 11:46:38.531634] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid85788 ] 00:16:08.566 [2024-11-28 11:46:38.654892] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.11.0-rc4 is used. There is no support for it in SPDK. Enabled only for validation. 
00:16:08.566 [2024-11-28 11:46:38.681618] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:16:08.825 [2024-11-28 11:46:38.726154] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:16:08.825 [2024-11-28 11:46:38.781543] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:16:09.394 11:46:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:16:09.394 11:46:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:16:09.394 11:46:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@33 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.l1qYNNYfkl 00:16:09.653 11:46:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@35 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode2 -q nqn.2016-06.io.spdk:host1 --psk key0 00:16:09.913 [2024-11-28 11:46:39.909350] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:16:09.913 [2024-11-28 11:46:39.917674] tcp.c: 969:tcp_sock_get_key: *ERROR*: Could not find PSK for identity: NVMe0R01 nqn.2016-06.io.spdk:host1 nqn.2016-06.io.spdk:cnode2 00:16:09.913 [2024-11-28 11:46:39.918000] posix.c: 573:posix_sock_psk_find_session_server_cb: *ERROR*: Unable to find PSK for identity: NVMe0R01 nqn.2016-06.io.spdk:host1 nqn.2016-06.io.spdk:cnode2 00:16:09.913 [2024-11-28 11:46:39.918308] /home/vagrant/spdk_repo/spdk/include/spdk_internal/nvme_tcp.h: 421:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 107: Transport endpoint is not connected 00:16:09.913 [2024-11-28 11:46:39.918753] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x130b700 (107): Transport endpoint is not connected 00:16:09.913 [2024-11-28 11:46:39.919729] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x130b700 (9): Bad file descriptor 00:16:09.913 [2024-11-28 11:46:39.920725] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode2, 0] Ctrlr is in error state 00:16:09.913 [2024-11-28 11:46:39.920903] nvme.c: 709:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 10.0.0.3 00:16:09.913 [2024-11-28 11:46:39.921034] nvme.c: 895:nvme_dummy_attach_fail_cb: *ERROR*: Failed to attach nvme ctrlr: trtype=TCP adrfam=IPv4 traddr=10.0.0.3 trsvcid=4420 subnqn=nqn.2016-06.io.spdk:cnode2, Operation not permitted 00:16:09.913 [2024-11-28 11:46:39.921297] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode2, 0] in failed state. 
00:16:09.913 request: 00:16:09.913 { 00:16:09.913 "name": "TLSTEST", 00:16:09.913 "trtype": "tcp", 00:16:09.913 "traddr": "10.0.0.3", 00:16:09.913 "adrfam": "ipv4", 00:16:09.913 "trsvcid": "4420", 00:16:09.913 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:16:09.913 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:16:09.913 "prchk_reftag": false, 00:16:09.913 "prchk_guard": false, 00:16:09.913 "hdgst": false, 00:16:09.913 "ddgst": false, 00:16:09.913 "psk": "key0", 00:16:09.913 "allow_unrecognized_csi": false, 00:16:09.913 "method": "bdev_nvme_attach_controller", 00:16:09.913 "req_id": 1 00:16:09.913 } 00:16:09.913 Got JSON-RPC error response 00:16:09.913 response: 00:16:09.913 { 00:16:09.913 "code": -5, 00:16:09.913 "message": "Input/output error" 00:16:09.913 } 00:16:09.913 11:46:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@37 -- # killprocess 85788 00:16:09.913 11:46:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 85788 ']' 00:16:09.913 11:46:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 85788 00:16:09.913 11:46:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:16:09.913 11:46:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:16:09.913 11:46:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 85788 00:16:09.913 11:46:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_2 00:16:09.913 11:46:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_2 = sudo ']' 00:16:09.913 11:46:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 85788' 00:16:09.913 killing process with pid 85788 00:16:09.913 Received shutdown signal, test time was about 10.000000 seconds 00:16:09.913 00:16:09.913 Latency(us) 00:16:09.913 [2024-11-28T11:46:40.039Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:16:09.913 [2024-11-28T11:46:40.039Z] =================================================================================================================== 00:16:09.913 [2024-11-28T11:46:40.039Z] Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:16:09.913 11:46:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 85788 00:16:09.913 11:46:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 85788 00:16:10.173 11:46:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@38 -- # return 1 00:16:10.173 11:46:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@655 -- # es=1 00:16:10.173 11:46:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:16:10.173 11:46:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:16:10.173 11:46:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:16:10.173 11:46:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@156 -- # NOT run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 '' 00:16:10.173 11:46:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@652 -- # local es=0 00:16:10.173 11:46:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@654 -- # valid_exec_arg run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 '' 00:16:10.173 11:46:40 
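The three attach failures traced above are intentional: the same bdev_nvme_attach_controller call is retried with a key the target never registered (tmp.jUy0pOPXYV), with the right key but an unknown host NQN (host2), and with an unknown subsystem NQN (cnode2). In the NQN-mismatch cases the target logs "Could not find PSK for identity"; in the wrong-key case the client simply sees the connection drop; in all three the RPC returns the Input/output error captured in the JSON responses, which the NOT wrapper treats as the expected outcome. A small check in the same spirit (illustrative; bdev name chosen here, other values reused from the sketches above, and it assumes rpc.py exits non-zero on a JSON-RPC error) is:

    $RPC -s "$SOCK" keyring_file_add_key wrong_key0 /tmp/tmp.jUy0pOPXYV    # key the target does not know
    if $RPC -s "$SOCK" bdev_nvme_attach_controller -b TLSBAD -t tcp -a 10.0.0.3 -s 4420 -f ipv4 \
        -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk wrong_key0; then
        echo "unexpected success: attach with a mismatched PSK should fail" >&2
        exit 1
    fi
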
nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@640 -- # local arg=run_bdevperf 00:16:10.173 11:46:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:16:10.173 11:46:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@644 -- # type -t run_bdevperf 00:16:10.173 11:46:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:16:10.173 11:46:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@655 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 '' 00:16:10.173 11:46:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:16:10.173 11:46:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:16:10.173 11:46:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:16:10.173 11:46:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # psk= 00:16:10.173 11:46:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:16:10.173 11:46:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@27 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:16:10.173 11:46:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=85816 00:16:10.173 11:46:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:16:10.173 11:46:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 85816 /var/tmp/bdevperf.sock 00:16:10.173 11:46:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 85816 ']' 00:16:10.173 11:46:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:16:10.173 11:46:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:16:10.173 11:46:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:16:10.173 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:16:10.173 11:46:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:16:10.173 11:46:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:16:10.173 [2024-11-28 11:46:40.231364] Starting SPDK v25.01-pre git sha1 35cd3e84d / DPDK 24.11.0-rc4 initialization... 00:16:10.173 [2024-11-28 11:46:40.231689] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid85816 ] 00:16:10.432 [2024-11-28 11:46:40.360627] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.11.0-rc4 is used. There is no support for it in SPDK. Enabled only for validation. 
00:16:10.432 [2024-11-28 11:46:40.387847] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:16:10.432 [2024-11-28 11:46:40.437541] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:16:10.432 [2024-11-28 11:46:40.492906] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:16:10.432 11:46:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:16:10.432 11:46:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:16:10.432 11:46:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@33 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 '' 00:16:10.691 [2024-11-28 11:46:40.813638] keyring.c: 24:keyring_file_check_path: *ERROR*: Non-absolute paths are not allowed: 00:16:10.691 [2024-11-28 11:46:40.813722] keyring.c: 126:spdk_keyring_add_key: *ERROR*: Failed to add key 'key0' to the keyring 00:16:10.951 request: 00:16:10.951 { 00:16:10.951 "name": "key0", 00:16:10.951 "path": "", 00:16:10.951 "method": "keyring_file_add_key", 00:16:10.951 "req_id": 1 00:16:10.951 } 00:16:10.951 Got JSON-RPC error response 00:16:10.951 response: 00:16:10.951 { 00:16:10.951 "code": -1, 00:16:10.951 "message": "Operation not permitted" 00:16:10.951 } 00:16:10.951 11:46:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@35 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk key0 00:16:10.951 [2024-11-28 11:46:41.057818] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:16:10.951 [2024-11-28 11:46:41.057953] bdev_nvme.c:6722:spdk_bdev_nvme_create: *ERROR*: Could not load PSK: key0 00:16:10.951 request: 00:16:10.951 { 00:16:10.951 "name": "TLSTEST", 00:16:10.951 "trtype": "tcp", 00:16:10.951 "traddr": "10.0.0.3", 00:16:10.951 "adrfam": "ipv4", 00:16:10.951 "trsvcid": "4420", 00:16:10.951 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:16:10.951 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:16:10.951 "prchk_reftag": false, 00:16:10.951 "prchk_guard": false, 00:16:10.951 "hdgst": false, 00:16:10.951 "ddgst": false, 00:16:10.951 "psk": "key0", 00:16:10.951 "allow_unrecognized_csi": false, 00:16:10.951 "method": "bdev_nvme_attach_controller", 00:16:10.951 "req_id": 1 00:16:10.951 } 00:16:10.951 Got JSON-RPC error response 00:16:10.951 response: 00:16:10.951 { 00:16:10.951 "code": -126, 00:16:10.951 "message": "Required key not available" 00:16:10.951 } 00:16:11.211 11:46:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@37 -- # killprocess 85816 00:16:11.211 11:46:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 85816 ']' 00:16:11.211 11:46:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 85816 00:16:11.211 11:46:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:16:11.211 11:46:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:16:11.211 11:46:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 85816 00:16:11.211 killing process with pid 85816 00:16:11.211 Received shutdown signal, test time was about 10.000000 seconds 00:16:11.211 00:16:11.211 Latency(us) 00:16:11.211 [2024-11-28T11:46:41.337Z] Device 
Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:16:11.211 [2024-11-28T11:46:41.337Z] =================================================================================================================== 00:16:11.211 [2024-11-28T11:46:41.337Z] Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:16:11.211 11:46:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_2 00:16:11.211 11:46:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_2 = sudo ']' 00:16:11.211 11:46:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 85816' 00:16:11.211 11:46:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 85816 00:16:11.211 11:46:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 85816 00:16:11.211 11:46:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@38 -- # return 1 00:16:11.211 11:46:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@655 -- # es=1 00:16:11.211 11:46:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:16:11.211 11:46:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:16:11.211 11:46:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:16:11.211 11:46:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@159 -- # killprocess 85363 00:16:11.211 11:46:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 85363 ']' 00:16:11.211 11:46:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 85363 00:16:11.211 11:46:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:16:11.211 11:46:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:16:11.211 11:46:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 85363 00:16:11.211 killing process with pid 85363 00:16:11.211 11:46:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:16:11.211 11:46:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:16:11.211 11:46:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 85363' 00:16:11.211 11:46:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 85363 00:16:11.211 11:46:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 85363 00:16:11.470 11:46:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@160 -- # format_interchange_psk 00112233445566778899aabbccddeeff0011223344556677 2 00:16:11.470 11:46:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@743 -- # format_key NVMeTLSkey-1 00112233445566778899aabbccddeeff0011223344556677 2 00:16:11.470 11:46:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@730 -- # local prefix key digest 00:16:11.470 11:46:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@732 -- # prefix=NVMeTLSkey-1 00:16:11.470 11:46:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@732 -- # key=00112233445566778899aabbccddeeff0011223344556677 00:16:11.470 11:46:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@732 -- # digest=2 00:16:11.470 11:46:41 
nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@733 -- # python - 00:16:11.729 11:46:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@160 -- # key_long=NVMeTLSkey-1:02:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmYwMDExMjIzMzQ0NTU2Njc3wWXNJw==: 00:16:11.729 11:46:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@161 -- # mktemp 00:16:11.729 11:46:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@161 -- # key_long_path=/tmp/tmp.zzUp6hglUU 00:16:11.729 11:46:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@162 -- # echo -n NVMeTLSkey-1:02:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmYwMDExMjIzMzQ0NTU2Njc3wWXNJw==: 00:16:11.729 11:46:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@163 -- # chmod 0600 /tmp/tmp.zzUp6hglUU 00:16:11.729 11:46:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@164 -- # nvmfappstart -m 0x2 00:16:11.729 11:46:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:16:11.729 11:46:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@726 -- # xtrace_disable 00:16:11.729 11:46:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:16:11.730 11:46:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@509 -- # nvmfpid=85853 00:16:11.730 11:46:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@510 -- # waitforlisten 85853 00:16:11.730 11:46:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 85853 ']' 00:16:11.730 11:46:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@508 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:16:11.730 11:46:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:16:11.730 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:16:11.730 11:46:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:16:11.730 11:46:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:16:11.730 11:46:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:16:11.730 11:46:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:16:11.730 [2024-11-28 11:46:41.713644] Starting SPDK v25.01-pre git sha1 35cd3e84d / DPDK 24.11.0-rc4 initialization... 00:16:11.730 [2024-11-28 11:46:41.713755] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:16:11.730 [2024-11-28 11:46:41.841560] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.11.0-rc4 is used. There is no support for it in SPDK. Enabled only for validation. 00:16:11.988 [2024-11-28 11:46:41.868560] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:16:11.988 [2024-11-28 11:46:41.921249] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:16:11.988 [2024-11-28 11:46:41.921378] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
00:16:11.988 [2024-11-28 11:46:41.921411] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:16:11.988 [2024-11-28 11:46:41.921420] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:16:11.988 [2024-11-28 11:46:41.921428] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:16:11.988 [2024-11-28 11:46:41.921906] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:16:11.988 [2024-11-28 11:46:41.998486] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:16:12.925 11:46:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:16:12.925 11:46:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:16:12.925 11:46:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:16:12.925 11:46:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@732 -- # xtrace_disable 00:16:12.925 11:46:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:16:12.925 11:46:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:16:12.925 11:46:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@166 -- # setup_nvmf_tgt /tmp/tmp.zzUp6hglUU 00:16:12.925 11:46:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@50 -- # local key=/tmp/tmp.zzUp6hglUU 00:16:12.925 11:46:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@52 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:16:12.925 [2024-11-28 11:46:43.005217] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:16:12.925 11:46:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@53 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10 00:16:13.184 11:46:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 -k 00:16:13.443 [2024-11-28 11:46:43.485278] tcp.c:1031:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:16:13.443 [2024-11-28 11:46:43.485648] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:16:13.443 11:46:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@56 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0 00:16:13.701 malloc0 00:16:13.701 11:46:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:16:13.959 11:46:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@58 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py keyring_file_add_key key0 /tmp/tmp.zzUp6hglUU 00:16:14.217 11:46:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk key0 00:16:14.476 11:46:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@168 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.zzUp6hglUU 00:16:14.476 11:46:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 
00:16:14.476 11:46:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:16:14.476 11:46:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:16:14.476 11:46:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # psk=/tmp/tmp.zzUp6hglUU 00:16:14.476 11:46:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:16:14.476 11:46:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@27 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:16:14.476 11:46:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=85913 00:16:14.476 11:46:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:16:14.476 11:46:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 85913 /var/tmp/bdevperf.sock 00:16:14.476 11:46:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 85913 ']' 00:16:14.476 11:46:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:16:14.476 11:46:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:16:14.476 11:46:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:16:14.476 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:16:14.476 11:46:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:16:14.476 11:46:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:16:14.476 [2024-11-28 11:46:44.558980] Starting SPDK v25.01-pre git sha1 35cd3e84d / DPDK 24.11.0-rc4 initialization... 00:16:14.476 [2024-11-28 11:46:44.559281] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid85913 ] 00:16:14.735 [2024-11-28 11:46:44.678660] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.11.0-rc4 is used. There is no support for it in SPDK. Enabled only for validation. 
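On the initiator side, the bdevperf process started here (pid 85913) registers the same key file, attaches a TLS-enabled controller that shows up as bdev TLSTESTn1, and is then driven by perform_tests through the 10-second verify workload (queue depth 128, 4 KiB I/O) that finishes at roughly 3186 IOPS / 12.45 MiB/s with about 40 ms average latency. A sketch of the three initiator-side steps, arguments taken from the trace and both script paths shortened from the full repository paths:

  rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.zzUp6hglUU
  rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller \
      -b TLSTEST -t tcp -a 10.0.0.3 -s 4420 -f ipv4 \
      -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk key0
  bdevperf.py -t 20 -s /var/tmp/bdevperf.sock perform_tests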
00:16:14.735 [2024-11-28 11:46:44.701761] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:16:14.735 [2024-11-28 11:46:44.739197] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:16:14.735 [2024-11-28 11:46:44.793395] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:16:14.735 11:46:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:16:14.735 11:46:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:16:14.735 11:46:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@33 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.zzUp6hglUU 00:16:15.085 11:46:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@35 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk key0 00:16:15.357 [2024-11-28 11:46:45.407941] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:16:15.616 TLSTESTn1 00:16:15.616 11:46:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@42 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -t 20 -s /var/tmp/bdevperf.sock perform_tests 00:16:15.616 Running I/O for 10 seconds... 00:16:17.928 3200.00 IOPS, 12.50 MiB/s [2024-11-28T11:46:48.987Z] 3136.00 IOPS, 12.25 MiB/s [2024-11-28T11:46:49.923Z] 3146.00 IOPS, 12.29 MiB/s [2024-11-28T11:46:50.858Z] 3131.50 IOPS, 12.23 MiB/s [2024-11-28T11:46:51.795Z] 3172.40 IOPS, 12.39 MiB/s [2024-11-28T11:46:52.731Z] 3176.83 IOPS, 12.41 MiB/s [2024-11-28T11:46:53.693Z] 3142.43 IOPS, 12.28 MiB/s [2024-11-28T11:46:54.656Z] 3146.62 IOPS, 12.29 MiB/s [2024-11-28T11:46:56.045Z] 3160.89 IOPS, 12.35 MiB/s [2024-11-28T11:46:56.045Z] 3179.10 IOPS, 12.42 MiB/s 00:16:25.919 Latency(us) 00:16:25.919 [2024-11-28T11:46:56.045Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:16:25.919 Job: TLSTESTn1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096) 00:16:25.919 Verification LBA range: start 0x0 length 0x2000 00:16:25.919 TLSTESTn1 : 10.02 3186.18 12.45 0.00 0.00 40106.93 6434.44 35985.22 00:16:25.919 [2024-11-28T11:46:56.045Z] =================================================================================================================== 00:16:25.919 [2024-11-28T11:46:56.045Z] Total : 3186.18 12.45 0.00 0.00 40106.93 6434.44 35985.22 00:16:25.919 { 00:16:25.919 "results": [ 00:16:25.919 { 00:16:25.919 "job": "TLSTESTn1", 00:16:25.919 "core_mask": "0x4", 00:16:25.919 "workload": "verify", 00:16:25.919 "status": "finished", 00:16:25.919 "verify_range": { 00:16:25.919 "start": 0, 00:16:25.919 "length": 8192 00:16:25.919 }, 00:16:25.919 "queue_depth": 128, 00:16:25.919 "io_size": 4096, 00:16:25.919 "runtime": 10.017318, 00:16:25.919 "iops": 3186.1821697184814, 00:16:25.919 "mibps": 12.446024100462818, 00:16:25.919 "io_failed": 0, 00:16:25.919 "io_timeout": 0, 00:16:25.919 "avg_latency_us": 40106.92665738122, 00:16:25.919 "min_latency_us": 6434.443636363636, 00:16:25.919 "max_latency_us": 35985.22181818182 00:16:25.919 } 00:16:25.919 ], 00:16:25.919 "core_count": 1 00:16:25.919 } 00:16:25.919 11:46:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@45 -- # trap 'nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:16:25.919 11:46:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- 
target/tls.sh@46 -- # killprocess 85913 00:16:25.919 11:46:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 85913 ']' 00:16:25.919 11:46:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 85913 00:16:25.919 11:46:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:16:25.919 11:46:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:16:25.919 11:46:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 85913 00:16:25.919 killing process with pid 85913 00:16:25.919 Received shutdown signal, test time was about 10.000000 seconds 00:16:25.919 00:16:25.919 Latency(us) 00:16:25.919 [2024-11-28T11:46:56.045Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:16:25.919 [2024-11-28T11:46:56.045Z] =================================================================================================================== 00:16:25.919 [2024-11-28T11:46:56.045Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:16:25.919 11:46:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_2 00:16:25.919 11:46:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_2 = sudo ']' 00:16:25.919 11:46:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 85913' 00:16:25.919 11:46:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 85913 00:16:25.919 11:46:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 85913 00:16:25.919 11:46:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@171 -- # chmod 0666 /tmp/tmp.zzUp6hglUU 00:16:25.919 11:46:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@172 -- # NOT run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.zzUp6hglUU 00:16:25.919 11:46:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@652 -- # local es=0 00:16:25.919 11:46:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@654 -- # valid_exec_arg run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.zzUp6hglUU 00:16:25.919 11:46:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@640 -- # local arg=run_bdevperf 00:16:25.919 11:46:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:16:25.919 11:46:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@644 -- # type -t run_bdevperf 00:16:25.919 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 
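With the baseline TLS run complete, target/tls.sh lines 171-172 flip the key file to mode 0666 and repeat the bdevperf flow inside NOT, so this run is expected to fail: keyring_file enforces owner-only permissions on key files, and the launch that follows (pid 86039) should not get past key registration. The expected failure, sketched with arguments from the trace:

  chmod 0666 /tmp/tmp.zzUp6hglUU
  rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.zzUp6hglUU
  # -> "Invalid permissions for key file '/tmp/tmp.zzUp6hglUU': 0100666", JSON-RPC -1;
  #    the later attach then fails with -126 "Required key not available"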
00:16:25.919 11:46:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:16:25.919 11:46:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@655 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.zzUp6hglUU 00:16:25.919 11:46:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:16:25.920 11:46:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:16:25.920 11:46:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:16:25.920 11:46:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # psk=/tmp/tmp.zzUp6hglUU 00:16:25.920 11:46:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:16:25.920 11:46:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=86039 00:16:25.920 11:46:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:16:25.920 11:46:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@27 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:16:25.920 11:46:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 86039 /var/tmp/bdevperf.sock 00:16:25.920 11:46:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 86039 ']' 00:16:25.920 11:46:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:16:25.920 11:46:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:16:25.920 11:46:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:16:25.920 11:46:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:16:25.920 11:46:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:16:25.920 [2024-11-28 11:46:55.954137] Starting SPDK v25.01-pre git sha1 35cd3e84d / DPDK 24.11.0-rc4 initialization... 00:16:25.920 [2024-11-28 11:46:55.954556] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid86039 ] 00:16:26.179 [2024-11-28 11:46:56.077903] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.11.0-rc4 is used. There is no support for it in SPDK. Enabled only for validation. 
00:16:26.179 [2024-11-28 11:46:56.105176] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:16:26.179 [2024-11-28 11:46:56.144708] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:16:26.179 [2024-11-28 11:46:56.200854] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:16:27.117 11:46:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:16:27.117 11:46:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:16:27.117 11:46:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@33 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.zzUp6hglUU 00:16:27.117 [2024-11-28 11:46:57.149589] keyring.c: 36:keyring_file_check_path: *ERROR*: Invalid permissions for key file '/tmp/tmp.zzUp6hglUU': 0100666 00:16:27.117 [2024-11-28 11:46:57.149889] keyring.c: 126:spdk_keyring_add_key: *ERROR*: Failed to add key 'key0' to the keyring 00:16:27.117 request: 00:16:27.117 { 00:16:27.117 "name": "key0", 00:16:27.117 "path": "/tmp/tmp.zzUp6hglUU", 00:16:27.117 "method": "keyring_file_add_key", 00:16:27.117 "req_id": 1 00:16:27.117 } 00:16:27.117 Got JSON-RPC error response 00:16:27.117 response: 00:16:27.117 { 00:16:27.117 "code": -1, 00:16:27.117 "message": "Operation not permitted" 00:16:27.117 } 00:16:27.117 11:46:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@35 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk key0 00:16:27.376 [2024-11-28 11:46:57.389797] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:16:27.377 [2024-11-28 11:46:57.389889] bdev_nvme.c:6722:spdk_bdev_nvme_create: *ERROR*: Could not load PSK: key0 00:16:27.377 request: 00:16:27.377 { 00:16:27.377 "name": "TLSTEST", 00:16:27.377 "trtype": "tcp", 00:16:27.377 "traddr": "10.0.0.3", 00:16:27.377 "adrfam": "ipv4", 00:16:27.377 "trsvcid": "4420", 00:16:27.377 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:16:27.377 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:16:27.377 "prchk_reftag": false, 00:16:27.377 "prchk_guard": false, 00:16:27.377 "hdgst": false, 00:16:27.377 "ddgst": false, 00:16:27.377 "psk": "key0", 00:16:27.377 "allow_unrecognized_csi": false, 00:16:27.377 "method": "bdev_nvme_attach_controller", 00:16:27.377 "req_id": 1 00:16:27.377 } 00:16:27.377 Got JSON-RPC error response 00:16:27.377 response: 00:16:27.377 { 00:16:27.377 "code": -126, 00:16:27.377 "message": "Required key not available" 00:16:27.377 } 00:16:27.377 11:46:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@37 -- # killprocess 86039 00:16:27.377 11:46:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 86039 ']' 00:16:27.377 11:46:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 86039 00:16:27.377 11:46:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:16:27.377 11:46:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:16:27.377 11:46:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 86039 00:16:27.377 killing process with pid 86039 00:16:27.377 Received shutdown signal, test time was about 10.000000 seconds 00:16:27.377 00:16:27.377 
Latency(us) 00:16:27.377 [2024-11-28T11:46:57.503Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:16:27.377 [2024-11-28T11:46:57.503Z] =================================================================================================================== 00:16:27.377 [2024-11-28T11:46:57.503Z] Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:16:27.377 11:46:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_2 00:16:27.377 11:46:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_2 = sudo ']' 00:16:27.377 11:46:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 86039' 00:16:27.377 11:46:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 86039 00:16:27.377 11:46:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 86039 00:16:27.637 11:46:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@38 -- # return 1 00:16:27.637 11:46:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@655 -- # es=1 00:16:27.637 11:46:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:16:27.637 11:46:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:16:27.637 11:46:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:16:27.637 11:46:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@175 -- # killprocess 85853 00:16:27.637 11:46:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 85853 ']' 00:16:27.637 11:46:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 85853 00:16:27.637 11:46:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:16:27.637 11:46:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:16:27.637 11:46:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 85853 00:16:27.637 killing process with pid 85853 00:16:27.637 11:46:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:16:27.637 11:46:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:16:27.637 11:46:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 85853' 00:16:27.637 11:46:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 85853 00:16:27.637 11:46:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 85853 00:16:27.897 11:46:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@176 -- # nvmfappstart -m 0x2 00:16:27.897 11:46:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:16:27.897 11:46:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@726 -- # xtrace_disable 00:16:27.897 11:46:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:16:27.897 11:46:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@509 -- # nvmfpid=86078 00:16:27.897 11:46:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@508 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:16:27.897 11:46:57 
nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@510 -- # waitforlisten 86078 00:16:27.897 11:46:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 86078 ']' 00:16:27.897 11:46:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:16:27.897 11:46:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:16:27.897 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:16:27.897 11:46:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:16:27.897 11:46:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:16:27.897 11:46:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:16:28.156 [2024-11-28 11:46:58.041902] Starting SPDK v25.01-pre git sha1 35cd3e84d / DPDK 24.11.0-rc4 initialization... 00:16:28.156 [2024-11-28 11:46:58.042288] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:16:28.156 [2024-11-28 11:46:58.180289] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.11.0-rc4 is used. There is no support for it in SPDK. Enabled only for validation. 00:16:28.156 [2024-11-28 11:46:58.198975] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:16:28.156 [2024-11-28 11:46:58.258472] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:16:28.156 [2024-11-28 11:46:58.258756] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:16:28.156 [2024-11-28 11:46:58.258776] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:16:28.156 [2024-11-28 11:46:58.258785] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:16:28.156 [2024-11-28 11:46:58.258792] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
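The same permission check is then exercised on the target side: target/tls.sh line 176 brings up a fresh nvmf_tgt (pid 86078), and line 178 runs setup_nvmf_tgt against the still world-readable key inside NOT. Transport, subsystem, listener and namespace creation all succeed, but keyring_file_add_key rejects the 0666 file, so the final host registration fails because key0 does not exist. The step expected to error out, as traced below:

  rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk key0
  # -> "Key 'key0' does not exist" / "Unable to add host to TCP transport", JSON-RPC -32603 "Internal error"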
00:16:28.156 [2024-11-28 11:46:58.259280] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:16:28.415 [2024-11-28 11:46:58.331655] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:16:28.984 11:46:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:16:28.984 11:46:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:16:28.984 11:46:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:16:28.984 11:46:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@732 -- # xtrace_disable 00:16:28.984 11:46:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:16:28.984 11:46:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:16:28.984 11:46:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@178 -- # NOT setup_nvmf_tgt /tmp/tmp.zzUp6hglUU 00:16:28.984 11:46:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@652 -- # local es=0 00:16:28.984 11:46:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@654 -- # valid_exec_arg setup_nvmf_tgt /tmp/tmp.zzUp6hglUU 00:16:28.984 11:46:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@640 -- # local arg=setup_nvmf_tgt 00:16:28.984 11:46:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:16:28.984 11:46:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@644 -- # type -t setup_nvmf_tgt 00:16:28.984 11:46:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:16:28.984 11:46:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@655 -- # setup_nvmf_tgt /tmp/tmp.zzUp6hglUU 00:16:28.984 11:46:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@50 -- # local key=/tmp/tmp.zzUp6hglUU 00:16:28.984 11:46:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@52 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:16:29.243 [2024-11-28 11:46:59.216447] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:16:29.243 11:46:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@53 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10 00:16:29.502 11:46:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 -k 00:16:29.761 [2024-11-28 11:46:59.696583] tcp.c:1031:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:16:29.761 [2024-11-28 11:46:59.696902] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:16:29.761 11:46:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@56 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0 00:16:30.019 malloc0 00:16:30.019 11:46:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:16:30.278 11:47:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@58 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py keyring_file_add_key key0 /tmp/tmp.zzUp6hglUU 00:16:30.536 
[2024-11-28 11:47:00.435665] keyring.c: 36:keyring_file_check_path: *ERROR*: Invalid permissions for key file '/tmp/tmp.zzUp6hglUU': 0100666 00:16:30.536 [2024-11-28 11:47:00.435727] keyring.c: 126:spdk_keyring_add_key: *ERROR*: Failed to add key 'key0' to the keyring 00:16:30.536 request: 00:16:30.536 { 00:16:30.536 "name": "key0", 00:16:30.536 "path": "/tmp/tmp.zzUp6hglUU", 00:16:30.536 "method": "keyring_file_add_key", 00:16:30.536 "req_id": 1 00:16:30.536 } 00:16:30.536 Got JSON-RPC error response 00:16:30.536 response: 00:16:30.536 { 00:16:30.536 "code": -1, 00:16:30.536 "message": "Operation not permitted" 00:16:30.536 } 00:16:30.536 11:47:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk key0 00:16:30.794 [2024-11-28 11:47:00.683768] tcp.c:3792:nvmf_tcp_subsystem_add_host: *ERROR*: Key 'key0' does not exist 00:16:30.794 [2024-11-28 11:47:00.683862] subsystem.c:1051:spdk_nvmf_subsystem_add_host_ext: *ERROR*: Unable to add host to TCP transport 00:16:30.794 request: 00:16:30.794 { 00:16:30.794 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:16:30.794 "host": "nqn.2016-06.io.spdk:host1", 00:16:30.794 "psk": "key0", 00:16:30.794 "method": "nvmf_subsystem_add_host", 00:16:30.794 "req_id": 1 00:16:30.794 } 00:16:30.794 Got JSON-RPC error response 00:16:30.794 response: 00:16:30.794 { 00:16:30.794 "code": -32603, 00:16:30.794 "message": "Internal error" 00:16:30.794 } 00:16:30.794 11:47:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@655 -- # es=1 00:16:30.794 11:47:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:16:30.794 11:47:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:16:30.794 11:47:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:16:30.794 11:47:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@181 -- # killprocess 86078 00:16:30.794 11:47:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 86078 ']' 00:16:30.794 11:47:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 86078 00:16:30.794 11:47:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:16:30.794 11:47:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:16:30.794 11:47:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 86078 00:16:30.794 killing process with pid 86078 00:16:30.794 11:47:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:16:30.794 11:47:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:16:30.794 11:47:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 86078' 00:16:30.795 11:47:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 86078 00:16:30.795 11:47:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 86078 00:16:31.053 11:47:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@182 -- # chmod 0600 /tmp/tmp.zzUp6hglUU 00:16:31.053 11:47:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@185 -- # nvmfappstart -m 0x2 00:16:31.053 11:47:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- 
nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:16:31.053 11:47:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@726 -- # xtrace_disable 00:16:31.053 11:47:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:16:31.053 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:16:31.053 11:47:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@509 -- # nvmfpid=86147 00:16:31.053 11:47:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@510 -- # waitforlisten 86147 00:16:31.053 11:47:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 86147 ']' 00:16:31.053 11:47:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:16:31.053 11:47:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@508 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:16:31.053 11:47:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:16:31.053 11:47:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:16:31.053 11:47:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:16:31.053 11:47:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:16:31.053 [2024-11-28 11:47:01.084766] Starting SPDK v25.01-pre git sha1 35cd3e84d / DPDK 24.11.0-rc4 initialization... 00:16:31.053 [2024-11-28 11:47:01.084876] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:16:31.312 [2024-11-28 11:47:01.212483] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.11.0-rc4 is used. There is no support for it in SPDK. Enabled only for validation. 00:16:31.312 [2024-11-28 11:47:01.238680] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:16:31.312 [2024-11-28 11:47:01.297186] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:16:31.312 [2024-11-28 11:47:01.297248] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:16:31.312 [2024-11-28 11:47:01.297275] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:16:31.312 [2024-11-28 11:47:01.297283] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:16:31.312 [2024-11-28 11:47:01.297290] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
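Before this target (pid 86147) was started, target/tls.sh line 182 restored the key file to mode 0600, so the setup_nvmf_tgt run at line 186 below completes cleanly: TCP transport, subsystem nqn.2016-06.io.spdk:cnode1, a TLS listener on 10.0.0.3:4420, the malloc0 namespace, key0 and the host entry are all created without errors. The only change relative to the failing case above is the permission fix:

  chmod 0600 /tmp/tmp.zzUp6hglUU   # owner read/write only, accepted by keyring_file_check_path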
00:16:31.312 [2024-11-28 11:47:01.297813] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:16:31.312 [2024-11-28 11:47:01.370144] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:16:32.252 11:47:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:16:32.252 11:47:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:16:32.252 11:47:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:16:32.252 11:47:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@732 -- # xtrace_disable 00:16:32.252 11:47:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:16:32.252 11:47:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:16:32.252 11:47:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@186 -- # setup_nvmf_tgt /tmp/tmp.zzUp6hglUU 00:16:32.252 11:47:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@50 -- # local key=/tmp/tmp.zzUp6hglUU 00:16:32.252 11:47:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@52 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:16:32.531 [2024-11-28 11:47:02.434231] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:16:32.531 11:47:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@53 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10 00:16:32.802 11:47:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 -k 00:16:33.060 [2024-11-28 11:47:03.002400] tcp.c:1031:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:16:33.060 [2024-11-28 11:47:03.002897] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:16:33.060 11:47:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@56 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0 00:16:33.318 malloc0 00:16:33.318 11:47:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:16:33.577 11:47:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@58 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py keyring_file_add_key key0 /tmp/tmp.zzUp6hglUU 00:16:33.836 11:47:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk key0 00:16:34.095 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 
00:16:34.095 11:47:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@189 -- # bdevperf_pid=86207 00:16:34.095 11:47:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@188 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:16:34.095 11:47:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@191 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:16:34.095 11:47:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@192 -- # waitforlisten 86207 /var/tmp/bdevperf.sock 00:16:34.095 11:47:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 86207 ']' 00:16:34.095 11:47:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:16:34.095 11:47:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:16:34.095 11:47:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:16:34.095 11:47:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:16:34.095 11:47:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:16:34.095 [2024-11-28 11:47:04.066837] Starting SPDK v25.01-pre git sha1 35cd3e84d / DPDK 24.11.0-rc4 initialization... 00:16:34.095 [2024-11-28 11:47:04.067152] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid86207 ] 00:16:34.095 [2024-11-28 11:47:04.189278] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.11.0-rc4 is used. There is no support for it in SPDK. Enabled only for validation. 
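This bdevperf instance (pid 86207) is started only so that both ends of a working TLS connection can be captured: target/tls.sh line 193 registers key0 from the key file, line 194 attaches the TLSTEST controller (producing TLSTESTn1), and lines 198-199 dump the configurations with save_config. The JSON that follows is the target configuration (tgtconf) and then the bdevperf configuration (bdevperfconf); note key0 under the keyring subsystem, uring as the default sock implementation, and "secure_channel": true on the TCP listener. The two dump commands, with rpc.py shortened from the full repository path:

  rpc.py save_config                               # target-side configuration (tgtconf)
  rpc.py -s /var/tmp/bdevperf.sock save_config     # bdevperf-side configuration (bdevperfconf)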
00:16:34.095 [2024-11-28 11:47:04.219138] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:16:34.354 [2024-11-28 11:47:04.264381] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:16:34.354 [2024-11-28 11:47:04.320247] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:16:34.922 11:47:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:16:34.922 11:47:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:16:34.922 11:47:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@193 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.zzUp6hglUU 00:16:35.180 11:47:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@194 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk key0 00:16:35.437 [2024-11-28 11:47:05.497663] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:16:35.695 TLSTESTn1 00:16:35.695 11:47:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@198 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py save_config 00:16:35.954 11:47:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@198 -- # tgtconf='{ 00:16:35.954 "subsystems": [ 00:16:35.954 { 00:16:35.954 "subsystem": "keyring", 00:16:35.954 "config": [ 00:16:35.954 { 00:16:35.954 "method": "keyring_file_add_key", 00:16:35.954 "params": { 00:16:35.954 "name": "key0", 00:16:35.954 "path": "/tmp/tmp.zzUp6hglUU" 00:16:35.954 } 00:16:35.954 } 00:16:35.954 ] 00:16:35.954 }, 00:16:35.954 { 00:16:35.954 "subsystem": "iobuf", 00:16:35.954 "config": [ 00:16:35.954 { 00:16:35.954 "method": "iobuf_set_options", 00:16:35.954 "params": { 00:16:35.954 "small_pool_count": 8192, 00:16:35.954 "large_pool_count": 1024, 00:16:35.954 "small_bufsize": 8192, 00:16:35.954 "large_bufsize": 135168, 00:16:35.954 "enable_numa": false 00:16:35.954 } 00:16:35.954 } 00:16:35.954 ] 00:16:35.954 }, 00:16:35.954 { 00:16:35.954 "subsystem": "sock", 00:16:35.954 "config": [ 00:16:35.954 { 00:16:35.954 "method": "sock_set_default_impl", 00:16:35.954 "params": { 00:16:35.954 "impl_name": "uring" 00:16:35.954 } 00:16:35.954 }, 00:16:35.954 { 00:16:35.954 "method": "sock_impl_set_options", 00:16:35.954 "params": { 00:16:35.954 "impl_name": "ssl", 00:16:35.954 "recv_buf_size": 4096, 00:16:35.954 "send_buf_size": 4096, 00:16:35.954 "enable_recv_pipe": true, 00:16:35.954 "enable_quickack": false, 00:16:35.954 "enable_placement_id": 0, 00:16:35.954 "enable_zerocopy_send_server": true, 00:16:35.954 "enable_zerocopy_send_client": false, 00:16:35.954 "zerocopy_threshold": 0, 00:16:35.954 "tls_version": 0, 00:16:35.954 "enable_ktls": false 00:16:35.954 } 00:16:35.954 }, 00:16:35.954 { 00:16:35.954 "method": "sock_impl_set_options", 00:16:35.954 "params": { 00:16:35.954 "impl_name": "posix", 00:16:35.954 "recv_buf_size": 2097152, 00:16:35.954 "send_buf_size": 2097152, 00:16:35.954 "enable_recv_pipe": true, 00:16:35.954 "enable_quickack": false, 00:16:35.954 "enable_placement_id": 0, 00:16:35.954 "enable_zerocopy_send_server": true, 00:16:35.954 "enable_zerocopy_send_client": false, 00:16:35.954 "zerocopy_threshold": 0, 00:16:35.954 "tls_version": 0, 00:16:35.955 "enable_ktls": false 00:16:35.955 } 00:16:35.955 }, 00:16:35.955 { 00:16:35.955 "method": 
"sock_impl_set_options", 00:16:35.955 "params": { 00:16:35.955 "impl_name": "uring", 00:16:35.955 "recv_buf_size": 2097152, 00:16:35.955 "send_buf_size": 2097152, 00:16:35.955 "enable_recv_pipe": true, 00:16:35.955 "enable_quickack": false, 00:16:35.955 "enable_placement_id": 0, 00:16:35.955 "enable_zerocopy_send_server": false, 00:16:35.955 "enable_zerocopy_send_client": false, 00:16:35.955 "zerocopy_threshold": 0, 00:16:35.955 "tls_version": 0, 00:16:35.955 "enable_ktls": false 00:16:35.955 } 00:16:35.955 } 00:16:35.955 ] 00:16:35.955 }, 00:16:35.955 { 00:16:35.955 "subsystem": "vmd", 00:16:35.955 "config": [] 00:16:35.955 }, 00:16:35.955 { 00:16:35.955 "subsystem": "accel", 00:16:35.955 "config": [ 00:16:35.955 { 00:16:35.955 "method": "accel_set_options", 00:16:35.955 "params": { 00:16:35.955 "small_cache_size": 128, 00:16:35.955 "large_cache_size": 16, 00:16:35.955 "task_count": 2048, 00:16:35.955 "sequence_count": 2048, 00:16:35.955 "buf_count": 2048 00:16:35.955 } 00:16:35.955 } 00:16:35.955 ] 00:16:35.955 }, 00:16:35.955 { 00:16:35.955 "subsystem": "bdev", 00:16:35.955 "config": [ 00:16:35.955 { 00:16:35.955 "method": "bdev_set_options", 00:16:35.955 "params": { 00:16:35.955 "bdev_io_pool_size": 65535, 00:16:35.955 "bdev_io_cache_size": 256, 00:16:35.955 "bdev_auto_examine": true, 00:16:35.955 "iobuf_small_cache_size": 128, 00:16:35.955 "iobuf_large_cache_size": 16 00:16:35.955 } 00:16:35.955 }, 00:16:35.955 { 00:16:35.955 "method": "bdev_raid_set_options", 00:16:35.955 "params": { 00:16:35.955 "process_window_size_kb": 1024, 00:16:35.955 "process_max_bandwidth_mb_sec": 0 00:16:35.955 } 00:16:35.955 }, 00:16:35.955 { 00:16:35.955 "method": "bdev_iscsi_set_options", 00:16:35.955 "params": { 00:16:35.955 "timeout_sec": 30 00:16:35.955 } 00:16:35.955 }, 00:16:35.955 { 00:16:35.955 "method": "bdev_nvme_set_options", 00:16:35.955 "params": { 00:16:35.955 "action_on_timeout": "none", 00:16:35.955 "timeout_us": 0, 00:16:35.955 "timeout_admin_us": 0, 00:16:35.955 "keep_alive_timeout_ms": 10000, 00:16:35.955 "arbitration_burst": 0, 00:16:35.955 "low_priority_weight": 0, 00:16:35.955 "medium_priority_weight": 0, 00:16:35.955 "high_priority_weight": 0, 00:16:35.955 "nvme_adminq_poll_period_us": 10000, 00:16:35.955 "nvme_ioq_poll_period_us": 0, 00:16:35.955 "io_queue_requests": 0, 00:16:35.955 "delay_cmd_submit": true, 00:16:35.955 "transport_retry_count": 4, 00:16:35.955 "bdev_retry_count": 3, 00:16:35.955 "transport_ack_timeout": 0, 00:16:35.955 "ctrlr_loss_timeout_sec": 0, 00:16:35.955 "reconnect_delay_sec": 0, 00:16:35.955 "fast_io_fail_timeout_sec": 0, 00:16:35.955 "disable_auto_failback": false, 00:16:35.955 "generate_uuids": false, 00:16:35.955 "transport_tos": 0, 00:16:35.955 "nvme_error_stat": false, 00:16:35.955 "rdma_srq_size": 0, 00:16:35.955 "io_path_stat": false, 00:16:35.955 "allow_accel_sequence": false, 00:16:35.955 "rdma_max_cq_size": 0, 00:16:35.955 "rdma_cm_event_timeout_ms": 0, 00:16:35.955 "dhchap_digests": [ 00:16:35.955 "sha256", 00:16:35.955 "sha384", 00:16:35.955 "sha512" 00:16:35.955 ], 00:16:35.955 "dhchap_dhgroups": [ 00:16:35.955 "null", 00:16:35.955 "ffdhe2048", 00:16:35.955 "ffdhe3072", 00:16:35.955 "ffdhe4096", 00:16:35.955 "ffdhe6144", 00:16:35.955 "ffdhe8192" 00:16:35.955 ] 00:16:35.955 } 00:16:35.955 }, 00:16:35.955 { 00:16:35.955 "method": "bdev_nvme_set_hotplug", 00:16:35.955 "params": { 00:16:35.955 "period_us": 100000, 00:16:35.955 "enable": false 00:16:35.955 } 00:16:35.955 }, 00:16:35.955 { 00:16:35.955 "method": "bdev_malloc_create", 00:16:35.955 
"params": { 00:16:35.955 "name": "malloc0", 00:16:35.955 "num_blocks": 8192, 00:16:35.955 "block_size": 4096, 00:16:35.955 "physical_block_size": 4096, 00:16:35.955 "uuid": "950bf216-f20b-4f3a-8a09-dae2836915d0", 00:16:35.955 "optimal_io_boundary": 0, 00:16:35.955 "md_size": 0, 00:16:35.955 "dif_type": 0, 00:16:35.955 "dif_is_head_of_md": false, 00:16:35.955 "dif_pi_format": 0 00:16:35.955 } 00:16:35.955 }, 00:16:35.955 { 00:16:35.955 "method": "bdev_wait_for_examine" 00:16:35.955 } 00:16:35.955 ] 00:16:35.955 }, 00:16:35.955 { 00:16:35.955 "subsystem": "nbd", 00:16:35.955 "config": [] 00:16:35.955 }, 00:16:35.955 { 00:16:35.955 "subsystem": "scheduler", 00:16:35.955 "config": [ 00:16:35.955 { 00:16:35.955 "method": "framework_set_scheduler", 00:16:35.955 "params": { 00:16:35.955 "name": "static" 00:16:35.955 } 00:16:35.955 } 00:16:35.955 ] 00:16:35.955 }, 00:16:35.955 { 00:16:35.955 "subsystem": "nvmf", 00:16:35.955 "config": [ 00:16:35.955 { 00:16:35.955 "method": "nvmf_set_config", 00:16:35.955 "params": { 00:16:35.955 "discovery_filter": "match_any", 00:16:35.955 "admin_cmd_passthru": { 00:16:35.955 "identify_ctrlr": false 00:16:35.955 }, 00:16:35.955 "dhchap_digests": [ 00:16:35.955 "sha256", 00:16:35.955 "sha384", 00:16:35.955 "sha512" 00:16:35.955 ], 00:16:35.955 "dhchap_dhgroups": [ 00:16:35.955 "null", 00:16:35.955 "ffdhe2048", 00:16:35.955 "ffdhe3072", 00:16:35.955 "ffdhe4096", 00:16:35.955 "ffdhe6144", 00:16:35.955 "ffdhe8192" 00:16:35.955 ] 00:16:35.955 } 00:16:35.955 }, 00:16:35.955 { 00:16:35.955 "method": "nvmf_set_max_subsystems", 00:16:35.955 "params": { 00:16:35.955 "max_subsystems": 1024 00:16:35.955 } 00:16:35.955 }, 00:16:35.955 { 00:16:35.955 "method": "nvmf_set_crdt", 00:16:35.955 "params": { 00:16:35.955 "crdt1": 0, 00:16:35.955 "crdt2": 0, 00:16:35.955 "crdt3": 0 00:16:35.955 } 00:16:35.955 }, 00:16:35.955 { 00:16:35.955 "method": "nvmf_create_transport", 00:16:35.955 "params": { 00:16:35.955 "trtype": "TCP", 00:16:35.955 "max_queue_depth": 128, 00:16:35.955 "max_io_qpairs_per_ctrlr": 127, 00:16:35.955 "in_capsule_data_size": 4096, 00:16:35.955 "max_io_size": 131072, 00:16:35.955 "io_unit_size": 131072, 00:16:35.955 "max_aq_depth": 128, 00:16:35.955 "num_shared_buffers": 511, 00:16:35.955 "buf_cache_size": 4294967295, 00:16:35.955 "dif_insert_or_strip": false, 00:16:35.955 "zcopy": false, 00:16:35.955 "c2h_success": false, 00:16:35.955 "sock_priority": 0, 00:16:35.955 "abort_timeout_sec": 1, 00:16:35.955 "ack_timeout": 0, 00:16:35.955 "data_wr_pool_size": 0 00:16:35.955 } 00:16:35.955 }, 00:16:35.955 { 00:16:35.955 "method": "nvmf_create_subsystem", 00:16:35.955 "params": { 00:16:35.955 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:16:35.955 "allow_any_host": false, 00:16:35.955 "serial_number": "SPDK00000000000001", 00:16:35.955 "model_number": "SPDK bdev Controller", 00:16:35.955 "max_namespaces": 10, 00:16:35.955 "min_cntlid": 1, 00:16:35.955 "max_cntlid": 65519, 00:16:35.955 "ana_reporting": false 00:16:35.955 } 00:16:35.955 }, 00:16:35.955 { 00:16:35.955 "method": "nvmf_subsystem_add_host", 00:16:35.955 "params": { 00:16:35.955 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:16:35.955 "host": "nqn.2016-06.io.spdk:host1", 00:16:35.955 "psk": "key0" 00:16:35.955 } 00:16:35.955 }, 00:16:35.955 { 00:16:35.955 "method": "nvmf_subsystem_add_ns", 00:16:35.955 "params": { 00:16:35.955 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:16:35.955 "namespace": { 00:16:35.955 "nsid": 1, 00:16:35.955 "bdev_name": "malloc0", 00:16:35.955 "nguid": "950BF216F20B4F3A8A09DAE2836915D0", 00:16:35.955 
"uuid": "950bf216-f20b-4f3a-8a09-dae2836915d0", 00:16:35.955 "no_auto_visible": false 00:16:35.956 } 00:16:35.956 } 00:16:35.956 }, 00:16:35.956 { 00:16:35.956 "method": "nvmf_subsystem_add_listener", 00:16:35.956 "params": { 00:16:35.956 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:16:35.956 "listen_address": { 00:16:35.956 "trtype": "TCP", 00:16:35.956 "adrfam": "IPv4", 00:16:35.956 "traddr": "10.0.0.3", 00:16:35.956 "trsvcid": "4420" 00:16:35.956 }, 00:16:35.956 "secure_channel": true 00:16:35.956 } 00:16:35.956 } 00:16:35.956 ] 00:16:35.956 } 00:16:35.956 ] 00:16:35.956 }' 00:16:35.956 11:47:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@199 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock save_config 00:16:36.215 11:47:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@199 -- # bdevperfconf='{ 00:16:36.215 "subsystems": [ 00:16:36.215 { 00:16:36.215 "subsystem": "keyring", 00:16:36.215 "config": [ 00:16:36.215 { 00:16:36.215 "method": "keyring_file_add_key", 00:16:36.215 "params": { 00:16:36.215 "name": "key0", 00:16:36.215 "path": "/tmp/tmp.zzUp6hglUU" 00:16:36.215 } 00:16:36.215 } 00:16:36.215 ] 00:16:36.215 }, 00:16:36.215 { 00:16:36.215 "subsystem": "iobuf", 00:16:36.215 "config": [ 00:16:36.215 { 00:16:36.215 "method": "iobuf_set_options", 00:16:36.215 "params": { 00:16:36.215 "small_pool_count": 8192, 00:16:36.215 "large_pool_count": 1024, 00:16:36.215 "small_bufsize": 8192, 00:16:36.215 "large_bufsize": 135168, 00:16:36.215 "enable_numa": false 00:16:36.215 } 00:16:36.215 } 00:16:36.215 ] 00:16:36.215 }, 00:16:36.215 { 00:16:36.215 "subsystem": "sock", 00:16:36.215 "config": [ 00:16:36.215 { 00:16:36.215 "method": "sock_set_default_impl", 00:16:36.215 "params": { 00:16:36.215 "impl_name": "uring" 00:16:36.215 } 00:16:36.215 }, 00:16:36.215 { 00:16:36.215 "method": "sock_impl_set_options", 00:16:36.215 "params": { 00:16:36.215 "impl_name": "ssl", 00:16:36.215 "recv_buf_size": 4096, 00:16:36.215 "send_buf_size": 4096, 00:16:36.215 "enable_recv_pipe": true, 00:16:36.215 "enable_quickack": false, 00:16:36.215 "enable_placement_id": 0, 00:16:36.215 "enable_zerocopy_send_server": true, 00:16:36.215 "enable_zerocopy_send_client": false, 00:16:36.215 "zerocopy_threshold": 0, 00:16:36.215 "tls_version": 0, 00:16:36.215 "enable_ktls": false 00:16:36.215 } 00:16:36.215 }, 00:16:36.215 { 00:16:36.215 "method": "sock_impl_set_options", 00:16:36.215 "params": { 00:16:36.215 "impl_name": "posix", 00:16:36.215 "recv_buf_size": 2097152, 00:16:36.215 "send_buf_size": 2097152, 00:16:36.215 "enable_recv_pipe": true, 00:16:36.215 "enable_quickack": false, 00:16:36.215 "enable_placement_id": 0, 00:16:36.215 "enable_zerocopy_send_server": true, 00:16:36.215 "enable_zerocopy_send_client": false, 00:16:36.215 "zerocopy_threshold": 0, 00:16:36.215 "tls_version": 0, 00:16:36.215 "enable_ktls": false 00:16:36.215 } 00:16:36.216 }, 00:16:36.216 { 00:16:36.216 "method": "sock_impl_set_options", 00:16:36.216 "params": { 00:16:36.216 "impl_name": "uring", 00:16:36.216 "recv_buf_size": 2097152, 00:16:36.216 "send_buf_size": 2097152, 00:16:36.216 "enable_recv_pipe": true, 00:16:36.216 "enable_quickack": false, 00:16:36.216 "enable_placement_id": 0, 00:16:36.216 "enable_zerocopy_send_server": false, 00:16:36.216 "enable_zerocopy_send_client": false, 00:16:36.216 "zerocopy_threshold": 0, 00:16:36.216 "tls_version": 0, 00:16:36.216 "enable_ktls": false 00:16:36.216 } 00:16:36.216 } 00:16:36.216 ] 00:16:36.216 }, 00:16:36.216 { 00:16:36.216 "subsystem": "vmd", 00:16:36.216 
"config": [] 00:16:36.216 }, 00:16:36.216 { 00:16:36.216 "subsystem": "accel", 00:16:36.216 "config": [ 00:16:36.216 { 00:16:36.216 "method": "accel_set_options", 00:16:36.216 "params": { 00:16:36.216 "small_cache_size": 128, 00:16:36.216 "large_cache_size": 16, 00:16:36.216 "task_count": 2048, 00:16:36.216 "sequence_count": 2048, 00:16:36.216 "buf_count": 2048 00:16:36.216 } 00:16:36.216 } 00:16:36.216 ] 00:16:36.216 }, 00:16:36.216 { 00:16:36.216 "subsystem": "bdev", 00:16:36.216 "config": [ 00:16:36.216 { 00:16:36.216 "method": "bdev_set_options", 00:16:36.216 "params": { 00:16:36.216 "bdev_io_pool_size": 65535, 00:16:36.216 "bdev_io_cache_size": 256, 00:16:36.216 "bdev_auto_examine": true, 00:16:36.216 "iobuf_small_cache_size": 128, 00:16:36.216 "iobuf_large_cache_size": 16 00:16:36.216 } 00:16:36.216 }, 00:16:36.216 { 00:16:36.216 "method": "bdev_raid_set_options", 00:16:36.216 "params": { 00:16:36.216 "process_window_size_kb": 1024, 00:16:36.216 "process_max_bandwidth_mb_sec": 0 00:16:36.216 } 00:16:36.216 }, 00:16:36.216 { 00:16:36.216 "method": "bdev_iscsi_set_options", 00:16:36.216 "params": { 00:16:36.216 "timeout_sec": 30 00:16:36.216 } 00:16:36.216 }, 00:16:36.216 { 00:16:36.216 "method": "bdev_nvme_set_options", 00:16:36.216 "params": { 00:16:36.216 "action_on_timeout": "none", 00:16:36.216 "timeout_us": 0, 00:16:36.216 "timeout_admin_us": 0, 00:16:36.216 "keep_alive_timeout_ms": 10000, 00:16:36.216 "arbitration_burst": 0, 00:16:36.216 "low_priority_weight": 0, 00:16:36.216 "medium_priority_weight": 0, 00:16:36.216 "high_priority_weight": 0, 00:16:36.216 "nvme_adminq_poll_period_us": 10000, 00:16:36.216 "nvme_ioq_poll_period_us": 0, 00:16:36.216 "io_queue_requests": 512, 00:16:36.216 "delay_cmd_submit": true, 00:16:36.216 "transport_retry_count": 4, 00:16:36.216 "bdev_retry_count": 3, 00:16:36.216 "transport_ack_timeout": 0, 00:16:36.216 "ctrlr_loss_timeout_sec": 0, 00:16:36.216 "reconnect_delay_sec": 0, 00:16:36.216 "fast_io_fail_timeout_sec": 0, 00:16:36.216 "disable_auto_failback": false, 00:16:36.216 "generate_uuids": false, 00:16:36.216 "transport_tos": 0, 00:16:36.216 "nvme_error_stat": false, 00:16:36.216 "rdma_srq_size": 0, 00:16:36.216 "io_path_stat": false, 00:16:36.216 "allow_accel_sequence": false, 00:16:36.216 "rdma_max_cq_size": 0, 00:16:36.216 "rdma_cm_event_timeout_ms": 0, 00:16:36.216 "dhchap_digests": [ 00:16:36.216 "sha256", 00:16:36.216 "sha384", 00:16:36.216 "sha512" 00:16:36.216 ], 00:16:36.216 "dhchap_dhgroups": [ 00:16:36.216 "null", 00:16:36.216 "ffdhe2048", 00:16:36.216 "ffdhe3072", 00:16:36.216 "ffdhe4096", 00:16:36.216 "ffdhe6144", 00:16:36.216 "ffdhe8192" 00:16:36.216 ] 00:16:36.216 } 00:16:36.216 }, 00:16:36.216 { 00:16:36.216 "method": "bdev_nvme_attach_controller", 00:16:36.216 "params": { 00:16:36.216 "name": "TLSTEST", 00:16:36.216 "trtype": "TCP", 00:16:36.216 "adrfam": "IPv4", 00:16:36.216 "traddr": "10.0.0.3", 00:16:36.216 "trsvcid": "4420", 00:16:36.216 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:16:36.216 "prchk_reftag": false, 00:16:36.216 "prchk_guard": false, 00:16:36.216 "ctrlr_loss_timeout_sec": 0, 00:16:36.216 "reconnect_delay_sec": 0, 00:16:36.216 "fast_io_fail_timeout_sec": 0, 00:16:36.216 "psk": "key0", 00:16:36.216 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:16:36.216 "hdgst": false, 00:16:36.216 "ddgst": false, 00:16:36.216 "multipath": "multipath" 00:16:36.216 } 00:16:36.216 }, 00:16:36.216 { 00:16:36.216 "method": "bdev_nvme_set_hotplug", 00:16:36.216 "params": { 00:16:36.216 "period_us": 100000, 00:16:36.216 "enable": false 
00:16:36.216 } 00:16:36.216 }, 00:16:36.216 { 00:16:36.216 "method": "bdev_wait_for_examine" 00:16:36.216 } 00:16:36.216 ] 00:16:36.216 }, 00:16:36.216 { 00:16:36.216 "subsystem": "nbd", 00:16:36.216 "config": [] 00:16:36.216 } 00:16:36.216 ] 00:16:36.216 }' 00:16:36.216 11:47:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@201 -- # killprocess 86207 00:16:36.216 11:47:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 86207 ']' 00:16:36.216 11:47:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 86207 00:16:36.216 11:47:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:16:36.216 11:47:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:16:36.216 11:47:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 86207 00:16:36.216 killing process with pid 86207 00:16:36.216 Received shutdown signal, test time was about 10.000000 seconds 00:16:36.216 00:16:36.216 Latency(us) 00:16:36.216 [2024-11-28T11:47:06.342Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:16:36.216 [2024-11-28T11:47:06.342Z] =================================================================================================================== 00:16:36.216 [2024-11-28T11:47:06.342Z] Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:16:36.216 11:47:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_2 00:16:36.216 11:47:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_2 = sudo ']' 00:16:36.216 11:47:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 86207' 00:16:36.216 11:47:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 86207 00:16:36.216 11:47:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 86207 00:16:36.476 11:47:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@202 -- # killprocess 86147 00:16:36.476 11:47:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 86147 ']' 00:16:36.476 11:47:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 86147 00:16:36.476 11:47:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:16:36.476 11:47:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:16:36.476 11:47:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 86147 00:16:36.476 killing process with pid 86147 00:16:36.476 11:47:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:16:36.476 11:47:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:16:36.476 11:47:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 86147' 00:16:36.476 11:47:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 86147 00:16:36.476 11:47:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 86147 00:16:36.735 11:47:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@205 -- # nvmfappstart -m 0x2 -c /dev/fd/62 00:16:36.735 11:47:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@507 -- # 
timing_enter start_nvmf_tgt 00:16:36.735 11:47:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@205 -- # echo '{ 00:16:36.735 "subsystems": [ 00:16:36.735 { 00:16:36.735 "subsystem": "keyring", 00:16:36.735 "config": [ 00:16:36.735 { 00:16:36.735 "method": "keyring_file_add_key", 00:16:36.735 "params": { 00:16:36.735 "name": "key0", 00:16:36.735 "path": "/tmp/tmp.zzUp6hglUU" 00:16:36.735 } 00:16:36.735 } 00:16:36.735 ] 00:16:36.735 }, 00:16:36.735 { 00:16:36.735 "subsystem": "iobuf", 00:16:36.735 "config": [ 00:16:36.735 { 00:16:36.735 "method": "iobuf_set_options", 00:16:36.735 "params": { 00:16:36.735 "small_pool_count": 8192, 00:16:36.735 "large_pool_count": 1024, 00:16:36.735 "small_bufsize": 8192, 00:16:36.735 "large_bufsize": 135168, 00:16:36.735 "enable_numa": false 00:16:36.735 } 00:16:36.735 } 00:16:36.735 ] 00:16:36.735 }, 00:16:36.735 { 00:16:36.735 "subsystem": "sock", 00:16:36.735 "config": [ 00:16:36.735 { 00:16:36.735 "method": "sock_set_default_impl", 00:16:36.735 "params": { 00:16:36.735 "impl_name": "uring" 00:16:36.735 } 00:16:36.735 }, 00:16:36.735 { 00:16:36.735 "method": "sock_impl_set_options", 00:16:36.735 "params": { 00:16:36.735 "impl_name": "ssl", 00:16:36.735 "recv_buf_size": 4096, 00:16:36.735 "send_buf_size": 4096, 00:16:36.735 "enable_recv_pipe": true, 00:16:36.735 "enable_quickack": false, 00:16:36.735 "enable_placement_id": 0, 00:16:36.735 "enable_zerocopy_send_server": true, 00:16:36.735 "enable_zerocopy_send_client": false, 00:16:36.735 "zerocopy_threshold": 0, 00:16:36.735 "tls_version": 0, 00:16:36.735 "enable_ktls": false 00:16:36.735 } 00:16:36.735 }, 00:16:36.735 { 00:16:36.735 "method": "sock_impl_set_options", 00:16:36.735 "params": { 00:16:36.735 "impl_name": "posix", 00:16:36.735 "recv_buf_size": 2097152, 00:16:36.735 "send_buf_size": 2097152, 00:16:36.735 "enable_recv_pipe": true, 00:16:36.735 "enable_quickack": false, 00:16:36.735 "enable_placement_id": 0, 00:16:36.735 "enable_zerocopy_send_server": true, 00:16:36.735 "enable_zerocopy_send_client": false, 00:16:36.735 "zerocopy_threshold": 0, 00:16:36.736 "tls_version": 0, 00:16:36.736 "enable_ktls": false 00:16:36.736 } 00:16:36.736 }, 00:16:36.736 { 00:16:36.736 "method": "sock_impl_set_options", 00:16:36.736 "params": { 00:16:36.736 "impl_name": "uring", 00:16:36.736 "recv_buf_size": 2097152, 00:16:36.736 "send_buf_size": 2097152, 00:16:36.736 "enable_recv_pipe": true, 00:16:36.736 "enable_quickack": false, 00:16:36.736 "enable_placement_id": 0, 00:16:36.736 "enable_zerocopy_send_server": false, 00:16:36.736 "enable_zerocopy_send_client": false, 00:16:36.736 "zerocopy_threshold": 0, 00:16:36.736 "tls_version": 0, 00:16:36.736 "enable_ktls": false 00:16:36.736 } 00:16:36.736 } 00:16:36.736 ] 00:16:36.736 }, 00:16:36.736 { 00:16:36.736 "subsystem": "vmd", 00:16:36.736 "config": [] 00:16:36.736 }, 00:16:36.736 { 00:16:36.736 "subsystem": "accel", 00:16:36.736 "config": [ 00:16:36.736 { 00:16:36.736 "method": "accel_set_options", 00:16:36.736 "params": { 00:16:36.736 "small_cache_size": 128, 00:16:36.736 "large_cache_size": 16, 00:16:36.736 "task_count": 2048, 00:16:36.736 "sequence_count": 2048, 00:16:36.736 "buf_count": 2048 00:16:36.736 } 00:16:36.736 } 00:16:36.736 ] 00:16:36.736 }, 00:16:36.736 { 00:16:36.736 "subsystem": "bdev", 00:16:36.736 "config": [ 00:16:36.736 { 00:16:36.736 "method": "bdev_set_options", 00:16:36.736 "params": { 00:16:36.736 "bdev_io_pool_size": 65535, 00:16:36.736 "bdev_io_cache_size": 256, 00:16:36.736 "bdev_auto_examine": true, 00:16:36.736 
"iobuf_small_cache_size": 128, 00:16:36.736 "iobuf_large_cache_size": 16 00:16:36.736 } 00:16:36.736 }, 00:16:36.736 { 00:16:36.736 "method": "bdev_raid_set_options", 00:16:36.736 "params": { 00:16:36.736 "process_window_size_kb": 1024, 00:16:36.736 "process_max_bandwidth_mb_sec": 0 00:16:36.736 } 00:16:36.736 }, 00:16:36.736 { 00:16:36.736 "method": "bdev_iscsi_set_options", 00:16:36.736 "params": { 00:16:36.736 "timeout_sec": 30 00:16:36.736 } 00:16:36.736 }, 00:16:36.736 { 00:16:36.736 "method": "bdev_nvme_set_options", 00:16:36.736 "params": { 00:16:36.736 "action_on_timeout": "none", 00:16:36.736 "timeout_us": 0, 00:16:36.736 "timeout_admin_us": 0, 00:16:36.736 "keep_alive_timeout_ms": 10000, 00:16:36.736 "arbitration_burst": 0, 00:16:36.736 "low_priority_weight": 0, 00:16:36.736 "medium_priority_weight": 0, 00:16:36.736 "high_priority_weight": 0, 00:16:36.736 "nvme_adminq_poll_period_us": 10000, 00:16:36.736 "nvme_ioq_poll_period_us": 0, 00:16:36.736 "io_queue_requests": 0, 00:16:36.736 "delay_cmd_submit": true, 00:16:36.736 "transport_retry_count": 4, 00:16:36.736 "bdev_retry_count": 3, 00:16:36.736 "transport_ack_timeout": 0, 00:16:36.736 "ctrlr_loss_timeout_sec": 0, 00:16:36.736 "reconnect_delay_sec": 0, 00:16:36.736 "fast_io_fail_timeout_sec": 0, 00:16:36.736 "disable_auto_failback": false, 00:16:36.736 "generate_uuids": false, 00:16:36.736 "transport_tos": 0, 00:16:36.736 "nvme_error_stat": false, 00:16:36.736 "rdma_srq_size": 0, 00:16:36.736 "io_path_stat": false, 00:16:36.736 "allow_accel_sequence": false, 00:16:36.736 "rdma_max_cq_size": 0, 00:16:36.736 "rdma_cm_event_timeout_ms": 0, 00:16:36.736 "dhchap_digests": [ 00:16:36.736 "sha256", 00:16:36.736 "sha384", 00:16:36.736 "sha512" 00:16:36.736 ], 00:16:36.736 "dhchap_dhgroups": [ 00:16:36.736 "null", 00:16:36.736 "ffdhe2048", 00:16:36.736 "ffdhe3072", 00:16:36.736 "ffdhe4096", 00:16:36.736 "ffdhe6144", 00:16:36.736 "ffdhe8192" 00:16:36.736 ] 00:16:36.736 } 00:16:36.736 }, 00:16:36.736 { 00:16:36.736 "method": "bdev_nvme_set_hotplug", 00:16:36.736 "params": { 00:16:36.736 "period_us": 100000, 00:16:36.736 "enable": false 00:16:36.736 } 00:16:36.736 }, 00:16:36.736 { 00:16:36.736 "method": "bdev_malloc_create", 00:16:36.736 "params": { 00:16:36.736 "name": "malloc0", 00:16:36.736 "num_blocks": 8192, 00:16:36.736 "block_size": 4096, 00:16:36.736 "physical_block_size": 4096, 00:16:36.736 "uuid": "950bf216-f20b-4f3a-8a09-dae2836915d0", 00:16:36.736 "optimal_io_boundary": 0, 00:16:36.736 "md_size": 0, 00:16:36.736 "dif_type": 0, 00:16:36.736 "dif_is_head_of_md": false, 00:16:36.736 "dif_pi_format": 0 00:16:36.736 } 00:16:36.736 }, 00:16:36.736 { 00:16:36.736 "method": "bdev_wait_for_examine" 00:16:36.736 } 00:16:36.736 ] 00:16:36.736 }, 00:16:36.736 { 00:16:36.736 "subsystem": "nbd", 00:16:36.736 "config": [] 00:16:36.736 }, 00:16:36.736 { 00:16:36.736 "subsystem": "scheduler", 00:16:36.736 "config": [ 00:16:36.736 { 00:16:36.736 "method": "framework_set_scheduler", 00:16:36.736 "params": { 00:16:36.736 "name": "static" 00:16:36.736 } 00:16:36.736 } 00:16:36.736 ] 00:16:36.736 }, 00:16:36.736 { 00:16:36.736 "subsystem": "nvmf", 00:16:36.736 "config": [ 00:16:36.736 { 00:16:36.736 "method": "nvmf_set_config", 00:16:36.736 "params": { 00:16:36.736 "discovery_filter": "match_any", 00:16:36.736 "admin_cmd_passthru": { 00:16:36.736 "identify_ctrlr": false 00:16:36.736 }, 00:16:36.736 "dhchap_digests": [ 00:16:36.736 "sha256", 00:16:36.736 "sha384", 00:16:36.736 "sha512" 00:16:36.736 ], 00:16:36.736 "dhchap_dhgroups": [ 00:16:36.736 
"null", 00:16:36.736 "ffdhe2048", 00:16:36.736 "ffdhe3072", 00:16:36.736 "ffdhe4096", 00:16:36.736 "ffdhe6144", 00:16:36.736 "ffdhe8192" 00:16:36.736 ] 00:16:36.736 } 00:16:36.736 }, 00:16:36.736 { 00:16:36.736 "method": "nvmf_set_max_subsystems", 00:16:36.736 "params": { 00:16:36.736 "max_subsystems": 1024 00:16:36.736 } 00:16:36.736 }, 00:16:36.736 { 00:16:36.736 "method": "nvmf_set_crdt", 00:16:36.736 "params": { 00:16:36.736 "crdt1": 0, 00:16:36.736 "crdt2": 0, 00:16:36.736 "crdt3": 0 00:16:36.736 } 00:16:36.736 }, 00:16:36.736 { 00:16:36.736 "method": "nvmf_create_transport", 00:16:36.736 "params": { 00:16:36.736 "trtype": "TCP", 00:16:36.736 "max_queue_depth": 128, 00:16:36.736 "max_io_qpairs_per_ctrlr": 127, 00:16:36.736 "in_capsule_data_size": 4096, 00:16:36.736 "max_io_size": 131072, 00:16:36.736 "io_unit_size": 131072, 00:16:36.736 "max_aq_depth": 128, 00:16:36.736 "num_shared_buffers": 511, 00:16:36.736 "buf_cache_size": 4294967295, 00:16:36.736 "dif_insert_or_strip": false, 00:16:36.736 "zcopy": false, 00:16:36.736 "c2h_success": false, 00:16:36.736 "sock_priority": 0, 00:16:36.736 "abort_timeout_sec": 1, 00:16:36.736 "ack_timeout": 0, 00:16:36.736 "data_wr_pool_size": 0 00:16:36.736 } 00:16:36.736 }, 00:16:36.736 { 00:16:36.736 "method": "nvmf_create_subsystem", 00:16:36.736 "params": { 00:16:36.736 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:16:36.736 "allow_any_host": false, 00:16:36.736 "serial_number": "SPDK00000000000001", 00:16:36.736 "model_number": "SPDK bdev Controller", 00:16:36.736 "max_namespaces": 10, 00:16:36.736 "min_cntlid": 1, 00:16:36.736 "max_cntlid": 65519, 00:16:36.736 "ana_reporting": false 00:16:36.736 } 00:16:36.736 }, 00:16:36.736 { 00:16:36.736 "method": "nvmf_subsystem_add_host", 00:16:36.736 "params": { 00:16:36.736 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:16:36.736 "host": "nqn.2016-06.io.spdk:host1", 00:16:36.736 "psk": "key0" 00:16:36.736 } 00:16:36.736 }, 00:16:36.736 { 00:16:36.736 "method": "nvmf_subsystem_add_ns", 00:16:36.736 "params": { 00:16:36.736 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:16:36.736 "namespace": { 00:16:36.736 "nsid": 1, 00:16:36.736 "bdev_name": "malloc0", 00:16:36.736 "nguid": "950BF216F20B4F3A8A09DAE2836915D0", 00:16:36.736 "uuid": "950bf216-f20b-4f3a-8a09-dae2836915d0", 00:16:36.736 "no_auto_visible": false 00:16:36.736 } 00:16:36.736 } 00:16:36.736 }, 00:16:36.736 { 00:16:36.736 "method": "nvmf_subsystem_add_listener", 00:16:36.736 "params": { 00:16:36.737 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:16:36.737 "listen_address": { 00:16:36.737 "trtype": "TCP", 00:16:36.737 "adrfam": "IPv4", 00:16:36.737 "traddr": "10.0.0.3", 00:16:36.737 "trsvcid": "4420" 00:16:36.737 }, 00:16:36.737 "secure_channel": true 00:16:36.737 } 00:16:36.737 } 00:16:36.737 ] 00:16:36.737 } 00:16:36.737 ] 00:16:36.737 }' 00:16:36.737 11:47:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@726 -- # xtrace_disable 00:16:36.737 11:47:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:16:36.737 11:47:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@509 -- # nvmfpid=86252 00:16:36.737 11:47:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@508 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 -c /dev/fd/62 00:16:36.737 11:47:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@510 -- # waitforlisten 86252 00:16:36.737 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:16:36.737 11:47:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 86252 ']' 00:16:36.737 11:47:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:16:36.737 11:47:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:16:36.737 11:47:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:16:36.737 11:47:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:16:36.737 11:47:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:16:36.737 [2024-11-28 11:47:06.849928] Starting SPDK v25.01-pre git sha1 35cd3e84d / DPDK 24.11.0-rc4 initialization... 00:16:36.737 [2024-11-28 11:47:06.850248] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:16:36.996 [2024-11-28 11:47:06.973654] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.11.0-rc4 is used. There is no support for it in SPDK. Enabled only for validation. 00:16:36.996 [2024-11-28 11:47:06.999289] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:16:36.996 [2024-11-28 11:47:07.048547] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:16:36.996 [2024-11-28 11:47:07.048610] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:16:36.996 [2024-11-28 11:47:07.048622] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:16:36.996 [2024-11-28 11:47:07.048629] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:16:36.996 [2024-11-28 11:47:07.048636] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:16:36.996 [2024-11-28 11:47:07.049224] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:16:37.254 [2024-11-28 11:47:07.239381] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:16:37.254 [2024-11-28 11:47:07.334423] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:16:37.254 [2024-11-28 11:47:07.366385] tcp.c:1031:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:16:37.254 [2024-11-28 11:47:07.366674] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:16:37.821 11:47:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:16:37.821 11:47:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:16:37.821 11:47:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:16:37.821 11:47:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@732 -- # xtrace_disable 00:16:37.821 11:47:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:16:37.821 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 
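With the target listening on 10.0.0.3:4420, the initiator side (driven here through bdevperf's RPC socket) needs only two RPCs: register the PSK file in the keyring, then attach a controller with that key. A minimal sketch using the same key path and NQNs as this run:

RPC=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
SOCK=/var/tmp/bdevperf.sock
# Register the TLS pre-shared key file under the name key0.
"$RPC" -s "$SOCK" keyring_file_add_key key0 /tmp/tmp.zzUp6hglUU
# Attach an NVMe/TCP controller, securing the session with TLS via key0.
"$RPC" -s "$SOCK" bdev_nvme_attach_controller -b TLSTEST -t tcp \
    -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 \
    -q nqn.2016-06.io.spdk:host1 --psk key0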
00:16:37.821 11:47:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:16:37.821 11:47:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@209 -- # bdevperf_pid=86292 00:16:37.821 11:47:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@210 -- # waitforlisten 86292 /var/tmp/bdevperf.sock 00:16:37.821 11:47:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 86292 ']' 00:16:37.821 11:47:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:16:37.821 11:47:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:16:37.821 11:47:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:16:37.821 11:47:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:16:37.821 11:47:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:16:37.821 11:47:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@206 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 -c /dev/fd/63 00:16:37.821 11:47:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@206 -- # echo '{ 00:16:37.821 "subsystems": [ 00:16:37.821 { 00:16:37.821 "subsystem": "keyring", 00:16:37.822 "config": [ 00:16:37.822 { 00:16:37.822 "method": "keyring_file_add_key", 00:16:37.822 "params": { 00:16:37.822 "name": "key0", 00:16:37.822 "path": "/tmp/tmp.zzUp6hglUU" 00:16:37.822 } 00:16:37.822 } 00:16:37.822 ] 00:16:37.822 }, 00:16:37.822 { 00:16:37.822 "subsystem": "iobuf", 00:16:37.822 "config": [ 00:16:37.822 { 00:16:37.822 "method": "iobuf_set_options", 00:16:37.822 "params": { 00:16:37.822 "small_pool_count": 8192, 00:16:37.822 "large_pool_count": 1024, 00:16:37.822 "small_bufsize": 8192, 00:16:37.822 "large_bufsize": 135168, 00:16:37.822 "enable_numa": false 00:16:37.822 } 00:16:37.822 } 00:16:37.822 ] 00:16:37.822 }, 00:16:37.822 { 00:16:37.822 "subsystem": "sock", 00:16:37.822 "config": [ 00:16:37.822 { 00:16:37.822 "method": "sock_set_default_impl", 00:16:37.822 "params": { 00:16:37.822 "impl_name": "uring" 00:16:37.822 } 00:16:37.822 }, 00:16:37.822 { 00:16:37.822 "method": "sock_impl_set_options", 00:16:37.822 "params": { 00:16:37.822 "impl_name": "ssl", 00:16:37.822 "recv_buf_size": 4096, 00:16:37.822 "send_buf_size": 4096, 00:16:37.822 "enable_recv_pipe": true, 00:16:37.822 "enable_quickack": false, 00:16:37.822 "enable_placement_id": 0, 00:16:37.822 "enable_zerocopy_send_server": true, 00:16:37.822 "enable_zerocopy_send_client": false, 00:16:37.822 "zerocopy_threshold": 0, 00:16:37.822 "tls_version": 0, 00:16:37.822 "enable_ktls": false 00:16:37.822 } 00:16:37.822 }, 00:16:37.822 { 00:16:37.822 "method": "sock_impl_set_options", 00:16:37.822 "params": { 00:16:37.822 "impl_name": "posix", 00:16:37.822 "recv_buf_size": 2097152, 00:16:37.822 "send_buf_size": 2097152, 00:16:37.822 "enable_recv_pipe": true, 00:16:37.822 "enable_quickack": false, 00:16:37.822 "enable_placement_id": 0, 00:16:37.822 "enable_zerocopy_send_server": true, 00:16:37.822 "enable_zerocopy_send_client": false, 00:16:37.822 "zerocopy_threshold": 0, 00:16:37.822 "tls_version": 0, 00:16:37.822 "enable_ktls": false 00:16:37.822 } 00:16:37.822 }, 00:16:37.822 { 
00:16:37.822 "method": "sock_impl_set_options", 00:16:37.822 "params": { 00:16:37.822 "impl_name": "uring", 00:16:37.822 "recv_buf_size": 2097152, 00:16:37.822 "send_buf_size": 2097152, 00:16:37.822 "enable_recv_pipe": true, 00:16:37.822 "enable_quickack": false, 00:16:37.822 "enable_placement_id": 0, 00:16:37.822 "enable_zerocopy_send_server": false, 00:16:37.822 "enable_zerocopy_send_client": false, 00:16:37.822 "zerocopy_threshold": 0, 00:16:37.822 "tls_version": 0, 00:16:37.822 "enable_ktls": false 00:16:37.822 } 00:16:37.822 } 00:16:37.822 ] 00:16:37.822 }, 00:16:37.822 { 00:16:37.822 "subsystem": "vmd", 00:16:37.822 "config": [] 00:16:37.822 }, 00:16:37.822 { 00:16:37.822 "subsystem": "accel", 00:16:37.822 "config": [ 00:16:37.822 { 00:16:37.822 "method": "accel_set_options", 00:16:37.822 "params": { 00:16:37.822 "small_cache_size": 128, 00:16:37.822 "large_cache_size": 16, 00:16:37.822 "task_count": 2048, 00:16:37.822 "sequence_count": 2048, 00:16:37.822 "buf_count": 2048 00:16:37.822 } 00:16:37.822 } 00:16:37.822 ] 00:16:37.822 }, 00:16:37.822 { 00:16:37.822 "subsystem": "bdev", 00:16:37.822 "config": [ 00:16:37.822 { 00:16:37.822 "method": "bdev_set_options", 00:16:37.822 "params": { 00:16:37.822 "bdev_io_pool_size": 65535, 00:16:37.822 "bdev_io_cache_size": 256, 00:16:37.822 "bdev_auto_examine": true, 00:16:37.822 "iobuf_small_cache_size": 128, 00:16:37.822 "iobuf_large_cache_size": 16 00:16:37.822 } 00:16:37.822 }, 00:16:37.822 { 00:16:37.822 "method": "bdev_raid_set_options", 00:16:37.822 "params": { 00:16:37.822 "process_window_size_kb": 1024, 00:16:37.822 "process_max_bandwidth_mb_sec": 0 00:16:37.822 } 00:16:37.822 }, 00:16:37.822 { 00:16:37.822 "method": "bdev_iscsi_set_options", 00:16:37.822 "params": { 00:16:37.822 "timeout_sec": 30 00:16:37.822 } 00:16:37.822 }, 00:16:37.822 { 00:16:37.822 "method": "bdev_nvme_set_options", 00:16:37.822 "params": { 00:16:37.822 "action_on_timeout": "none", 00:16:37.822 "timeout_us": 0, 00:16:37.822 "timeout_admin_us": 0, 00:16:37.822 "keep_alive_timeout_ms": 10000, 00:16:37.822 "arbitration_burst": 0, 00:16:37.822 "low_priority_weight": 0, 00:16:37.822 "medium_priority_weight": 0, 00:16:37.822 "high_priority_weight": 0, 00:16:37.822 "nvme_adminq_poll_period_us": 10000, 00:16:37.822 "nvme_ioq_poll_period_us": 0, 00:16:37.822 "io_queue_requests": 512, 00:16:37.822 "delay_cmd_submit": true, 00:16:37.822 "transport_retry_count": 4, 00:16:37.822 "bdev_retry_count": 3, 00:16:37.822 "transport_ack_timeout": 0, 00:16:37.822 "ctrlr_loss_timeout_sec": 0, 00:16:37.822 "reconnect_delay_sec": 0, 00:16:37.822 "fast_io_fail_timeout_sec": 0, 00:16:37.822 "disable_auto_failback": false, 00:16:37.822 "generate_uuids": false, 00:16:37.822 "transport_tos": 0, 00:16:37.822 "nvme_error_stat": false, 00:16:37.822 "rdma_srq_size": 0, 00:16:37.822 "io_path_stat": false, 00:16:37.822 "allow_accel_sequence": false, 00:16:37.822 "rdma_max_cq_size": 0, 00:16:37.822 "rdma_cm_event_timeout_ms": 0, 00:16:37.822 "dhchap_digests": [ 00:16:37.822 "sha256", 00:16:37.822 "sha384", 00:16:37.822 "sha512" 00:16:37.822 ], 00:16:37.822 "dhchap_dhgroups": [ 00:16:37.822 "null", 00:16:37.822 "ffdhe2048", 00:16:37.822 "ffdhe3072", 00:16:37.822 "ffdhe4096", 00:16:37.822 "ffdhe6144", 00:16:37.822 "ffdhe8192" 00:16:37.822 ] 00:16:37.822 } 00:16:37.822 }, 00:16:37.822 { 00:16:37.822 "method": "bdev_nvme_attach_controller", 00:16:37.822 "params": { 00:16:37.822 "name": "TLSTEST", 00:16:37.822 "trtype": "TCP", 00:16:37.822 "adrfam": "IPv4", 00:16:37.822 "traddr": "10.0.0.3", 00:16:37.822 
"trsvcid": "4420", 00:16:37.822 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:16:37.822 "prchk_reftag": false, 00:16:37.822 "prchk_guard": false, 00:16:37.822 "ctrlr_loss_timeout_sec": 0, 00:16:37.822 "reconnect_delay_sec": 0, 00:16:37.822 "fast_io_fail_timeout_sec": 0, 00:16:37.822 "psk": "key0", 00:16:37.822 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:16:37.822 "hdgst": false, 00:16:37.822 "ddgst": false, 00:16:37.822 "multipath": "multipath" 00:16:37.822 } 00:16:37.822 }, 00:16:37.822 { 00:16:37.822 "method": "bdev_nvme_set_hotplug", 00:16:37.822 "params": { 00:16:37.822 "period_us": 100000, 00:16:37.822 "enable": false 00:16:37.822 } 00:16:37.822 }, 00:16:37.822 { 00:16:37.822 "method": "bdev_wait_for_examine" 00:16:37.822 } 00:16:37.822 ] 00:16:37.822 }, 00:16:37.822 { 00:16:37.822 "subsystem": "nbd", 00:16:37.822 "config": [] 00:16:37.822 } 00:16:37.822 ] 00:16:37.822 }' 00:16:38.086 [2024-11-28 11:47:07.969428] Starting SPDK v25.01-pre git sha1 35cd3e84d / DPDK 24.11.0-rc4 initialization... 00:16:38.086 [2024-11-28 11:47:07.969565] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid86292 ] 00:16:38.086 [2024-11-28 11:47:08.098729] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.11.0-rc4 is used. There is no support for it in SPDK. Enabled only for validation. 00:16:38.086 [2024-11-28 11:47:08.131763] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:16:38.086 [2024-11-28 11:47:08.186279] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:16:38.343 [2024-11-28 11:47:08.328350] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:16:38.343 [2024-11-28 11:47:08.378922] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:16:38.909 11:47:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:16:38.909 11:47:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:16:38.909 11:47:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@213 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -t 20 -s /var/tmp/bdevperf.sock perform_tests 00:16:39.167 Running I/O for 10 seconds... 
00:16:41.068 3536.00 IOPS, 13.81 MiB/s [2024-11-28T11:47:12.131Z] 3584.00 IOPS, 14.00 MiB/s [2024-11-28T11:47:13.508Z] 3568.00 IOPS, 13.94 MiB/s [2024-11-28T11:47:14.445Z] 3552.00 IOPS, 13.88 MiB/s [2024-11-28T11:47:15.380Z] 3558.80 IOPS, 13.90 MiB/s [2024-11-28T11:47:16.316Z] 3542.33 IOPS, 13.84 MiB/s [2024-11-28T11:47:17.253Z] 3529.14 IOPS, 13.79 MiB/s [2024-11-28T11:47:18.190Z] 3536.12 IOPS, 13.81 MiB/s [2024-11-28T11:47:19.126Z] 3566.44 IOPS, 13.93 MiB/s [2024-11-28T11:47:19.126Z] 3620.70 IOPS, 14.14 MiB/s 00:16:49.001 Latency(us) 00:16:49.001 [2024-11-28T11:47:19.127Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:16:49.001 Job: TLSTESTn1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096) 00:16:49.001 Verification LBA range: start 0x0 length 0x2000 00:16:49.001 TLSTESTn1 : 10.02 3627.18 14.17 0.00 0.00 35230.73 5183.30 25261.15 00:16:49.001 [2024-11-28T11:47:19.127Z] =================================================================================================================== 00:16:49.001 [2024-11-28T11:47:19.127Z] Total : 3627.18 14.17 0.00 0.00 35230.73 5183.30 25261.15 00:16:49.001 { 00:16:49.001 "results": [ 00:16:49.001 { 00:16:49.001 "job": "TLSTESTn1", 00:16:49.001 "core_mask": "0x4", 00:16:49.001 "workload": "verify", 00:16:49.001 "status": "finished", 00:16:49.001 "verify_range": { 00:16:49.001 "start": 0, 00:16:49.001 "length": 8192 00:16:49.001 }, 00:16:49.001 "queue_depth": 128, 00:16:49.001 "io_size": 4096, 00:16:49.001 "runtime": 10.016607, 00:16:49.001 "iops": 3627.1763482384804, 00:16:49.001 "mibps": 14.168657610306564, 00:16:49.001 "io_failed": 0, 00:16:49.001 "io_timeout": 0, 00:16:49.001 "avg_latency_us": 35230.73306736861, 00:16:49.001 "min_latency_us": 5183.301818181818, 00:16:49.001 "max_latency_us": 25261.14909090909 00:16:49.001 } 00:16:49.001 ], 00:16:49.001 "core_count": 1 00:16:49.001 } 00:16:49.001 11:47:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@215 -- # trap 'nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:16:49.001 11:47:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@216 -- # killprocess 86292 00:16:49.001 11:47:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 86292 ']' 00:16:49.001 11:47:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 86292 00:16:49.001 11:47:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:16:49.260 11:47:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:16:49.260 11:47:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 86292 00:16:49.260 killing process with pid 86292 00:16:49.260 Received shutdown signal, test time was about 10.000000 seconds 00:16:49.260 00:16:49.260 Latency(us) 00:16:49.260 [2024-11-28T11:47:19.386Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:16:49.260 [2024-11-28T11:47:19.386Z] =================================================================================================================== 00:16:49.260 [2024-11-28T11:47:19.386Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:16:49.260 11:47:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_2 00:16:49.260 11:47:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_2 = sudo ']' 00:16:49.260 11:47:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 
'killing process with pid 86292' 00:16:49.260 11:47:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 86292 00:16:49.260 11:47:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 86292 00:16:49.260 11:47:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@217 -- # killprocess 86252 00:16:49.260 11:47:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 86252 ']' 00:16:49.260 11:47:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 86252 00:16:49.260 11:47:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:16:49.260 11:47:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:16:49.260 11:47:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 86252 00:16:49.260 killing process with pid 86252 00:16:49.260 11:47:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:16:49.260 11:47:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:16:49.260 11:47:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 86252' 00:16:49.260 11:47:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 86252 00:16:49.260 11:47:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 86252 00:16:49.829 11:47:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@220 -- # nvmfappstart 00:16:49.829 11:47:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:16:49.829 11:47:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@726 -- # xtrace_disable 00:16:49.829 11:47:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:16:49.829 11:47:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@509 -- # nvmfpid=86425 00:16:49.829 11:47:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@508 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF 00:16:49.829 11:47:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@510 -- # waitforlisten 86425 00:16:49.829 11:47:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 86425 ']' 00:16:49.829 11:47:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:16:49.829 11:47:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:16:49.829 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:16:49.829 11:47:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:16:49.829 11:47:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:16:49.829 11:47:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:16:49.829 [2024-11-28 11:47:19.719415] Starting SPDK v25.01-pre git sha1 35cd3e84d / DPDK 24.11.0-rc4 initialization... 
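Between phases the harness tears everything down with the killprocess pattern visible a few records back: check the PID is alive, kill it, then wait so the exit status and shutdown output are collected. A rough equivalent for a single PID, assuming $pid holds the bdevperf or target PID:

# Sketch of the stop-and-reap sequence used for the bdevperf and target processes.
if kill -0 "$pid" 2>/dev/null; then
    echo "killing process with pid $pid"
    kill "$pid"
    wait "$pid"
fi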
00:16:49.829 [2024-11-28 11:47:19.719906] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:16:49.829 [2024-11-28 11:47:19.843476] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.11.0-rc4 is used. There is no support for it in SPDK. Enabled only for validation. 00:16:49.829 [2024-11-28 11:47:19.875955] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:16:49.829 [2024-11-28 11:47:19.935321] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:16:49.829 [2024-11-28 11:47:19.935719] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:16:49.829 [2024-11-28 11:47:19.935757] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:16:49.829 [2024-11-28 11:47:19.935769] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:16:49.829 [2024-11-28 11:47:19.935784] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:16:49.829 [2024-11-28 11:47:19.936380] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:16:50.088 [2024-11-28 11:47:20.013345] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:16:50.656 11:47:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:16:50.656 11:47:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:16:50.656 11:47:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:16:50.656 11:47:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@732 -- # xtrace_disable 00:16:50.656 11:47:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:16:50.656 11:47:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:16:50.656 11:47:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@221 -- # setup_nvmf_tgt /tmp/tmp.zzUp6hglUU 00:16:50.656 11:47:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@50 -- # local key=/tmp/tmp.zzUp6hglUU 00:16:50.656 11:47:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@52 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:16:50.915 [2024-11-28 11:47:21.028142] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:16:51.174 11:47:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@53 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10 00:16:51.174 11:47:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 -k 00:16:51.432 [2024-11-28 11:47:21.516186] tcp.c:1031:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:16:51.432 [2024-11-28 11:47:21.516464] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:16:51.432 11:47:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@56 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0 00:16:51.692 malloc0 
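The fresh target stood up here (the tls.sh setup_nvmf_tgt path) takes four RPCs against the default /var/tmp/spdk.sock; condensed, with the same addresses and NQN as this run:

RPC=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
# TCP transport, with the same extra flag (-o) the harness passes.
"$RPC" nvmf_create_transport -t tcp -o
# Subsystem with serial number SPDK00000000000001 and up to 10 namespaces.
"$RPC" nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10
# TLS listener: -k asks for a secure channel (shows up as "secure_channel": true in save_config).
"$RPC" nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 -k
# 32 MiB malloc bdev (8192 blocks of 4096 bytes) to back the namespace.
"$RPC" bdev_malloc_create 32 4096 -b malloc0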
00:16:51.950 11:47:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:16:52.208 11:47:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@58 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py keyring_file_add_key key0 /tmp/tmp.zzUp6hglUU 00:16:52.467 11:47:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk key0 00:16:52.726 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:16:52.726 11:47:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@224 -- # bdevperf_pid=86486 00:16:52.726 11:47:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@222 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 2 -z -r /var/tmp/bdevperf.sock -q 128 -o 4k -w verify -t 1 00:16:52.726 11:47:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@226 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:16:52.726 11:47:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@227 -- # waitforlisten 86486 /var/tmp/bdevperf.sock 00:16:52.726 11:47:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 86486 ']' 00:16:52.726 11:47:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:16:52.726 11:47:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:16:52.726 11:47:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:16:52.726 11:47:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:16:52.726 11:47:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:16:52.726 [2024-11-28 11:47:22.795684] Starting SPDK v25.01-pre git sha1 35cd3e84d / DPDK 24.11.0-rc4 initialization... 00:16:52.726 [2024-11-28 11:47:22.796041] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid86486 ] 00:16:52.985 [2024-11-28 11:47:22.923343] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.11.0-rc4 is used. There is no support for it in SPDK. Enabled only for validation. 
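The namespace and the per-host PSK binding added just above complete the target side; sketched with the same key file and NQNs:

RPC=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
# Expose malloc0 as namespace 1 of the subsystem.
"$RPC" nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1
# Register the PSK file on the target side under the name key0.
"$RPC" keyring_file_add_key key0 /tmp/tmp.zzUp6hglUU
# Allow host1 to connect to cnode1, but only when it presents this PSK.
"$RPC" nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk key0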
00:16:52.985 [2024-11-28 11:47:22.956011] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:16:52.985 [2024-11-28 11:47:23.019609] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:16:52.985 [2024-11-28 11:47:23.092282] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:16:53.922 11:47:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:16:53.922 11:47:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:16:53.922 11:47:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@229 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.zzUp6hglUU 00:16:54.181 11:47:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@230 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 10.0.0.3 -s 4420 -f ipv4 --psk key0 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 00:16:54.440 [2024-11-28 11:47:24.314156] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:16:54.440 nvme0n1 00:16:54.440 11:47:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@234 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:16:54.440 Running I/O for 1 seconds... 00:16:55.856 3584.00 IOPS, 14.00 MiB/s 00:16:55.856 Latency(us) 00:16:55.856 [2024-11-28T11:47:25.983Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:16:55.857 Job: nvme0n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:16:55.857 Verification LBA range: start 0x0 length 0x2000 00:16:55.857 nvme0n1 : 1.03 3614.22 14.12 0.00 0.00 34989.72 10426.18 24427.05 00:16:55.857 [2024-11-28T11:47:25.983Z] =================================================================================================================== 00:16:55.857 [2024-11-28T11:47:25.983Z] Total : 3614.22 14.12 0.00 0.00 34989.72 10426.18 24427.05 00:16:55.857 { 00:16:55.857 "results": [ 00:16:55.857 { 00:16:55.857 "job": "nvme0n1", 00:16:55.857 "core_mask": "0x2", 00:16:55.857 "workload": "verify", 00:16:55.857 "status": "finished", 00:16:55.857 "verify_range": { 00:16:55.857 "start": 0, 00:16:55.857 "length": 8192 00:16:55.857 }, 00:16:55.857 "queue_depth": 128, 00:16:55.857 "io_size": 4096, 00:16:55.857 "runtime": 1.027054, 00:16:55.857 "iops": 3614.2208686203453, 00:16:55.857 "mibps": 14.118050268048224, 00:16:55.857 "io_failed": 0, 00:16:55.857 "io_timeout": 0, 00:16:55.857 "avg_latency_us": 34989.71887147335, 00:16:55.857 "min_latency_us": 10426.181818181818, 00:16:55.857 "max_latency_us": 24427.054545454546 00:16:55.857 } 00:16:55.857 ], 00:16:55.857 "core_count": 1 00:16:55.857 } 00:16:55.857 11:47:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@236 -- # killprocess 86486 00:16:55.857 11:47:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 86486 ']' 00:16:55.857 11:47:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 86486 00:16:55.857 11:47:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:16:55.857 11:47:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:16:55.857 11:47:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 86486 00:16:55.857 killing 
process with pid 86486 00:16:55.857 Received shutdown signal, test time was about 1.000000 seconds 00:16:55.857 00:16:55.857 Latency(us) 00:16:55.857 [2024-11-28T11:47:25.983Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:16:55.857 [2024-11-28T11:47:25.983Z] =================================================================================================================== 00:16:55.857 [2024-11-28T11:47:25.983Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:16:55.857 11:47:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:16:55.857 11:47:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:16:55.857 11:47:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 86486' 00:16:55.857 11:47:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 86486 00:16:55.857 11:47:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 86486 00:16:55.857 11:47:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@237 -- # killprocess 86425 00:16:55.857 11:47:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 86425 ']' 00:16:55.857 11:47:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 86425 00:16:55.857 11:47:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:16:55.857 11:47:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:16:55.857 11:47:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 86425 00:16:55.857 killing process with pid 86425 00:16:55.857 11:47:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:16:55.857 11:47:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:16:55.857 11:47:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 86425' 00:16:55.857 11:47:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 86425 00:16:55.857 11:47:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 86425 00:16:56.115 11:47:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@242 -- # nvmfappstart 00:16:56.115 11:47:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:16:56.115 11:47:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@726 -- # xtrace_disable 00:16:56.115 11:47:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:16:56.115 11:47:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@509 -- # nvmfpid=86537 00:16:56.115 11:47:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@508 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF 00:16:56.115 11:47:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@510 -- # waitforlisten 86537 00:16:56.115 11:47:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 86537 ']' 00:16:56.115 11:47:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:16:56.115 11:47:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 
00:16:56.115 11:47:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:16:56.115 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:16:56.115 11:47:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:16:56.115 11:47:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:16:56.375 [2024-11-28 11:47:26.244408] Starting SPDK v25.01-pre git sha1 35cd3e84d / DPDK 24.11.0-rc4 initialization... 00:16:56.375 [2024-11-28 11:47:26.244749] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:16:56.375 [2024-11-28 11:47:26.368060] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.11.0-rc4 is used. There is no support for it in SPDK. Enabled only for validation. 00:16:56.375 [2024-11-28 11:47:26.393388] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:16:56.375 [2024-11-28 11:47:26.431482] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:16:56.375 [2024-11-28 11:47:26.431538] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:16:56.375 [2024-11-28 11:47:26.431565] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:16:56.375 [2024-11-28 11:47:26.431572] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:16:56.375 [2024-11-28 11:47:26.431579] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
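The pass that just finished, and the passes that follow against this restarted target, all drive bdevperf through the same short RPC sequence: register the PSK file as a keyring entry, attach an NVMe-oF/TCP controller that references that key, then run the verify workload. A condensed sketch of that sequence, using the socket path, key file, address and NQNs shown in the log above (paths shortened to be relative to the SPDK repo root; the retry and cleanup logic of tls.sh is omitted):

  # register the pre-shared key file under the name "key0" on the bdevperf instance
  scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.zzUp6hglUU
  # attach an NVMe-oF TCP controller that authenticates the TLS session with that PSK
  scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp \
      -a 10.0.0.3 -s 4420 -f ipv4 --psk key0 \
      -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1
  # kick off the configured verify workload (-q 128 -o 4k -w verify -t 1) and print the stats JSON
  examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests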
00:16:56.375 [2024-11-28 11:47:26.431937] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:16:56.375 [2024-11-28 11:47:26.484108] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:16:56.634 11:47:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:16:56.634 11:47:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:16:56.634 11:47:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:16:56.634 11:47:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@732 -- # xtrace_disable 00:16:56.634 11:47:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:16:56.634 11:47:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:16:56.634 11:47:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@243 -- # rpc_cmd 00:16:56.634 11:47:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:56.634 11:47:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:16:56.634 [2024-11-28 11:47:26.595827] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:16:56.634 malloc0 00:16:56.634 [2024-11-28 11:47:26.626838] tcp.c:1031:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:16:56.634 [2024-11-28 11:47:26.627106] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:16:56.634 11:47:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:56.634 11:47:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@256 -- # bdevperf_pid=86562 00:16:56.634 11:47:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@254 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 2 -z -r /var/tmp/bdevperf.sock -q 128 -o 4k -w verify -t 1 00:16:56.634 11:47:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@258 -- # waitforlisten 86562 /var/tmp/bdevperf.sock 00:16:56.634 11:47:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 86562 ']' 00:16:56.634 11:47:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:16:56.634 11:47:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:16:56.634 11:47:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:16:56.634 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:16:56.634 11:47:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:16:56.634 11:47:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:16:56.634 [2024-11-28 11:47:26.709817] Starting SPDK v25.01-pre git sha1 35cd3e84d / DPDK 24.11.0-rc4 initialization... 
00:16:56.634 [2024-11-28 11:47:26.710096] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid86562 ] 00:16:56.893 [2024-11-28 11:47:26.830145] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.11.0-rc4 is used. There is no support for it in SPDK. Enabled only for validation. 00:16:56.893 [2024-11-28 11:47:26.863021] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:16:56.893 [2024-11-28 11:47:26.922473] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:16:56.893 [2024-11-28 11:47:26.998404] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:16:57.154 11:47:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:16:57.154 11:47:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:16:57.154 11:47:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@259 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.zzUp6hglUU 00:16:57.438 11:47:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@260 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 10.0.0.3 -s 4420 -f ipv4 --psk key0 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 00:16:57.697 [2024-11-28 11:47:27.576630] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:16:57.697 nvme0n1 00:16:57.697 11:47:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@264 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:16:57.697 Running I/O for 1 seconds... 
00:16:59.077 3628.00 IOPS, 14.17 MiB/s 00:16:59.077 Latency(us) 00:16:59.077 [2024-11-28T11:47:29.203Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:16:59.077 Job: nvme0n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:16:59.077 Verification LBA range: start 0x0 length 0x2000 00:16:59.077 nvme0n1 : 1.01 3707.88 14.48 0.00 0.00 34303.48 3559.80 33602.09 00:16:59.077 [2024-11-28T11:47:29.203Z] =================================================================================================================== 00:16:59.077 [2024-11-28T11:47:29.203Z] Total : 3707.88 14.48 0.00 0.00 34303.48 3559.80 33602.09 00:16:59.077 { 00:16:59.077 "results": [ 00:16:59.077 { 00:16:59.077 "job": "nvme0n1", 00:16:59.077 "core_mask": "0x2", 00:16:59.077 "workload": "verify", 00:16:59.077 "status": "finished", 00:16:59.077 "verify_range": { 00:16:59.077 "start": 0, 00:16:59.077 "length": 8192 00:16:59.077 }, 00:16:59.077 "queue_depth": 128, 00:16:59.077 "io_size": 4096, 00:16:59.077 "runtime": 1.012979, 00:16:59.077 "iops": 3707.8754840919705, 00:16:59.077 "mibps": 14.48388860973426, 00:16:59.077 "io_failed": 0, 00:16:59.077 "io_timeout": 0, 00:16:59.077 "avg_latency_us": 34303.47654952076, 00:16:59.077 "min_latency_us": 3559.796363636364, 00:16:59.077 "max_latency_us": 33602.09454545454 00:16:59.077 } 00:16:59.077 ], 00:16:59.077 "core_count": 1 00:16:59.077 } 00:16:59.077 11:47:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@267 -- # rpc_cmd save_config 00:16:59.077 11:47:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:59.077 11:47:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:16:59.077 11:47:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:59.077 11:47:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@267 -- # tgtcfg='{ 00:16:59.077 "subsystems": [ 00:16:59.077 { 00:16:59.077 "subsystem": "keyring", 00:16:59.077 "config": [ 00:16:59.077 { 00:16:59.077 "method": "keyring_file_add_key", 00:16:59.077 "params": { 00:16:59.077 "name": "key0", 00:16:59.077 "path": "/tmp/tmp.zzUp6hglUU" 00:16:59.077 } 00:16:59.077 } 00:16:59.077 ] 00:16:59.077 }, 00:16:59.077 { 00:16:59.077 "subsystem": "iobuf", 00:16:59.077 "config": [ 00:16:59.077 { 00:16:59.077 "method": "iobuf_set_options", 00:16:59.077 "params": { 00:16:59.077 "small_pool_count": 8192, 00:16:59.077 "large_pool_count": 1024, 00:16:59.077 "small_bufsize": 8192, 00:16:59.077 "large_bufsize": 135168, 00:16:59.077 "enable_numa": false 00:16:59.077 } 00:16:59.077 } 00:16:59.077 ] 00:16:59.077 }, 00:16:59.077 { 00:16:59.077 "subsystem": "sock", 00:16:59.077 "config": [ 00:16:59.077 { 00:16:59.077 "method": "sock_set_default_impl", 00:16:59.077 "params": { 00:16:59.077 "impl_name": "uring" 00:16:59.077 } 00:16:59.077 }, 00:16:59.077 { 00:16:59.078 "method": "sock_impl_set_options", 00:16:59.078 "params": { 00:16:59.078 "impl_name": "ssl", 00:16:59.078 "recv_buf_size": 4096, 00:16:59.078 "send_buf_size": 4096, 00:16:59.078 "enable_recv_pipe": true, 00:16:59.078 "enable_quickack": false, 00:16:59.078 "enable_placement_id": 0, 00:16:59.078 "enable_zerocopy_send_server": true, 00:16:59.078 "enable_zerocopy_send_client": false, 00:16:59.078 "zerocopy_threshold": 0, 00:16:59.078 "tls_version": 0, 00:16:59.078 "enable_ktls": false 00:16:59.078 } 00:16:59.078 }, 00:16:59.078 { 00:16:59.078 "method": "sock_impl_set_options", 00:16:59.078 "params": { 00:16:59.078 "impl_name": "posix", 
00:16:59.078 "recv_buf_size": 2097152, 00:16:59.078 "send_buf_size": 2097152, 00:16:59.078 "enable_recv_pipe": true, 00:16:59.078 "enable_quickack": false, 00:16:59.078 "enable_placement_id": 0, 00:16:59.078 "enable_zerocopy_send_server": true, 00:16:59.078 "enable_zerocopy_send_client": false, 00:16:59.078 "zerocopy_threshold": 0, 00:16:59.078 "tls_version": 0, 00:16:59.078 "enable_ktls": false 00:16:59.078 } 00:16:59.078 }, 00:16:59.078 { 00:16:59.078 "method": "sock_impl_set_options", 00:16:59.078 "params": { 00:16:59.078 "impl_name": "uring", 00:16:59.078 "recv_buf_size": 2097152, 00:16:59.078 "send_buf_size": 2097152, 00:16:59.078 "enable_recv_pipe": true, 00:16:59.078 "enable_quickack": false, 00:16:59.078 "enable_placement_id": 0, 00:16:59.078 "enable_zerocopy_send_server": false, 00:16:59.078 "enable_zerocopy_send_client": false, 00:16:59.078 "zerocopy_threshold": 0, 00:16:59.078 "tls_version": 0, 00:16:59.078 "enable_ktls": false 00:16:59.078 } 00:16:59.078 } 00:16:59.078 ] 00:16:59.078 }, 00:16:59.078 { 00:16:59.078 "subsystem": "vmd", 00:16:59.078 "config": [] 00:16:59.078 }, 00:16:59.078 { 00:16:59.078 "subsystem": "accel", 00:16:59.078 "config": [ 00:16:59.078 { 00:16:59.078 "method": "accel_set_options", 00:16:59.078 "params": { 00:16:59.078 "small_cache_size": 128, 00:16:59.078 "large_cache_size": 16, 00:16:59.078 "task_count": 2048, 00:16:59.078 "sequence_count": 2048, 00:16:59.078 "buf_count": 2048 00:16:59.078 } 00:16:59.078 } 00:16:59.078 ] 00:16:59.078 }, 00:16:59.078 { 00:16:59.078 "subsystem": "bdev", 00:16:59.078 "config": [ 00:16:59.078 { 00:16:59.078 "method": "bdev_set_options", 00:16:59.078 "params": { 00:16:59.078 "bdev_io_pool_size": 65535, 00:16:59.078 "bdev_io_cache_size": 256, 00:16:59.078 "bdev_auto_examine": true, 00:16:59.078 "iobuf_small_cache_size": 128, 00:16:59.078 "iobuf_large_cache_size": 16 00:16:59.078 } 00:16:59.078 }, 00:16:59.078 { 00:16:59.078 "method": "bdev_raid_set_options", 00:16:59.078 "params": { 00:16:59.078 "process_window_size_kb": 1024, 00:16:59.078 "process_max_bandwidth_mb_sec": 0 00:16:59.078 } 00:16:59.078 }, 00:16:59.078 { 00:16:59.078 "method": "bdev_iscsi_set_options", 00:16:59.078 "params": { 00:16:59.078 "timeout_sec": 30 00:16:59.078 } 00:16:59.078 }, 00:16:59.078 { 00:16:59.078 "method": "bdev_nvme_set_options", 00:16:59.078 "params": { 00:16:59.078 "action_on_timeout": "none", 00:16:59.078 "timeout_us": 0, 00:16:59.078 "timeout_admin_us": 0, 00:16:59.078 "keep_alive_timeout_ms": 10000, 00:16:59.078 "arbitration_burst": 0, 00:16:59.078 "low_priority_weight": 0, 00:16:59.078 "medium_priority_weight": 0, 00:16:59.078 "high_priority_weight": 0, 00:16:59.078 "nvme_adminq_poll_period_us": 10000, 00:16:59.078 "nvme_ioq_poll_period_us": 0, 00:16:59.078 "io_queue_requests": 0, 00:16:59.078 "delay_cmd_submit": true, 00:16:59.078 "transport_retry_count": 4, 00:16:59.078 "bdev_retry_count": 3, 00:16:59.078 "transport_ack_timeout": 0, 00:16:59.078 "ctrlr_loss_timeout_sec": 0, 00:16:59.078 "reconnect_delay_sec": 0, 00:16:59.078 "fast_io_fail_timeout_sec": 0, 00:16:59.078 "disable_auto_failback": false, 00:16:59.078 "generate_uuids": false, 00:16:59.078 "transport_tos": 0, 00:16:59.078 "nvme_error_stat": false, 00:16:59.078 "rdma_srq_size": 0, 00:16:59.078 "io_path_stat": false, 00:16:59.078 "allow_accel_sequence": false, 00:16:59.078 "rdma_max_cq_size": 0, 00:16:59.078 "rdma_cm_event_timeout_ms": 0, 00:16:59.078 "dhchap_digests": [ 00:16:59.078 "sha256", 00:16:59.078 "sha384", 00:16:59.078 "sha512" 00:16:59.078 ], 00:16:59.078 
"dhchap_dhgroups": [ 00:16:59.078 "null", 00:16:59.078 "ffdhe2048", 00:16:59.078 "ffdhe3072", 00:16:59.078 "ffdhe4096", 00:16:59.078 "ffdhe6144", 00:16:59.078 "ffdhe8192" 00:16:59.078 ] 00:16:59.078 } 00:16:59.078 }, 00:16:59.078 { 00:16:59.078 "method": "bdev_nvme_set_hotplug", 00:16:59.078 "params": { 00:16:59.078 "period_us": 100000, 00:16:59.078 "enable": false 00:16:59.078 } 00:16:59.078 }, 00:16:59.078 { 00:16:59.078 "method": "bdev_malloc_create", 00:16:59.078 "params": { 00:16:59.078 "name": "malloc0", 00:16:59.078 "num_blocks": 8192, 00:16:59.078 "block_size": 4096, 00:16:59.078 "physical_block_size": 4096, 00:16:59.078 "uuid": "c44c4d2c-7101-4066-a0d8-63dd654758e2", 00:16:59.078 "optimal_io_boundary": 0, 00:16:59.078 "md_size": 0, 00:16:59.078 "dif_type": 0, 00:16:59.078 "dif_is_head_of_md": false, 00:16:59.078 "dif_pi_format": 0 00:16:59.078 } 00:16:59.078 }, 00:16:59.078 { 00:16:59.078 "method": "bdev_wait_for_examine" 00:16:59.078 } 00:16:59.078 ] 00:16:59.078 }, 00:16:59.078 { 00:16:59.078 "subsystem": "nbd", 00:16:59.078 "config": [] 00:16:59.078 }, 00:16:59.078 { 00:16:59.078 "subsystem": "scheduler", 00:16:59.078 "config": [ 00:16:59.078 { 00:16:59.078 "method": "framework_set_scheduler", 00:16:59.078 "params": { 00:16:59.078 "name": "static" 00:16:59.078 } 00:16:59.078 } 00:16:59.078 ] 00:16:59.078 }, 00:16:59.078 { 00:16:59.078 "subsystem": "nvmf", 00:16:59.078 "config": [ 00:16:59.078 { 00:16:59.078 "method": "nvmf_set_config", 00:16:59.078 "params": { 00:16:59.078 "discovery_filter": "match_any", 00:16:59.078 "admin_cmd_passthru": { 00:16:59.078 "identify_ctrlr": false 00:16:59.078 }, 00:16:59.078 "dhchap_digests": [ 00:16:59.078 "sha256", 00:16:59.078 "sha384", 00:16:59.078 "sha512" 00:16:59.078 ], 00:16:59.078 "dhchap_dhgroups": [ 00:16:59.078 "null", 00:16:59.078 "ffdhe2048", 00:16:59.078 "ffdhe3072", 00:16:59.078 "ffdhe4096", 00:16:59.078 "ffdhe6144", 00:16:59.078 "ffdhe8192" 00:16:59.078 ] 00:16:59.078 } 00:16:59.078 }, 00:16:59.078 { 00:16:59.078 "method": "nvmf_set_max_subsystems", 00:16:59.078 "params": { 00:16:59.078 "max_subsystems": 1024 00:16:59.078 } 00:16:59.078 }, 00:16:59.078 { 00:16:59.078 "method": "nvmf_set_crdt", 00:16:59.078 "params": { 00:16:59.078 "crdt1": 0, 00:16:59.078 "crdt2": 0, 00:16:59.078 "crdt3": 0 00:16:59.078 } 00:16:59.078 }, 00:16:59.078 { 00:16:59.078 "method": "nvmf_create_transport", 00:16:59.078 "params": { 00:16:59.078 "trtype": "TCP", 00:16:59.078 "max_queue_depth": 128, 00:16:59.078 "max_io_qpairs_per_ctrlr": 127, 00:16:59.078 "in_capsule_data_size": 4096, 00:16:59.078 "max_io_size": 131072, 00:16:59.078 "io_unit_size": 131072, 00:16:59.078 "max_aq_depth": 128, 00:16:59.078 "num_shared_buffers": 511, 00:16:59.078 "buf_cache_size": 4294967295, 00:16:59.078 "dif_insert_or_strip": false, 00:16:59.078 "zcopy": false, 00:16:59.078 "c2h_success": false, 00:16:59.078 "sock_priority": 0, 00:16:59.078 "abort_timeout_sec": 1, 00:16:59.078 "ack_timeout": 0, 00:16:59.078 "data_wr_pool_size": 0 00:16:59.078 } 00:16:59.078 }, 00:16:59.078 { 00:16:59.078 "method": "nvmf_create_subsystem", 00:16:59.078 "params": { 00:16:59.078 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:16:59.078 "allow_any_host": false, 00:16:59.078 "serial_number": "00000000000000000000", 00:16:59.078 "model_number": "SPDK bdev Controller", 00:16:59.078 "max_namespaces": 32, 00:16:59.078 "min_cntlid": 1, 00:16:59.078 "max_cntlid": 65519, 00:16:59.078 "ana_reporting": false 00:16:59.078 } 00:16:59.078 }, 00:16:59.078 { 00:16:59.078 "method": "nvmf_subsystem_add_host", 
00:16:59.078 "params": { 00:16:59.078 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:16:59.078 "host": "nqn.2016-06.io.spdk:host1", 00:16:59.078 "psk": "key0" 00:16:59.078 } 00:16:59.078 }, 00:16:59.078 { 00:16:59.078 "method": "nvmf_subsystem_add_ns", 00:16:59.078 "params": { 00:16:59.078 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:16:59.078 "namespace": { 00:16:59.078 "nsid": 1, 00:16:59.078 "bdev_name": "malloc0", 00:16:59.078 "nguid": "C44C4D2C71014066A0D863DD654758E2", 00:16:59.078 "uuid": "c44c4d2c-7101-4066-a0d8-63dd654758e2", 00:16:59.078 "no_auto_visible": false 00:16:59.078 } 00:16:59.078 } 00:16:59.078 }, 00:16:59.078 { 00:16:59.078 "method": "nvmf_subsystem_add_listener", 00:16:59.078 "params": { 00:16:59.078 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:16:59.078 "listen_address": { 00:16:59.078 "trtype": "TCP", 00:16:59.078 "adrfam": "IPv4", 00:16:59.078 "traddr": "10.0.0.3", 00:16:59.078 "trsvcid": "4420" 00:16:59.078 }, 00:16:59.078 "secure_channel": false, 00:16:59.078 "sock_impl": "ssl" 00:16:59.078 } 00:16:59.078 } 00:16:59.078 ] 00:16:59.078 } 00:16:59.078 ] 00:16:59.078 }' 00:16:59.079 11:47:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@268 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock save_config 00:16:59.338 11:47:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@268 -- # bperfcfg='{ 00:16:59.339 "subsystems": [ 00:16:59.339 { 00:16:59.339 "subsystem": "keyring", 00:16:59.339 "config": [ 00:16:59.339 { 00:16:59.339 "method": "keyring_file_add_key", 00:16:59.339 "params": { 00:16:59.339 "name": "key0", 00:16:59.339 "path": "/tmp/tmp.zzUp6hglUU" 00:16:59.339 } 00:16:59.339 } 00:16:59.339 ] 00:16:59.339 }, 00:16:59.339 { 00:16:59.339 "subsystem": "iobuf", 00:16:59.339 "config": [ 00:16:59.339 { 00:16:59.339 "method": "iobuf_set_options", 00:16:59.339 "params": { 00:16:59.339 "small_pool_count": 8192, 00:16:59.339 "large_pool_count": 1024, 00:16:59.339 "small_bufsize": 8192, 00:16:59.339 "large_bufsize": 135168, 00:16:59.339 "enable_numa": false 00:16:59.339 } 00:16:59.339 } 00:16:59.339 ] 00:16:59.339 }, 00:16:59.339 { 00:16:59.339 "subsystem": "sock", 00:16:59.339 "config": [ 00:16:59.339 { 00:16:59.339 "method": "sock_set_default_impl", 00:16:59.339 "params": { 00:16:59.339 "impl_name": "uring" 00:16:59.339 } 00:16:59.339 }, 00:16:59.339 { 00:16:59.339 "method": "sock_impl_set_options", 00:16:59.339 "params": { 00:16:59.339 "impl_name": "ssl", 00:16:59.339 "recv_buf_size": 4096, 00:16:59.339 "send_buf_size": 4096, 00:16:59.339 "enable_recv_pipe": true, 00:16:59.339 "enable_quickack": false, 00:16:59.339 "enable_placement_id": 0, 00:16:59.339 "enable_zerocopy_send_server": true, 00:16:59.339 "enable_zerocopy_send_client": false, 00:16:59.339 "zerocopy_threshold": 0, 00:16:59.339 "tls_version": 0, 00:16:59.339 "enable_ktls": false 00:16:59.339 } 00:16:59.339 }, 00:16:59.339 { 00:16:59.339 "method": "sock_impl_set_options", 00:16:59.339 "params": { 00:16:59.339 "impl_name": "posix", 00:16:59.339 "recv_buf_size": 2097152, 00:16:59.339 "send_buf_size": 2097152, 00:16:59.339 "enable_recv_pipe": true, 00:16:59.339 "enable_quickack": false, 00:16:59.339 "enable_placement_id": 0, 00:16:59.339 "enable_zerocopy_send_server": true, 00:16:59.339 "enable_zerocopy_send_client": false, 00:16:59.339 "zerocopy_threshold": 0, 00:16:59.339 "tls_version": 0, 00:16:59.339 "enable_ktls": false 00:16:59.339 } 00:16:59.339 }, 00:16:59.339 { 00:16:59.339 "method": "sock_impl_set_options", 00:16:59.339 "params": { 00:16:59.339 "impl_name": "uring", 00:16:59.339 
"recv_buf_size": 2097152, 00:16:59.339 "send_buf_size": 2097152, 00:16:59.339 "enable_recv_pipe": true, 00:16:59.339 "enable_quickack": false, 00:16:59.339 "enable_placement_id": 0, 00:16:59.339 "enable_zerocopy_send_server": false, 00:16:59.339 "enable_zerocopy_send_client": false, 00:16:59.339 "zerocopy_threshold": 0, 00:16:59.339 "tls_version": 0, 00:16:59.339 "enable_ktls": false 00:16:59.339 } 00:16:59.339 } 00:16:59.339 ] 00:16:59.339 }, 00:16:59.339 { 00:16:59.339 "subsystem": "vmd", 00:16:59.339 "config": [] 00:16:59.339 }, 00:16:59.339 { 00:16:59.339 "subsystem": "accel", 00:16:59.339 "config": [ 00:16:59.339 { 00:16:59.339 "method": "accel_set_options", 00:16:59.339 "params": { 00:16:59.339 "small_cache_size": 128, 00:16:59.339 "large_cache_size": 16, 00:16:59.339 "task_count": 2048, 00:16:59.339 "sequence_count": 2048, 00:16:59.339 "buf_count": 2048 00:16:59.339 } 00:16:59.339 } 00:16:59.339 ] 00:16:59.339 }, 00:16:59.339 { 00:16:59.339 "subsystem": "bdev", 00:16:59.339 "config": [ 00:16:59.339 { 00:16:59.339 "method": "bdev_set_options", 00:16:59.339 "params": { 00:16:59.339 "bdev_io_pool_size": 65535, 00:16:59.339 "bdev_io_cache_size": 256, 00:16:59.339 "bdev_auto_examine": true, 00:16:59.339 "iobuf_small_cache_size": 128, 00:16:59.339 "iobuf_large_cache_size": 16 00:16:59.339 } 00:16:59.339 }, 00:16:59.339 { 00:16:59.339 "method": "bdev_raid_set_options", 00:16:59.339 "params": { 00:16:59.339 "process_window_size_kb": 1024, 00:16:59.339 "process_max_bandwidth_mb_sec": 0 00:16:59.339 } 00:16:59.339 }, 00:16:59.339 { 00:16:59.339 "method": "bdev_iscsi_set_options", 00:16:59.339 "params": { 00:16:59.339 "timeout_sec": 30 00:16:59.339 } 00:16:59.339 }, 00:16:59.339 { 00:16:59.339 "method": "bdev_nvme_set_options", 00:16:59.339 "params": { 00:16:59.339 "action_on_timeout": "none", 00:16:59.339 "timeout_us": 0, 00:16:59.339 "timeout_admin_us": 0, 00:16:59.339 "keep_alive_timeout_ms": 10000, 00:16:59.339 "arbitration_burst": 0, 00:16:59.339 "low_priority_weight": 0, 00:16:59.339 "medium_priority_weight": 0, 00:16:59.339 "high_priority_weight": 0, 00:16:59.339 "nvme_adminq_poll_period_us": 10000, 00:16:59.339 "nvme_ioq_poll_period_us": 0, 00:16:59.339 "io_queue_requests": 512, 00:16:59.339 "delay_cmd_submit": true, 00:16:59.339 "transport_retry_count": 4, 00:16:59.339 "bdev_retry_count": 3, 00:16:59.339 "transport_ack_timeout": 0, 00:16:59.339 "ctrlr_loss_timeout_sec": 0, 00:16:59.339 "reconnect_delay_sec": 0, 00:16:59.339 "fast_io_fail_timeout_sec": 0, 00:16:59.339 "disable_auto_failback": false, 00:16:59.339 "generate_uuids": false, 00:16:59.339 "transport_tos": 0, 00:16:59.339 "nvme_error_stat": false, 00:16:59.339 "rdma_srq_size": 0, 00:16:59.339 "io_path_stat": false, 00:16:59.339 "allow_accel_sequence": false, 00:16:59.339 "rdma_max_cq_size": 0, 00:16:59.339 "rdma_cm_event_timeout_ms": 0, 00:16:59.339 "dhchap_digests": [ 00:16:59.339 "sha256", 00:16:59.339 "sha384", 00:16:59.339 "sha512" 00:16:59.339 ], 00:16:59.339 "dhchap_dhgroups": [ 00:16:59.339 "null", 00:16:59.339 "ffdhe2048", 00:16:59.339 "ffdhe3072", 00:16:59.339 "ffdhe4096", 00:16:59.339 "ffdhe6144", 00:16:59.339 "ffdhe8192" 00:16:59.339 ] 00:16:59.339 } 00:16:59.339 }, 00:16:59.339 { 00:16:59.339 "method": "bdev_nvme_attach_controller", 00:16:59.339 "params": { 00:16:59.339 "name": "nvme0", 00:16:59.339 "trtype": "TCP", 00:16:59.339 "adrfam": "IPv4", 00:16:59.339 "traddr": "10.0.0.3", 00:16:59.339 "trsvcid": "4420", 00:16:59.339 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:16:59.339 "prchk_reftag": false, 00:16:59.339 
"prchk_guard": false, 00:16:59.339 "ctrlr_loss_timeout_sec": 0, 00:16:59.339 "reconnect_delay_sec": 0, 00:16:59.339 "fast_io_fail_timeout_sec": 0, 00:16:59.339 "psk": "key0", 00:16:59.339 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:16:59.339 "hdgst": false, 00:16:59.339 "ddgst": false, 00:16:59.339 "multipath": "multipath" 00:16:59.339 } 00:16:59.339 }, 00:16:59.339 { 00:16:59.339 "method": "bdev_nvme_set_hotplug", 00:16:59.339 "params": { 00:16:59.339 "period_us": 100000, 00:16:59.339 "enable": false 00:16:59.339 } 00:16:59.339 }, 00:16:59.339 { 00:16:59.339 "method": "bdev_enable_histogram", 00:16:59.339 "params": { 00:16:59.339 "name": "nvme0n1", 00:16:59.339 "enable": true 00:16:59.339 } 00:16:59.339 }, 00:16:59.339 { 00:16:59.339 "method": "bdev_wait_for_examine" 00:16:59.339 } 00:16:59.339 ] 00:16:59.339 }, 00:16:59.339 { 00:16:59.339 "subsystem": "nbd", 00:16:59.339 "config": [] 00:16:59.339 } 00:16:59.339 ] 00:16:59.339 }' 00:16:59.339 11:47:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@270 -- # killprocess 86562 00:16:59.339 11:47:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 86562 ']' 00:16:59.339 11:47:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 86562 00:16:59.339 11:47:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:16:59.339 11:47:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:16:59.339 11:47:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 86562 00:16:59.339 killing process with pid 86562 00:16:59.339 Received shutdown signal, test time was about 1.000000 seconds 00:16:59.340 00:16:59.340 Latency(us) 00:16:59.340 [2024-11-28T11:47:29.466Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:16:59.340 [2024-11-28T11:47:29.466Z] =================================================================================================================== 00:16:59.340 [2024-11-28T11:47:29.466Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:16:59.340 11:47:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:16:59.340 11:47:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:16:59.340 11:47:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 86562' 00:16:59.340 11:47:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 86562 00:16:59.340 11:47:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 86562 00:16:59.599 11:47:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@271 -- # killprocess 86537 00:16:59.599 11:47:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 86537 ']' 00:16:59.599 11:47:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 86537 00:16:59.599 11:47:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:16:59.599 11:47:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:16:59.599 11:47:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 86537 00:16:59.599 killing process with pid 86537 00:16:59.599 11:47:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_0 
00:16:59.599 11:47:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:16:59.599 11:47:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 86537' 00:16:59.599 11:47:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 86537 00:16:59.599 11:47:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 86537 00:16:59.858 11:47:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@273 -- # nvmfappstart -c /dev/fd/62 00:16:59.858 11:47:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:16:59.858 11:47:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@273 -- # echo '{ 00:16:59.858 "subsystems": [ 00:16:59.858 { 00:16:59.858 "subsystem": "keyring", 00:16:59.858 "config": [ 00:16:59.858 { 00:16:59.858 "method": "keyring_file_add_key", 00:16:59.858 "params": { 00:16:59.858 "name": "key0", 00:16:59.858 "path": "/tmp/tmp.zzUp6hglUU" 00:16:59.858 } 00:16:59.858 } 00:16:59.858 ] 00:16:59.858 }, 00:16:59.858 { 00:16:59.858 "subsystem": "iobuf", 00:16:59.858 "config": [ 00:16:59.858 { 00:16:59.858 "method": "iobuf_set_options", 00:16:59.858 "params": { 00:16:59.858 "small_pool_count": 8192, 00:16:59.858 "large_pool_count": 1024, 00:16:59.858 "small_bufsize": 8192, 00:16:59.858 "large_bufsize": 135168, 00:16:59.858 "enable_numa": false 00:16:59.858 } 00:16:59.858 } 00:16:59.858 ] 00:16:59.858 }, 00:16:59.858 { 00:16:59.858 "subsystem": "sock", 00:16:59.858 "config": [ 00:16:59.858 { 00:16:59.858 "method": "sock_set_default_impl", 00:16:59.858 "params": { 00:16:59.858 "impl_name": "uring" 00:16:59.858 } 00:16:59.858 }, 00:16:59.858 { 00:16:59.858 "method": "sock_impl_set_options", 00:16:59.858 "params": { 00:16:59.858 "impl_name": "ssl", 00:16:59.858 "recv_buf_size": 4096, 00:16:59.858 "send_buf_size": 4096, 00:16:59.858 "enable_recv_pipe": true, 00:16:59.858 "enable_quickack": false, 00:16:59.858 "enable_placement_id": 0, 00:16:59.858 "enable_zerocopy_send_server": true, 00:16:59.858 "enable_zerocopy_send_client": false, 00:16:59.858 "zerocopy_threshold": 0, 00:16:59.858 "tls_version": 0, 00:16:59.858 "enable_ktls": false 00:16:59.858 } 00:16:59.858 }, 00:16:59.858 { 00:16:59.858 "method": "sock_impl_set_options", 00:16:59.858 "params": { 00:16:59.858 "impl_name": "posix", 00:16:59.858 "recv_buf_size": 2097152, 00:16:59.858 "send_buf_size": 2097152, 00:16:59.858 "enable_recv_pipe": true, 00:16:59.858 "enable_quickack": false, 00:16:59.858 "enable_placement_id": 0, 00:16:59.858 "enable_zerocopy_send_server": true, 00:16:59.858 "enable_zerocopy_send_client": false, 00:16:59.858 "zerocopy_threshold": 0, 00:16:59.858 "tls_version": 0, 00:16:59.858 "enable_ktls": false 00:16:59.858 } 00:16:59.858 }, 00:16:59.858 { 00:16:59.858 "method": "sock_impl_set_options", 00:16:59.858 "params": { 00:16:59.858 "impl_name": "uring", 00:16:59.858 "recv_buf_size": 2097152, 00:16:59.858 "send_buf_size": 2097152, 00:16:59.858 "enable_recv_pipe": true, 00:16:59.858 "enable_quickack": false, 00:16:59.858 "enable_placement_id": 0, 00:16:59.858 "enable_zerocopy_send_server": false, 00:16:59.858 "enable_zerocopy_send_client": false, 00:16:59.858 "zerocopy_threshold": 0, 00:16:59.858 "tls_version": 0, 00:16:59.858 "enable_ktls": false 00:16:59.858 } 00:16:59.858 } 00:16:59.858 ] 00:16:59.858 }, 00:16:59.858 { 00:16:59.858 "subsystem": "vmd", 00:16:59.858 "config": [] 00:16:59.858 }, 00:16:59.858 { 00:16:59.858 
"subsystem": "accel", 00:16:59.858 "config": [ 00:16:59.858 { 00:16:59.858 "method": "accel_set_options", 00:16:59.858 "params": { 00:16:59.858 "small_cache_size": 128, 00:16:59.858 "large_cache_size": 16, 00:16:59.858 "task_count": 2048, 00:16:59.858 "sequence_count": 2048, 00:16:59.858 "buf_count": 2048 00:16:59.858 } 00:16:59.858 } 00:16:59.858 ] 00:16:59.858 }, 00:16:59.858 { 00:16:59.858 "subsystem": "bdev", 00:16:59.858 "config": [ 00:16:59.858 { 00:16:59.858 "method": "bdev_set_options", 00:16:59.858 "params": { 00:16:59.858 "bdev_io_pool_size": 65535, 00:16:59.858 "bdev_io_cache_size": 256, 00:16:59.858 "bdev_auto_examine": true, 00:16:59.858 "iobuf_small_cache_size": 128, 00:16:59.858 "iobuf_large_cache_size": 16 00:16:59.858 } 00:16:59.858 }, 00:16:59.858 { 00:16:59.858 "method": "bdev_raid_set_options", 00:16:59.858 "params": { 00:16:59.858 "process_window_size_kb": 1024, 00:16:59.858 "process_max_bandwidth_mb_sec": 0 00:16:59.858 } 00:16:59.858 }, 00:16:59.858 { 00:16:59.858 "method": "bdev_iscsi_set_options", 00:16:59.858 "params": { 00:16:59.858 "timeout_sec": 30 00:16:59.858 } 00:16:59.858 }, 00:16:59.858 { 00:16:59.858 "method": "bdev_nvme_set_options", 00:16:59.858 "params": { 00:16:59.858 "action_on_timeout": "none", 00:16:59.858 "timeout_us": 0, 00:16:59.858 "timeout_admin_us": 0, 00:16:59.858 "keep_alive_timeout_ms": 10000, 00:16:59.858 "arbitration_burst": 0, 00:16:59.858 "low_priority_weight": 0, 00:16:59.858 "medium_priority_weight": 0, 00:16:59.858 "high_priority_weight": 0, 00:16:59.858 "nvme_adminq_poll_period_us": 10000, 00:16:59.858 "nvme_ioq_poll_period_us": 0, 00:16:59.858 "io_queue_requests": 0, 00:16:59.858 "delay_cmd_submit": true, 00:16:59.858 "transport_retry_count": 4, 00:16:59.858 "bdev_retry_count": 3, 00:16:59.858 "transport_ack_timeout": 0, 00:16:59.858 "ctrlr_loss_timeout_sec": 0, 00:16:59.858 "reconnect_delay_sec": 0, 00:16:59.858 "fast_io_fail_timeout_sec": 0, 00:16:59.858 "disable_auto_failback": false, 00:16:59.858 "generate_uuids": false, 00:16:59.858 "transport_tos": 0, 00:16:59.858 "nvme_error_stat": false, 00:16:59.858 "rdma_srq_size": 0, 00:16:59.858 "io_path_stat": false, 00:16:59.858 "allow_accel_sequence": false, 00:16:59.858 "rdma_max_cq_size": 0, 00:16:59.858 "rdma_cm_event_timeout_ms": 0, 00:16:59.858 "dhchap_digests": [ 00:16:59.858 "sha256", 00:16:59.858 "sha384", 00:16:59.858 "sha512" 00:16:59.858 ], 00:16:59.858 "dhchap_dhgroups": [ 00:16:59.858 "null", 00:16:59.858 "ffdhe2048", 00:16:59.858 "ffdhe3072", 00:16:59.858 "ffdhe4096", 00:16:59.858 "ffdhe6144", 00:16:59.858 "ffdhe8192" 00:16:59.858 ] 00:16:59.858 } 00:16:59.858 }, 00:16:59.858 { 00:16:59.858 "method": "bdev_nvme_set_hotplug", 00:16:59.858 "params": { 00:16:59.858 "period_us": 100000, 00:16:59.858 "enable": false 00:16:59.858 } 00:16:59.858 }, 00:16:59.858 { 00:16:59.858 "method": "bdev_malloc_create", 00:16:59.858 "params": { 00:16:59.858 "name": "malloc0", 00:16:59.858 "num_blocks": 8192, 00:16:59.858 "block_size": 4096, 00:16:59.858 "physical_block_size": 4096, 00:16:59.858 "uuid": "c44c4d2c-7101-4066-a0d8-63dd654758e2", 00:16:59.858 "optimal_io_boundary": 0, 00:16:59.858 "md_size": 0, 00:16:59.858 "dif_type": 0, 00:16:59.858 "dif_is_head_of_md": false, 00:16:59.858 "dif_pi_format": 0 00:16:59.858 } 00:16:59.858 }, 00:16:59.858 { 00:16:59.858 "method": "bdev_wait_for_examine" 00:16:59.858 } 00:16:59.858 ] 00:16:59.858 }, 00:16:59.858 { 00:16:59.858 "subsystem": "nbd", 00:16:59.858 "config": [] 00:16:59.858 }, 00:16:59.858 { 00:16:59.858 "subsystem": "scheduler", 
00:16:59.858 "config": [ 00:16:59.858 { 00:16:59.858 "method": "framework_set_scheduler", 00:16:59.858 "params": { 00:16:59.858 "name": "static" 00:16:59.858 } 00:16:59.858 } 00:16:59.858 ] 00:16:59.858 }, 00:16:59.858 { 00:16:59.858 "subsystem": "nvmf", 00:16:59.858 "config": [ 00:16:59.858 { 00:16:59.858 "method": "nvmf_set_config", 00:16:59.858 "params": { 00:16:59.858 "discovery_filter": "match_any", 00:16:59.858 "admin_cmd_passthru": { 00:16:59.858 "identify_ctrlr": false 00:16:59.858 }, 00:16:59.858 "dhchap_digests": [ 00:16:59.858 "sha256", 00:16:59.858 "sha384", 00:16:59.859 "sha512" 00:16:59.859 ], 00:16:59.859 "dhchap_dhgroups": [ 00:16:59.859 "null", 00:16:59.859 "ffdhe2048", 00:16:59.859 "ffdhe3072", 00:16:59.859 "ffdhe4096", 00:16:59.859 "ffdhe6144", 00:16:59.859 "ffdhe8192" 00:16:59.859 ] 00:16:59.859 } 00:16:59.859 }, 00:16:59.859 { 00:16:59.859 "method": "nvmf_set_max_subsystems", 00:16:59.859 "params": { 00:16:59.859 "max_subsystems": 1024 00:16:59.859 } 00:16:59.859 }, 00:16:59.859 { 00:16:59.859 "method": "nvmf_set_crdt", 00:16:59.859 "params": { 00:16:59.859 "crdt1": 0, 00:16:59.859 "crdt2": 0, 00:16:59.859 "crdt3": 0 00:16:59.859 } 00:16:59.859 }, 00:16:59.859 { 00:16:59.859 "method": "nvmf_create_transport", 00:16:59.859 "params": { 00:16:59.859 "trtype": "TCP", 00:16:59.859 "max_queue_depth": 128, 00:16:59.859 "max_io_qpairs_per_ctrlr": 127, 00:16:59.859 "in_capsule_data_size": 4096, 00:16:59.859 "max_io_size": 131072, 00:16:59.859 "io_unit_size": 131072, 00:16:59.859 "max_aq_depth": 128, 00:16:59.859 "num_shared_buffers": 511, 00:16:59.859 "buf_cache_size": 4294967295, 00:16:59.859 "dif_insert_or_strip": false, 00:16:59.859 "zcopy": false, 00:16:59.859 "c2h_success": false, 00:16:59.859 "sock_priority": 0, 00:16:59.859 "abort_timeout_sec": 1, 00:16:59.859 "ack_timeout": 0, 00:16:59.859 "data_wr_pool_size": 0 00:16:59.859 } 00:16:59.859 }, 00:16:59.859 { 00:16:59.859 "method": "nvmf_create_subsystem", 00:16:59.859 "params": { 00:16:59.859 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:16:59.859 "allow_any_host": false, 00:16:59.859 "serial_number": "00000000000000000000", 00:16:59.859 "model_number": "SPDK bdev Controller", 00:16:59.859 "max_namespaces": 32, 00:16:59.859 "min_cntlid": 1, 00:16:59.859 "max_cntlid": 65519, 00:16:59.859 "ana_reporting": false 00:16:59.859 } 00:16:59.859 }, 00:16:59.859 { 00:16:59.859 "method": "nvmf_subsystem_add_host", 00:16:59.859 "params": { 00:16:59.859 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:16:59.859 "host": "nqn.2016-06.io.spdk:host1", 00:16:59.859 "psk": "key0" 00:16:59.859 } 00:16:59.859 }, 00:16:59.859 { 00:16:59.859 "method": "nvmf_subsystem_add_ns", 00:16:59.859 "params": { 00:16:59.859 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:16:59.859 "namespace": { 00:16:59.859 "nsid": 1, 00:16:59.859 "bdev_name": "malloc0", 00:16:59.859 "nguid": "C44C4D2C71014066A0D863DD654758E2", 00:16:59.859 "uuid": "c44c4d2c-7101-4066-a0d8-63dd654758e2", 00:16:59.859 "no_auto_visible": false 00:16:59.859 } 00:16:59.859 } 00:16:59.859 }, 00:16:59.859 { 00:16:59.859 "method": "nvmf_subsystem_add_listener", 00:16:59.859 "params": { 00:16:59.859 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:16:59.859 "listen_address": { 00:16:59.859 "trtype": "TCP", 00:16:59.859 "adrfam": "IPv4", 00:16:59.859 "traddr": "10.0.0.3", 00:16:59.859 "trsvcid": "4420" 00:16:59.859 }, 00:16:59.859 "secure_channel": false, 00:16:59.859 "sock_impl": "ssl" 00:16:59.859 } 00:16:59.859 } 00:16:59.859 ] 00:16:59.859 } 00:16:59.859 ] 00:16:59.859 }' 00:16:59.859 11:47:29 
nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@726 -- # xtrace_disable 00:16:59.859 11:47:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:16:59.859 11:47:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@509 -- # nvmfpid=86615 00:16:59.859 11:47:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@508 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -c /dev/fd/62 00:16:59.859 11:47:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@510 -- # waitforlisten 86615 00:16:59.859 11:47:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 86615 ']' 00:16:59.859 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:16:59.859 11:47:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:16:59.859 11:47:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:16:59.859 11:47:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:16:59.859 11:47:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:16:59.859 11:47:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:16:59.859 [2024-11-28 11:47:29.914909] Starting SPDK v25.01-pre git sha1 35cd3e84d / DPDK 24.11.0-rc4 initialization... 00:16:59.859 [2024-11-28 11:47:29.915280] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:17:00.118 [2024-11-28 11:47:30.043493] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.11.0-rc4 is used. There is no support for it in SPDK. Enabled only for validation. 00:17:00.118 [2024-11-28 11:47:30.061714] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:00.118 [2024-11-28 11:47:30.119296] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:17:00.118 [2024-11-28 11:47:30.119710] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:17:00.118 [2024-11-28 11:47:30.119754] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:17:00.118 [2024-11-28 11:47:30.119772] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:17:00.118 [2024-11-28 11:47:30.119781] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
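Because the target is launched with -e 0xFFFF, every tracepoint group is enabled, and the notices above list the two ways to get at the trace data; the cleanup step at the end of this test archives the same shared-memory file into the output directory. Both commands below are taken directly from the notice text (run them while the target with instance ID 0 is still alive; the copy destination is arbitrary):

  # live snapshot of nvmf tracepoints from the running target (instance id 0)
  spdk_trace -s nvmf -i 0
  # or keep the raw trace buffer for offline analysis after the target exits
  cp /dev/shm/nvmf_trace.0 /tmp/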
00:17:00.118 [2024-11-28 11:47:30.120395] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:17:00.376 [2024-11-28 11:47:30.305515] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:17:00.376 [2024-11-28 11:47:30.397976] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:17:00.376 [2024-11-28 11:47:30.429928] tcp.c:1031:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:17:00.376 [2024-11-28 11:47:30.430180] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:17:00.943 11:47:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:17:00.943 11:47:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:17:00.943 11:47:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:17:00.943 11:47:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@732 -- # xtrace_disable 00:17:00.943 11:47:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:17:00.943 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:17:00.943 11:47:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:17:00.943 11:47:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@276 -- # bdevperf_pid=86647 00:17:00.943 11:47:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@277 -- # waitforlisten 86647 /var/tmp/bdevperf.sock 00:17:00.943 11:47:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 86647 ']' 00:17:00.943 11:47:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:17:00.943 11:47:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:17:00.943 11:47:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@274 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 2 -z -r /var/tmp/bdevperf.sock -q 128 -o 4k -w verify -t 1 -c /dev/fd/63 00:17:00.943 11:47:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 
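Unlike the earlier passes, this last pass starts both applications from canned configuration: the JSON captured above with save_config is piped back into nvmf_tgt as -c /dev/fd/62 and into bdevperf as -c /dev/fd/63, and the blob echoed just below is exactly what goes down that pipe. Outside of this harness the same round-trip works with an ordinary file; a minimal sketch, assuming a target already running on the default /var/tmp/spdk.sock and a scratch file named tgt.json chosen here for illustration:

  # capture the live configuration of the running target as JSON
  scripts/rpc.py save_config > tgt.json
  # ...later, start a fresh target from that snapshot instead of re-issuing every RPC by hand
  build/bin/nvmf_tgt -c tgt.json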
00:17:00.943 11:47:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:17:00.943 11:47:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:17:00.943 11:47:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@274 -- # echo '{ 00:17:00.943 "subsystems": [ 00:17:00.943 { 00:17:00.943 "subsystem": "keyring", 00:17:00.943 "config": [ 00:17:00.943 { 00:17:00.943 "method": "keyring_file_add_key", 00:17:00.943 "params": { 00:17:00.943 "name": "key0", 00:17:00.943 "path": "/tmp/tmp.zzUp6hglUU" 00:17:00.943 } 00:17:00.943 } 00:17:00.943 ] 00:17:00.943 }, 00:17:00.943 { 00:17:00.943 "subsystem": "iobuf", 00:17:00.943 "config": [ 00:17:00.943 { 00:17:00.943 "method": "iobuf_set_options", 00:17:00.943 "params": { 00:17:00.943 "small_pool_count": 8192, 00:17:00.943 "large_pool_count": 1024, 00:17:00.943 "small_bufsize": 8192, 00:17:00.943 "large_bufsize": 135168, 00:17:00.943 "enable_numa": false 00:17:00.943 } 00:17:00.943 } 00:17:00.943 ] 00:17:00.943 }, 00:17:00.943 { 00:17:00.943 "subsystem": "sock", 00:17:00.943 "config": [ 00:17:00.943 { 00:17:00.943 "method": "sock_set_default_impl", 00:17:00.943 "params": { 00:17:00.943 "impl_name": "uring" 00:17:00.943 } 00:17:00.943 }, 00:17:00.943 { 00:17:00.943 "method": "sock_impl_set_options", 00:17:00.943 "params": { 00:17:00.943 "impl_name": "ssl", 00:17:00.943 "recv_buf_size": 4096, 00:17:00.943 "send_buf_size": 4096, 00:17:00.943 "enable_recv_pipe": true, 00:17:00.943 "enable_quickack": false, 00:17:00.943 "enable_placement_id": 0, 00:17:00.943 "enable_zerocopy_send_server": true, 00:17:00.943 "enable_zerocopy_send_client": false, 00:17:00.943 "zerocopy_threshold": 0, 00:17:00.943 "tls_version": 0, 00:17:00.943 "enable_ktls": false 00:17:00.943 } 00:17:00.943 }, 00:17:00.943 { 00:17:00.943 "method": "sock_impl_set_options", 00:17:00.943 "params": { 00:17:00.943 "impl_name": "posix", 00:17:00.943 "recv_buf_size": 2097152, 00:17:00.943 "send_buf_size": 2097152, 00:17:00.943 "enable_recv_pipe": true, 00:17:00.943 "enable_quickack": false, 00:17:00.943 "enable_placement_id": 0, 00:17:00.943 "enable_zerocopy_send_server": true, 00:17:00.943 "enable_zerocopy_send_client": false, 00:17:00.943 "zerocopy_threshold": 0, 00:17:00.943 "tls_version": 0, 00:17:00.943 "enable_ktls": false 00:17:00.943 } 00:17:00.943 }, 00:17:00.943 { 00:17:00.943 "method": "sock_impl_set_options", 00:17:00.943 "params": { 00:17:00.943 "impl_name": "uring", 00:17:00.943 "recv_buf_size": 2097152, 00:17:00.943 "send_buf_size": 2097152, 00:17:00.943 "enable_recv_pipe": true, 00:17:00.943 "enable_quickack": false, 00:17:00.943 "enable_placement_id": 0, 00:17:00.943 "enable_zerocopy_send_server": false, 00:17:00.943 "enable_zerocopy_send_client": false, 00:17:00.943 "zerocopy_threshold": 0, 00:17:00.943 "tls_version": 0, 00:17:00.943 "enable_ktls": false 00:17:00.943 } 00:17:00.943 } 00:17:00.943 ] 00:17:00.943 }, 00:17:00.943 { 00:17:00.943 "subsystem": "vmd", 00:17:00.943 "config": [] 00:17:00.943 }, 00:17:00.943 { 00:17:00.943 "subsystem": "accel", 00:17:00.943 "config": [ 00:17:00.943 { 00:17:00.943 "method": "accel_set_options", 00:17:00.943 "params": { 00:17:00.943 "small_cache_size": 128, 00:17:00.943 "large_cache_size": 16, 00:17:00.943 "task_count": 2048, 00:17:00.943 "sequence_count": 2048, 00:17:00.943 "buf_count": 2048 00:17:00.943 } 00:17:00.943 } 00:17:00.943 ] 00:17:00.943 }, 00:17:00.943 { 00:17:00.943 "subsystem": "bdev", 00:17:00.943 "config": [ 00:17:00.943 { 00:17:00.943 "method": 
"bdev_set_options", 00:17:00.943 "params": { 00:17:00.943 "bdev_io_pool_size": 65535, 00:17:00.944 "bdev_io_cache_size": 256, 00:17:00.944 "bdev_auto_examine": true, 00:17:00.944 "iobuf_small_cache_size": 128, 00:17:00.944 "iobuf_large_cache_size": 16 00:17:00.944 } 00:17:00.944 }, 00:17:00.944 { 00:17:00.944 "method": "bdev_raid_set_options", 00:17:00.944 "params": { 00:17:00.944 "process_window_size_kb": 1024, 00:17:00.944 "process_max_bandwidth_mb_sec": 0 00:17:00.944 } 00:17:00.944 }, 00:17:00.944 { 00:17:00.944 "method": "bdev_iscsi_set_options", 00:17:00.944 "params": { 00:17:00.944 "timeout_sec": 30 00:17:00.944 } 00:17:00.944 }, 00:17:00.944 { 00:17:00.944 "method": "bdev_nvme_set_options", 00:17:00.944 "params": { 00:17:00.944 "action_on_timeout": "none", 00:17:00.944 "timeout_us": 0, 00:17:00.944 "timeout_admin_us": 0, 00:17:00.944 "keep_alive_timeout_ms": 10000, 00:17:00.944 "arbitration_burst": 0, 00:17:00.944 "low_priority_weight": 0, 00:17:00.944 "medium_priority_weight": 0, 00:17:00.944 "high_priority_weight": 0, 00:17:00.944 "nvme_adminq_poll_period_us": 10000, 00:17:00.944 "nvme_ioq_poll_period_us": 0, 00:17:00.944 "io_queue_requests": 512, 00:17:00.944 "delay_cmd_submit": true, 00:17:00.944 "transport_retry_count": 4, 00:17:00.944 "bdev_retry_count": 3, 00:17:00.944 "transport_ack_timeout": 0, 00:17:00.944 "ctrlr_loss_timeout_sec": 0, 00:17:00.944 "reconnect_delay_sec": 0, 00:17:00.944 "fast_io_fail_timeout_sec": 0, 00:17:00.944 "disable_auto_failback": false, 00:17:00.944 "generate_uuids": false, 00:17:00.944 "transport_tos": 0, 00:17:00.944 "nvme_error_stat": false, 00:17:00.944 "rdma_srq_size": 0, 00:17:00.944 "io_path_stat": false, 00:17:00.944 "allow_accel_sequence": false, 00:17:00.944 "rdma_max_cq_size": 0, 00:17:00.944 "rdma_cm_event_timeout_ms": 0, 00:17:00.944 "dhchap_digests": [ 00:17:00.944 "sha256", 00:17:00.944 "sha384", 00:17:00.944 "sha512" 00:17:00.944 ], 00:17:00.944 "dhchap_dhgroups": [ 00:17:00.944 "null", 00:17:00.944 "ffdhe2048", 00:17:00.944 "ffdhe3072", 00:17:00.944 "ffdhe4096", 00:17:00.944 "ffdhe6144", 00:17:00.944 "ffdhe8192" 00:17:00.944 ] 00:17:00.944 } 00:17:00.944 }, 00:17:00.944 { 00:17:00.944 "method": "bdev_nvme_attach_controller", 00:17:00.944 "params": { 00:17:00.944 "name": "nvme0", 00:17:00.944 "trtype": "TCP", 00:17:00.944 "adrfam": "IPv4", 00:17:00.944 "traddr": "10.0.0.3", 00:17:00.944 "trsvcid": "4420", 00:17:00.944 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:17:00.944 "prchk_reftag": false, 00:17:00.944 "prchk_guard": false, 00:17:00.944 "ctrlr_loss_timeout_sec": 0, 00:17:00.944 "reconnect_delay_sec": 0, 00:17:00.944 "fast_io_fail_timeout_sec": 0, 00:17:00.944 "psk": "key0", 00:17:00.944 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:17:00.944 "hdgst": false, 00:17:00.944 "ddgst": false, 00:17:00.944 "multipath": "multipath" 00:17:00.944 } 00:17:00.944 }, 00:17:00.944 { 00:17:00.944 "method": "bdev_nvme_set_hotplug", 00:17:00.944 "params": { 00:17:00.944 "period_us": 100000, 00:17:00.944 "enable": false 00:17:00.944 } 00:17:00.944 }, 00:17:00.944 { 00:17:00.944 "method": "bdev_enable_histogram", 00:17:00.944 "params": { 00:17:00.944 "name": "nvme0n1", 00:17:00.944 "enable": true 00:17:00.944 } 00:17:00.944 }, 00:17:00.944 { 00:17:00.944 "method": "bdev_wait_for_examine" 00:17:00.944 } 00:17:00.944 ] 00:17:00.944 }, 00:17:00.944 { 00:17:00.944 "subsystem": "nbd", 00:17:00.944 "config": [] 00:17:00.944 } 00:17:00.944 ] 00:17:00.944 }' 00:17:00.944 [2024-11-28 11:47:31.035166] Starting SPDK v25.01-pre git sha1 35cd3e84d / DPDK 
24.11.0-rc4 initialization... 00:17:00.944 [2024-11-28 11:47:31.035458] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid86647 ] 00:17:01.202 [2024-11-28 11:47:31.156086] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.11.0-rc4 is used. There is no support for it in SPDK. Enabled only for validation. 00:17:01.202 [2024-11-28 11:47:31.179932] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:01.202 [2024-11-28 11:47:31.238636] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:17:01.461 [2024-11-28 11:47:31.389948] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:17:01.461 [2024-11-28 11:47:31.448533] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:17:02.027 11:47:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:17:02.027 11:47:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:17:02.027 11:47:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@279 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:17:02.027 11:47:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@279 -- # jq -r '.[].name' 00:17:02.285 11:47:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@279 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:02.285 11:47:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@280 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:17:02.544 Running I/O for 1 seconds... 
00:17:03.480 3980.00 IOPS, 15.55 MiB/s 00:17:03.480 Latency(us) 00:17:03.480 [2024-11-28T11:47:33.606Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:17:03.480 Job: nvme0n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:17:03.480 Verification LBA range: start 0x0 length 0x2000 00:17:03.480 nvme0n1 : 1.02 4029.93 15.74 0.00 0.00 31428.32 8162.21 28835.84 00:17:03.480 [2024-11-28T11:47:33.606Z] =================================================================================================================== 00:17:03.480 [2024-11-28T11:47:33.606Z] Total : 4029.93 15.74 0.00 0.00 31428.32 8162.21 28835.84 00:17:03.480 { 00:17:03.480 "results": [ 00:17:03.480 { 00:17:03.480 "job": "nvme0n1", 00:17:03.480 "core_mask": "0x2", 00:17:03.480 "workload": "verify", 00:17:03.480 "status": "finished", 00:17:03.480 "verify_range": { 00:17:03.480 "start": 0, 00:17:03.480 "length": 8192 00:17:03.480 }, 00:17:03.480 "queue_depth": 128, 00:17:03.480 "io_size": 4096, 00:17:03.480 "runtime": 1.019373, 00:17:03.480 "iops": 4029.9282009627486, 00:17:03.480 "mibps": 15.741907035010737, 00:17:03.480 "io_failed": 0, 00:17:03.480 "io_timeout": 0, 00:17:03.480 "avg_latency_us": 31428.324829600777, 00:17:03.480 "min_latency_us": 8162.210909090909, 00:17:03.480 "max_latency_us": 28835.84 00:17:03.480 } 00:17:03.480 ], 00:17:03.480 "core_count": 1 00:17:03.480 } 00:17:03.481 11:47:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@282 -- # trap - SIGINT SIGTERM EXIT 00:17:03.481 11:47:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@283 -- # cleanup 00:17:03.481 11:47:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@15 -- # process_shm --id 0 00:17:03.481 11:47:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@812 -- # type=--id 00:17:03.481 11:47:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@813 -- # id=0 00:17:03.481 11:47:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@814 -- # '[' --id = --pid ']' 00:17:03.481 11:47:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@818 -- # find /dev/shm -name '*.0' -printf '%f\n' 00:17:03.481 11:47:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@818 -- # shm_files=nvmf_trace.0 00:17:03.481 11:47:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@820 -- # [[ -z nvmf_trace.0 ]] 00:17:03.481 11:47:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@824 -- # for n in $shm_files 00:17:03.481 11:47:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@825 -- # tar -C /dev/shm/ -cvzf /home/vagrant/spdk_repo/spdk/../output/nvmf_trace.0_shm.tar.gz nvmf_trace.0 00:17:03.481 nvmf_trace.0 00:17:03.762 11:47:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@827 -- # return 0 00:17:03.762 11:47:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@16 -- # killprocess 86647 00:17:03.762 11:47:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 86647 ']' 00:17:03.762 11:47:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 86647 00:17:03.762 11:47:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:17:03.762 11:47:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:17:03.762 11:47:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 86647 00:17:03.762 killing process with pid 
86647 00:17:03.762 Received shutdown signal, test time was about 1.000000 seconds 00:17:03.762 00:17:03.762 Latency(us) 00:17:03.762 [2024-11-28T11:47:33.888Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:17:03.762 [2024-11-28T11:47:33.888Z] =================================================================================================================== 00:17:03.762 [2024-11-28T11:47:33.888Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:17:03.762 11:47:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:17:03.762 11:47:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:17:03.762 11:47:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 86647' 00:17:03.762 11:47:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 86647 00:17:03.762 11:47:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 86647 00:17:04.020 11:47:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@17 -- # nvmftestfini 00:17:04.020 11:47:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@516 -- # nvmfcleanup 00:17:04.020 11:47:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@121 -- # sync 00:17:04.020 11:47:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:17:04.020 11:47:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@124 -- # set +e 00:17:04.020 11:47:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@125 -- # for i in {1..20} 00:17:04.020 11:47:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:17:04.020 rmmod nvme_tcp 00:17:04.020 rmmod nvme_fabrics 00:17:04.020 rmmod nvme_keyring 00:17:04.020 11:47:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:17:04.020 11:47:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@128 -- # set -e 00:17:04.020 11:47:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@129 -- # return 0 00:17:04.020 11:47:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@517 -- # '[' -n 86615 ']' 00:17:04.020 11:47:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@518 -- # killprocess 86615 00:17:04.021 11:47:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 86615 ']' 00:17:04.021 11:47:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 86615 00:17:04.021 11:47:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:17:04.021 11:47:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:17:04.021 11:47:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 86615 00:17:04.021 killing process with pid 86615 00:17:04.021 11:47:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:17:04.021 11:47:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:17:04.021 11:47:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 86615' 00:17:04.021 11:47:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 86615 00:17:04.021 11:47:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 86615 
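The killprocess chatter that brackets each of these runs is ordinary xtrace output from the shared helpers in autotest_common.sh and is hard to read linearly. An approximate reconstruction of what the trace above corresponds to (simplified, inferred from the traced commands rather than the helper's source, so treat it as illustrative only):

  killprocess() {
      local pid=$1
      [ -z "$pid" ] && return 1                      # no pid given
      kill -0 "$pid" || return 0                     # nothing to do if it is already gone
      if [ "$(uname)" = Linux ]; then
          process_name=$(ps --no-headers -o comm= "$pid")   # e.g. reactor_0 / reactor_1
      fi
      # (the real helper also special-cases processes running under sudo)
      echo "killing process with pid $pid"
      kill "$pid"
      wait "$pid"                                    # reap it so ports and sockets are really free
  }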
00:17:04.279 11:47:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:17:04.279 11:47:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:17:04.279 11:47:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:17:04.279 11:47:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@297 -- # iptr 00:17:04.279 11:47:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@791 -- # iptables-save 00:17:04.279 11:47:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@791 -- # iptables-restore 00:17:04.279 11:47:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:17:04.279 11:47:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@298 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:17:04.279 11:47:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@299 -- # nvmf_veth_fini 00:17:04.279 11:47:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@233 -- # ip link set nvmf_init_br nomaster 00:17:04.279 11:47:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@234 -- # ip link set nvmf_init_br2 nomaster 00:17:04.279 11:47:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@235 -- # ip link set nvmf_tgt_br nomaster 00:17:04.279 11:47:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@236 -- # ip link set nvmf_tgt_br2 nomaster 00:17:04.537 11:47:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@237 -- # ip link set nvmf_init_br down 00:17:04.537 11:47:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@238 -- # ip link set nvmf_init_br2 down 00:17:04.537 11:47:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@239 -- # ip link set nvmf_tgt_br down 00:17:04.537 11:47:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@240 -- # ip link set nvmf_tgt_br2 down 00:17:04.537 11:47:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@241 -- # ip link delete nvmf_br type bridge 00:17:04.537 11:47:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@242 -- # ip link delete nvmf_init_if 00:17:04.537 11:47:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@243 -- # ip link delete nvmf_init_if2 00:17:04.537 11:47:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@244 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:17:04.537 11:47:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@245 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:17:04.537 11:47:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@246 -- # remove_spdk_ns 00:17:04.537 11:47:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:17:04.537 11:47:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:17:04.537 11:47:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:17:04.537 11:47:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@300 -- # return 0 00:17:04.537 11:47:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@18 -- # rm -f /tmp/tmp.l1qYNNYfkl /tmp/tmp.jUy0pOPXYV /tmp/tmp.zzUp6hglUU 00:17:04.537 00:17:04.537 real 1m27.814s 00:17:04.537 user 2m19.248s 00:17:04.537 sys 0m29.659s 00:17:04.537 ************************************ 00:17:04.537 END TEST nvmf_tls 00:17:04.537 ************************************ 00:17:04.537 11:47:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@1130 -- # 
xtrace_disable 00:17:04.537 11:47:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:17:04.537 11:47:34 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@42 -- # run_test nvmf_fips /home/vagrant/spdk_repo/spdk/test/nvmf/fips/fips.sh --transport=tcp 00:17:04.537 11:47:34 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:17:04.537 11:47:34 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:17:04.537 11:47:34 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:17:04.797 ************************************ 00:17:04.797 START TEST nvmf_fips 00:17:04.797 ************************************ 00:17:04.797 11:47:34 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/fips/fips.sh --transport=tcp 00:17:04.797 * Looking for test storage... 00:17:04.797 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/fips 00:17:04.797 11:47:34 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:17:04.797 11:47:34 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@1693 -- # lcov --version 00:17:04.797 11:47:34 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:17:04.797 11:47:34 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:17:04.797 11:47:34 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:17:04.797 11:47:34 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@333 -- # local ver1 ver1_l 00:17:04.797 11:47:34 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@334 -- # local ver2 ver2_l 00:17:04.797 11:47:34 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@336 -- # IFS=.-: 00:17:04.797 11:47:34 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@336 -- # read -ra ver1 00:17:04.797 11:47:34 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@337 -- # IFS=.-: 00:17:04.797 11:47:34 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@337 -- # read -ra ver2 00:17:04.797 11:47:34 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@338 -- # local 'op=<' 00:17:04.797 11:47:34 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@340 -- # ver1_l=2 00:17:04.797 11:47:34 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@341 -- # ver2_l=1 00:17:04.797 11:47:34 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:17:04.797 11:47:34 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@344 -- # case "$op" in 00:17:04.797 11:47:34 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@345 -- # : 1 00:17:04.797 11:47:34 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@364 -- # (( v = 0 )) 00:17:04.797 11:47:34 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:17:04.797 11:47:34 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@365 -- # decimal 1 00:17:04.797 11:47:34 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@353 -- # local d=1 00:17:04.797 11:47:34 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:17:04.797 11:47:34 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@355 -- # echo 1 00:17:04.797 11:47:34 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@365 -- # ver1[v]=1 00:17:04.797 11:47:34 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@366 -- # decimal 2 00:17:04.797 11:47:34 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@353 -- # local d=2 00:17:04.797 11:47:34 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:17:04.797 11:47:34 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@355 -- # echo 2 00:17:04.798 11:47:34 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@366 -- # ver2[v]=2 00:17:04.798 11:47:34 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:17:04.798 11:47:34 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:17:04.798 11:47:34 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@368 -- # return 0 00:17:04.798 11:47:34 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:17:04.798 11:47:34 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:17:04.798 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:04.798 --rc genhtml_branch_coverage=1 00:17:04.798 --rc genhtml_function_coverage=1 00:17:04.798 --rc genhtml_legend=1 00:17:04.798 --rc geninfo_all_blocks=1 00:17:04.798 --rc geninfo_unexecuted_blocks=1 00:17:04.798 00:17:04.798 ' 00:17:04.798 11:47:34 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:17:04.798 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:04.798 --rc genhtml_branch_coverage=1 00:17:04.798 --rc genhtml_function_coverage=1 00:17:04.798 --rc genhtml_legend=1 00:17:04.798 --rc geninfo_all_blocks=1 00:17:04.798 --rc geninfo_unexecuted_blocks=1 00:17:04.798 00:17:04.798 ' 00:17:04.798 11:47:34 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:17:04.798 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:04.798 --rc genhtml_branch_coverage=1 00:17:04.798 --rc genhtml_function_coverage=1 00:17:04.798 --rc genhtml_legend=1 00:17:04.798 --rc geninfo_all_blocks=1 00:17:04.798 --rc geninfo_unexecuted_blocks=1 00:17:04.798 00:17:04.798 ' 00:17:04.798 11:47:34 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:17:04.798 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:04.798 --rc genhtml_branch_coverage=1 00:17:04.798 --rc genhtml_function_coverage=1 00:17:04.798 --rc genhtml_legend=1 00:17:04.798 --rc geninfo_all_blocks=1 00:17:04.798 --rc geninfo_unexecuted_blocks=1 00:17:04.798 00:17:04.798 ' 00:17:04.798 11:47:34 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@11 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:17:04.798 11:47:34 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@7 -- # uname -s 00:17:04.798 11:47:34 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 
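The cmp_versions walk traced above (scripts/common.sh) is a per-field numeric compare of two dotted version strings; a condensed sketch of the same idea, assuming nothing beyond bash itself:

version_lt() {   # true when $1 sorts before $2, e.g. version_lt 1.15 2
    local IFS=.-:
    local -a v1 v2
    read -ra v1 <<< "$1"
    read -ra v2 <<< "$2"
    local i len=$(( ${#v1[@]} > ${#v2[@]} ? ${#v1[@]} : ${#v2[@]} ))
    for (( i = 0; i < len; i++ )); do
        (( ${v1[i]:-0} > ${v2[i]:-0} )) && return 1   # missing fields compare as 0
        (( ${v1[i]:-0} < ${v2[i]:-0} )) && return 0
    done
    return 1   # equal versions are not "less than"
}

The "ge 3.1.1 3.0.0" OpenSSL check that fips.sh performs further down is the same field-by-field comparison with the operator and return sense flipped.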
00:17:04.798 11:47:34 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:17:04.798 11:47:34 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:17:04.798 11:47:34 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:17:04.798 11:47:34 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:17:04.798 11:47:34 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:17:04.798 11:47:34 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:17:04.798 11:47:34 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:17:04.798 11:47:34 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:17:04.798 11:47:34 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:17:04.798 11:47:34 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:f820f793-c892-4aa4-a8a4-5ed3fda41d6c 00:17:04.798 11:47:34 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@18 -- # NVME_HOSTID=f820f793-c892-4aa4-a8a4-5ed3fda41d6c 00:17:04.798 11:47:34 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:17:04.798 11:47:34 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:17:04.798 11:47:34 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:17:04.798 11:47:34 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:17:04.798 11:47:34 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:17:04.798 11:47:34 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@15 -- # shopt -s extglob 00:17:04.798 11:47:34 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:17:04.798 11:47:34 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:17:04.798 11:47:34 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:17:04.798 11:47:34 nvmf_tcp.nvmf_target_extra.nvmf_fips -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:04.798 11:47:34 nvmf_tcp.nvmf_target_extra.nvmf_fips -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:04.798 11:47:34 nvmf_tcp.nvmf_target_extra.nvmf_fips -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:04.798 11:47:34 nvmf_tcp.nvmf_target_extra.nvmf_fips -- paths/export.sh@5 -- # export PATH 00:17:04.798 11:47:34 nvmf_tcp.nvmf_target_extra.nvmf_fips -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:04.798 11:47:34 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@51 -- # : 0 00:17:04.798 11:47:34 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:17:04.798 11:47:34 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:17:04.798 11:47:34 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:17:04.798 11:47:34 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:17:04.798 11:47:34 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:17:04.798 11:47:34 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:17:04.798 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:17:04.798 11:47:34 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:17:04.798 11:47:34 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:17:04.798 11:47:34 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@55 -- # have_pci_nics=0 00:17:04.798 11:47:34 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@12 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:17:04.798 11:47:34 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@90 -- # check_openssl_version 00:17:04.798 11:47:34 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@84 -- # local 
target=3.0.0 00:17:04.798 11:47:34 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@86 -- # openssl version 00:17:04.798 11:47:34 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@86 -- # awk '{print $2}' 00:17:04.798 11:47:34 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@86 -- # ge 3.1.1 3.0.0 00:17:04.798 11:47:34 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@376 -- # cmp_versions 3.1.1 '>=' 3.0.0 00:17:04.798 11:47:34 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@333 -- # local ver1 ver1_l 00:17:04.798 11:47:34 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@334 -- # local ver2 ver2_l 00:17:04.798 11:47:34 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@336 -- # IFS=.-: 00:17:04.798 11:47:34 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@336 -- # read -ra ver1 00:17:04.798 11:47:34 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@337 -- # IFS=.-: 00:17:04.798 11:47:34 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@337 -- # read -ra ver2 00:17:04.798 11:47:34 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@338 -- # local 'op=>=' 00:17:04.798 11:47:34 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@340 -- # ver1_l=3 00:17:04.798 11:47:34 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@341 -- # ver2_l=3 00:17:04.798 11:47:34 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:17:04.798 11:47:34 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@344 -- # case "$op" in 00:17:04.798 11:47:34 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@348 -- # : 1 00:17:04.798 11:47:34 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@364 -- # (( v = 0 )) 00:17:04.798 11:47:34 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:17:04.798 11:47:34 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@365 -- # decimal 3 00:17:04.798 11:47:34 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@353 -- # local d=3 00:17:04.798 11:47:34 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@354 -- # [[ 3 =~ ^[0-9]+$ ]] 00:17:04.798 11:47:34 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@355 -- # echo 3 00:17:04.798 11:47:34 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@365 -- # ver1[v]=3 00:17:04.798 11:47:34 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@366 -- # decimal 3 00:17:04.798 11:47:34 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@353 -- # local d=3 00:17:04.798 11:47:34 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@354 -- # [[ 3 =~ ^[0-9]+$ ]] 00:17:04.798 11:47:34 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@355 -- # echo 3 00:17:04.798 11:47:34 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@366 -- # ver2[v]=3 00:17:04.798 11:47:34 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:17:04.798 11:47:34 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:17:04.798 11:47:34 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@364 -- # (( v++ )) 00:17:04.798 11:47:34 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:17:04.798 11:47:34 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@365 -- # decimal 1 00:17:04.799 11:47:34 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@353 -- # local d=1 00:17:04.799 11:47:34 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:17:04.799 11:47:34 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@355 -- # echo 1 00:17:04.799 11:47:34 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@365 -- # ver1[v]=1 00:17:04.799 11:47:34 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@366 -- # decimal 0 00:17:04.799 11:47:34 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@353 -- # local d=0 00:17:04.799 11:47:34 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@354 -- # [[ 0 =~ ^[0-9]+$ ]] 00:17:04.799 11:47:34 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@355 -- # echo 0 00:17:04.799 11:47:34 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@366 -- # ver2[v]=0 00:17:04.799 11:47:34 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:17:04.799 11:47:34 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@367 -- # return 0 00:17:04.799 11:47:34 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@96 -- # openssl info -modulesdir 00:17:05.062 11:47:34 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@96 -- # [[ ! -f /usr/lib64/ossl-modules/fips.so ]] 00:17:05.062 11:47:34 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@101 -- # openssl fipsinstall -help 00:17:05.062 11:47:34 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@101 -- # warn='This command is not enabled in the Red Hat Enterprise Linux OpenSSL build, please consult Red Hat documentation to learn how to enable FIPS mode' 00:17:05.062 11:47:34 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@102 -- # [[ This command is not enabled in the Red Hat Enterprise Linux OpenSSL build, please consult Red Hat documentation to learn how to enable FIPS mode == \T\h\i\s\ \c\o\m\m\a\n\d\ \i\s\ \n\o\t\ \e\n\a\b\l\e\d* ]] 00:17:05.062 11:47:34 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@105 -- # export callback=build_openssl_config 00:17:05.062 11:47:34 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@105 -- # callback=build_openssl_config 00:17:05.062 11:47:34 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@114 -- # build_openssl_config 00:17:05.062 11:47:34 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@38 -- # cat 00:17:05.062 11:47:34 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@58 -- # [[ ! 
-t 0 ]] 00:17:05.062 11:47:34 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@59 -- # cat - 00:17:05.062 11:47:34 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@115 -- # export OPENSSL_CONF=spdk_fips.conf 00:17:05.062 11:47:34 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@115 -- # OPENSSL_CONF=spdk_fips.conf 00:17:05.062 11:47:34 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@117 -- # mapfile -t providers 00:17:05.062 11:47:34 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@117 -- # openssl list -providers 00:17:05.062 11:47:34 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@117 -- # grep name 00:17:05.062 11:47:34 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@121 -- # (( 2 != 2 )) 00:17:05.062 11:47:34 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@121 -- # [[ name: openssl base provider != *base* ]] 00:17:05.062 11:47:34 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@121 -- # [[ name: red hat enterprise linux 9 - openssl fips provider != *fips* ]] 00:17:05.062 11:47:34 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@128 -- # NOT openssl md5 /dev/fd/62 00:17:05.062 11:47:34 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@128 -- # : 00:17:05.062 11:47:34 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@652 -- # local es=0 00:17:05.062 11:47:34 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@654 -- # valid_exec_arg openssl md5 /dev/fd/62 00:17:05.062 11:47:34 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@640 -- # local arg=openssl 00:17:05.062 11:47:34 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:17:05.062 11:47:34 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@644 -- # type -t openssl 00:17:05.062 11:47:34 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:17:05.062 11:47:34 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@646 -- # type -P openssl 00:17:05.062 11:47:34 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:17:05.062 11:47:34 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@646 -- # arg=/usr/bin/openssl 00:17:05.062 11:47:34 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@646 -- # [[ -x /usr/bin/openssl ]] 00:17:05.062 11:47:34 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@655 -- # openssl md5 /dev/fd/62 00:17:05.062 Error setting digest 00:17:05.062 40723781F47F0000:error:0308010C:digital envelope routines:inner_evp_generic_fetch:unsupported:crypto/evp/evp_fetch.c:341:Global default library context, Algorithm (MD5 : 95), Properties () 00:17:05.062 40723781F47F0000:error:03000086:digital envelope routines:evp_md_init_internal:initialization error:crypto/evp/digest.c:272: 00:17:05.062 11:47:35 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@655 -- # es=1 00:17:05.062 11:47:35 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:17:05.062 11:47:35 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:17:05.062 11:47:35 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:17:05.062 11:47:35 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@131 -- # nvmftestinit 00:17:05.062 11:47:35 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:17:05.062 
11:47:35 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:17:05.062 11:47:35 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@476 -- # prepare_net_devs 00:17:05.062 11:47:35 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@438 -- # local -g is_hw=no 00:17:05.062 11:47:35 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@440 -- # remove_spdk_ns 00:17:05.062 11:47:35 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:17:05.062 11:47:35 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:17:05.062 11:47:35 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:17:05.062 11:47:35 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@442 -- # [[ virt != virt ]] 00:17:05.062 11:47:35 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@444 -- # [[ no == yes ]] 00:17:05.062 11:47:35 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@451 -- # [[ virt == phy ]] 00:17:05.062 11:47:35 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@454 -- # [[ virt == phy-fallback ]] 00:17:05.062 11:47:35 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@459 -- # [[ tcp == tcp ]] 00:17:05.062 11:47:35 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@460 -- # nvmf_veth_init 00:17:05.062 11:47:35 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@145 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:17:05.062 11:47:35 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@146 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.2 00:17:05.062 11:47:35 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@147 -- # NVMF_FIRST_TARGET_IP=10.0.0.3 00:17:05.062 11:47:35 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@148 -- # NVMF_SECOND_TARGET_IP=10.0.0.4 00:17:05.062 11:47:35 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@149 -- # NVMF_INITIATOR_IP=10.0.0.1 00:17:05.062 11:47:35 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@150 -- # NVMF_BRIDGE=nvmf_br 00:17:05.062 11:47:35 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@151 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:17:05.062 11:47:35 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@152 -- # NVMF_INITIATOR_INTERFACE2=nvmf_init_if2 00:17:05.062 11:47:35 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@153 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:17:05.062 11:47:35 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@154 -- # NVMF_INITIATOR_BRIDGE2=nvmf_init_br2 00:17:05.062 11:47:35 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@155 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:17:05.062 11:47:35 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@156 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:17:05.062 11:47:35 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@157 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:17:05.062 11:47:35 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@158 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:17:05.062 11:47:35 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@159 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:17:05.062 11:47:35 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@160 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:17:05.062 11:47:35 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@162 -- # ip link set nvmf_init_br nomaster 00:17:05.062 Cannot find device "nvmf_init_br" 00:17:05.062 11:47:35 
nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@162 -- # true 00:17:05.062 11:47:35 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@163 -- # ip link set nvmf_init_br2 nomaster 00:17:05.062 Cannot find device "nvmf_init_br2" 00:17:05.062 11:47:35 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@163 -- # true 00:17:05.062 11:47:35 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@164 -- # ip link set nvmf_tgt_br nomaster 00:17:05.062 Cannot find device "nvmf_tgt_br" 00:17:05.062 11:47:35 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@164 -- # true 00:17:05.062 11:47:35 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@165 -- # ip link set nvmf_tgt_br2 nomaster 00:17:05.062 Cannot find device "nvmf_tgt_br2" 00:17:05.062 11:47:35 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@165 -- # true 00:17:05.062 11:47:35 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@166 -- # ip link set nvmf_init_br down 00:17:05.062 Cannot find device "nvmf_init_br" 00:17:05.062 11:47:35 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@166 -- # true 00:17:05.062 11:47:35 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@167 -- # ip link set nvmf_init_br2 down 00:17:05.062 Cannot find device "nvmf_init_br2" 00:17:05.062 11:47:35 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@167 -- # true 00:17:05.062 11:47:35 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@168 -- # ip link set nvmf_tgt_br down 00:17:05.062 Cannot find device "nvmf_tgt_br" 00:17:05.062 11:47:35 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@168 -- # true 00:17:05.062 11:47:35 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@169 -- # ip link set nvmf_tgt_br2 down 00:17:05.062 Cannot find device "nvmf_tgt_br2" 00:17:05.062 11:47:35 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@169 -- # true 00:17:05.062 11:47:35 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@170 -- # ip link delete nvmf_br type bridge 00:17:05.062 Cannot find device "nvmf_br" 00:17:05.062 11:47:35 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@170 -- # true 00:17:05.062 11:47:35 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@171 -- # ip link delete nvmf_init_if 00:17:05.062 Cannot find device "nvmf_init_if" 00:17:05.062 11:47:35 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@171 -- # true 00:17:05.062 11:47:35 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@172 -- # ip link delete nvmf_init_if2 00:17:05.062 Cannot find device "nvmf_init_if2" 00:17:05.062 11:47:35 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@172 -- # true 00:17:05.062 11:47:35 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@173 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:17:05.062 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:17:05.062 11:47:35 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@173 -- # true 00:17:05.062 11:47:35 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@174 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:17:05.062 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:17:05.062 11:47:35 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@174 -- # true 00:17:05.062 11:47:35 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@177 -- # ip netns add nvmf_tgt_ns_spdk 00:17:05.322 11:47:35 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@180 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:17:05.322 11:47:35 
nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@181 -- # ip link add nvmf_init_if2 type veth peer name nvmf_init_br2 00:17:05.322 11:47:35 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@182 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:17:05.322 11:47:35 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@183 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:17:05.322 11:47:35 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:17:05.322 11:47:35 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@187 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:17:05.322 11:47:35 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@190 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:17:05.322 11:47:35 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@191 -- # ip addr add 10.0.0.2/24 dev nvmf_init_if2 00:17:05.322 11:47:35 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@192 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if 00:17:05.322 11:47:35 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@193 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2 00:17:05.322 11:47:35 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@196 -- # ip link set nvmf_init_if up 00:17:05.322 11:47:35 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@197 -- # ip link set nvmf_init_if2 up 00:17:05.322 11:47:35 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@198 -- # ip link set nvmf_init_br up 00:17:05.322 11:47:35 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@199 -- # ip link set nvmf_init_br2 up 00:17:05.322 11:47:35 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@200 -- # ip link set nvmf_tgt_br up 00:17:05.322 11:47:35 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@201 -- # ip link set nvmf_tgt_br2 up 00:17:05.322 11:47:35 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@202 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:17:05.322 11:47:35 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@203 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:17:05.322 11:47:35 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@204 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:17:05.322 11:47:35 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@207 -- # ip link add nvmf_br type bridge 00:17:05.322 11:47:35 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@208 -- # ip link set nvmf_br up 00:17:05.322 11:47:35 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@211 -- # ip link set nvmf_init_br master nvmf_br 00:17:05.322 11:47:35 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@212 -- # ip link set nvmf_init_br2 master nvmf_br 00:17:05.322 11:47:35 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@213 -- # ip link set nvmf_tgt_br master nvmf_br 00:17:05.322 11:47:35 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@214 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:17:05.322 11:47:35 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@217 -- # ipts -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:17:05.322 11:47:35 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT' 00:17:05.322 11:47:35 nvmf_tcp.nvmf_target_extra.nvmf_fips -- 
nvmf/common.sh@218 -- # ipts -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT 00:17:05.322 11:47:35 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT' 00:17:05.322 11:47:35 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@219 -- # ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:17:05.322 11:47:35 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@790 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT' 00:17:05.322 11:47:35 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@222 -- # ping -c 1 10.0.0.3 00:17:05.322 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:17:05.322 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.057 ms 00:17:05.322 00:17:05.322 --- 10.0.0.3 ping statistics --- 00:17:05.322 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:17:05.322 rtt min/avg/max/mdev = 0.057/0.057/0.057/0.000 ms 00:17:05.322 11:47:35 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@223 -- # ping -c 1 10.0.0.4 00:17:05.322 PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data. 00:17:05.322 64 bytes from 10.0.0.4: icmp_seq=1 ttl=64 time=0.112 ms 00:17:05.322 00:17:05.322 --- 10.0.0.4 ping statistics --- 00:17:05.322 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:17:05.322 rtt min/avg/max/mdev = 0.112/0.112/0.112/0.000 ms 00:17:05.322 11:47:35 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@224 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:17:05.586 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:17:05.586 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.039 ms 00:17:05.586 00:17:05.586 --- 10.0.0.1 ping statistics --- 00:17:05.586 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:17:05.586 rtt min/avg/max/mdev = 0.039/0.039/0.039/0.000 ms 00:17:05.586 11:47:35 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@225 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.2 00:17:05.586 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:17:05.586 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.081 ms 00:17:05.586 00:17:05.586 --- 10.0.0.2 ping statistics --- 00:17:05.586 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:17:05.586 rtt min/avg/max/mdev = 0.081/0.081/0.081/0.000 ms 00:17:05.586 11:47:35 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@227 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:17:05.586 11:47:35 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@461 -- # return 0 00:17:05.586 11:47:35 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:17:05.586 11:47:35 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:17:05.586 11:47:35 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:17:05.586 11:47:35 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:17:05.586 11:47:35 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:17:05.586 11:47:35 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:17:05.586 11:47:35 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:17:05.586 11:47:35 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@132 -- # nvmfappstart -m 0x2 00:17:05.586 11:47:35 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:17:05.586 11:47:35 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@726 -- # xtrace_disable 00:17:05.586 11:47:35 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@10 -- # set +x 00:17:05.586 11:47:35 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@509 -- # nvmfpid=86969 00:17:05.586 11:47:35 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@510 -- # waitforlisten 86969 00:17:05.586 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:17:05.586 11:47:35 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@835 -- # '[' -z 86969 ']' 00:17:05.586 11:47:35 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@508 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:17:05.586 11:47:35 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:17:05.586 11:47:35 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@840 -- # local max_retries=100 00:17:05.586 11:47:35 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:17:05.586 11:47:35 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@844 -- # xtrace_disable 00:17:05.586 11:47:35 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@10 -- # set +x 00:17:05.586 [2024-11-28 11:47:35.582325] Starting SPDK v25.01-pre git sha1 35cd3e84d / DPDK 24.11.0-rc4 initialization... 00:17:05.586 [2024-11-28 11:47:35.582438] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:17:05.845 [2024-11-28 11:47:35.711000] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.11.0-rc4 is used. There is no support for it in SPDK. 
Enabled only for validation. 00:17:05.845 [2024-11-28 11:47:35.735659] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:05.845 [2024-11-28 11:47:35.799483] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:17:05.845 [2024-11-28 11:47:35.799548] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:17:05.845 [2024-11-28 11:47:35.799559] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:17:05.845 [2024-11-28 11:47:35.799567] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:17:05.845 [2024-11-28 11:47:35.799574] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:17:05.845 [2024-11-28 11:47:35.800010] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:17:05.845 [2024-11-28 11:47:35.875541] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:17:06.779 11:47:36 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:17:06.779 11:47:36 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@868 -- # return 0 00:17:06.779 11:47:36 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:17:06.779 11:47:36 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@732 -- # xtrace_disable 00:17:06.779 11:47:36 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@10 -- # set +x 00:17:06.780 11:47:36 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:17:06.780 11:47:36 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@134 -- # trap cleanup EXIT 00:17:06.780 11:47:36 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@137 -- # key=NVMeTLSkey-1:01:VRLbtnN9AQb2WXW3c9+wEf/DRLz0QuLdbYvEhwtdWwNf9LrZ: 00:17:06.780 11:47:36 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@138 -- # mktemp -t spdk-psk.XXX 00:17:06.780 11:47:36 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@138 -- # key_path=/tmp/spdk-psk.qsy 00:17:06.780 11:47:36 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@139 -- # echo -n NVMeTLSkey-1:01:VRLbtnN9AQb2WXW3c9+wEf/DRLz0QuLdbYvEhwtdWwNf9LrZ: 00:17:06.780 11:47:36 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@140 -- # chmod 0600 /tmp/spdk-psk.qsy 00:17:06.780 11:47:36 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@142 -- # setup_nvmf_tgt_conf /tmp/spdk-psk.qsy 00:17:06.780 11:47:36 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@22 -- # local key=/tmp/spdk-psk.qsy 00:17:06.780 11:47:36 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@24 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:17:07.038 [2024-11-28 11:47:36.958043] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:17:07.038 [2024-11-28 11:47:36.973986] tcp.c:1031:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:17:07.038 [2024-11-28 11:47:36.974529] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:17:07.038 malloc0 00:17:07.038 11:47:37 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@145 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:17:07.038 11:47:37 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@148 -- # bdevperf_pid=87011 00:17:07.038 11:47:37 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@146 -- # 
/home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:17:07.038 11:47:37 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@149 -- # waitforlisten 87011 /var/tmp/bdevperf.sock 00:17:07.038 11:47:37 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@835 -- # '[' -z 87011 ']' 00:17:07.038 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:17:07.038 11:47:37 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:17:07.038 11:47:37 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@840 -- # local max_retries=100 00:17:07.038 11:47:37 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:17:07.038 11:47:37 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@844 -- # xtrace_disable 00:17:07.038 11:47:37 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@10 -- # set +x 00:17:07.038 [2024-11-28 11:47:37.140709] Starting SPDK v25.01-pre git sha1 35cd3e84d / DPDK 24.11.0-rc4 initialization... 00:17:07.038 [2024-11-28 11:47:37.141205] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid87011 ] 00:17:07.297 [2024-11-28 11:47:37.269180] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.11.0-rc4 is used. There is no support for it in SPDK. Enabled only for validation. 00:17:07.297 [2024-11-28 11:47:37.301581] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:07.297 [2024-11-28 11:47:37.359423] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:17:07.556 [2024-11-28 11:47:37.426744] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:17:08.123 11:47:38 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:17:08.123 11:47:38 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@868 -- # return 0 00:17:08.123 11:47:38 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@151 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/spdk-psk.qsy 00:17:08.383 11:47:38 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@152 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk key0 00:17:08.641 [2024-11-28 11:47:38.552293] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:17:08.641 TLSTESTn1 00:17:08.641 11:47:38 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@156 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:17:08.641 Running I/O for 10 seconds... 
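Condensed from the trace above, the initiator-side TLS attach in this run boils down to three calls against the bdevperf RPC socket (same key path, address, and NQNs as above; rpc.py and bdevperf.py abbreviate the full /home/vagrant/spdk_repo/spdk/scripts and examples/bdev/bdevperf paths):

# register the PSK file created by fips.sh as key0 on the initiator side
rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/spdk-psk.qsy
# attach the controller over NVMe/TCP using that key for TLS
rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp \
    -a 10.0.0.3 -s 4420 -f ipv4 \
    -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk key0
# kick off the verify workload defined on the bdevperf command line
bdevperf.py -s /var/tmp/bdevperf.sock perform_tests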
00:17:10.955 4045.00 IOPS, 15.80 MiB/s [2024-11-28T11:47:42.018Z] 4083.50 IOPS, 15.95 MiB/s [2024-11-28T11:47:42.989Z] 4086.33 IOPS, 15.96 MiB/s [2024-11-28T11:47:43.925Z] 4091.75 IOPS, 15.98 MiB/s [2024-11-28T11:47:44.863Z] 4096.80 IOPS, 16.00 MiB/s [2024-11-28T11:47:45.799Z] 4077.83 IOPS, 15.93 MiB/s [2024-11-28T11:47:46.850Z] 4086.43 IOPS, 15.96 MiB/s [2024-11-28T11:47:47.794Z] 4090.12 IOPS, 15.98 MiB/s [2024-11-28T11:47:49.178Z] 4095.78 IOPS, 16.00 MiB/s [2024-11-28T11:47:49.178Z] 4098.10 IOPS, 16.01 MiB/s 00:17:19.052 Latency(us) 00:17:19.052 [2024-11-28T11:47:49.178Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:17:19.052 Job: TLSTESTn1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096) 00:17:19.052 Verification LBA range: start 0x0 length 0x2000 00:17:19.052 TLSTESTn1 : 10.01 4104.29 16.03 0.00 0.00 31134.82 4259.84 34317.03 00:17:19.052 [2024-11-28T11:47:49.178Z] =================================================================================================================== 00:17:19.052 [2024-11-28T11:47:49.178Z] Total : 4104.29 16.03 0.00 0.00 31134.82 4259.84 34317.03 00:17:19.052 { 00:17:19.052 "results": [ 00:17:19.052 { 00:17:19.052 "job": "TLSTESTn1", 00:17:19.052 "core_mask": "0x4", 00:17:19.052 "workload": "verify", 00:17:19.052 "status": "finished", 00:17:19.052 "verify_range": { 00:17:19.052 "start": 0, 00:17:19.052 "length": 8192 00:17:19.052 }, 00:17:19.052 "queue_depth": 128, 00:17:19.052 "io_size": 4096, 00:17:19.052 "runtime": 10.014889, 00:17:19.052 "iops": 4104.28912392339, 00:17:19.052 "mibps": 16.032379390325744, 00:17:19.052 "io_failed": 0, 00:17:19.052 "io_timeout": 0, 00:17:19.052 "avg_latency_us": 31134.823816837114, 00:17:19.052 "min_latency_us": 4259.84, 00:17:19.052 "max_latency_us": 34317.03272727273 00:17:19.052 } 00:17:19.052 ], 00:17:19.052 "core_count": 1 00:17:19.052 } 00:17:19.052 11:47:48 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@1 -- # cleanup 00:17:19.052 11:47:48 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@15 -- # process_shm --id 0 00:17:19.052 11:47:48 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@812 -- # type=--id 00:17:19.052 11:47:48 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@813 -- # id=0 00:17:19.052 11:47:48 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@814 -- # '[' --id = --pid ']' 00:17:19.052 11:47:48 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@818 -- # find /dev/shm -name '*.0' -printf '%f\n' 00:17:19.052 11:47:48 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@818 -- # shm_files=nvmf_trace.0 00:17:19.052 11:47:48 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@820 -- # [[ -z nvmf_trace.0 ]] 00:17:19.052 11:47:48 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@824 -- # for n in $shm_files 00:17:19.052 11:47:48 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@825 -- # tar -C /dev/shm/ -cvzf /home/vagrant/spdk_repo/spdk/../output/nvmf_trace.0_shm.tar.gz nvmf_trace.0 00:17:19.052 nvmf_trace.0 00:17:19.052 11:47:48 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@827 -- # return 0 00:17:19.052 11:47:48 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@16 -- # killprocess 87011 00:17:19.052 11:47:48 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@954 -- # '[' -z 87011 ']' 00:17:19.052 11:47:48 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@958 -- # kill -0 87011 
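The results blob printed above is machine-readable JSON. Purely as an illustration (jq is not part of this test run, and results.json is a hypothetical file holding that blob), the headline numbers could be pulled out like this:

jq -r '.results[] | "\(.job): \(.iops|floor) IOPS, avg latency \(.avg_latency_us|floor) us"' results.json
# -> TLSTESTn1: 4104 IOPS, avg latency 31134 us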
00:17:19.052 11:47:48 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@959 -- # uname 00:17:19.052 11:47:48 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:17:19.052 11:47:48 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 87011 00:17:19.052 killing process with pid 87011 00:17:19.052 Received shutdown signal, test time was about 10.000000 seconds 00:17:19.052 00:17:19.052 Latency(us) 00:17:19.052 [2024-11-28T11:47:49.178Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:17:19.052 [2024-11-28T11:47:49.178Z] =================================================================================================================== 00:17:19.052 [2024-11-28T11:47:49.178Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:17:19.052 11:47:48 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@960 -- # process_name=reactor_2 00:17:19.052 11:47:48 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@964 -- # '[' reactor_2 = sudo ']' 00:17:19.052 11:47:48 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@972 -- # echo 'killing process with pid 87011' 00:17:19.052 11:47:48 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@973 -- # kill 87011 00:17:19.052 11:47:48 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@978 -- # wait 87011 00:17:19.052 11:47:49 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@17 -- # nvmftestfini 00:17:19.052 11:47:49 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@516 -- # nvmfcleanup 00:17:19.052 11:47:49 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@121 -- # sync 00:17:19.052 11:47:49 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:17:19.052 11:47:49 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@124 -- # set +e 00:17:19.052 11:47:49 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@125 -- # for i in {1..20} 00:17:19.052 11:47:49 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:17:19.052 rmmod nvme_tcp 00:17:19.312 rmmod nvme_fabrics 00:17:19.312 rmmod nvme_keyring 00:17:19.312 11:47:49 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:17:19.312 11:47:49 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@128 -- # set -e 00:17:19.312 11:47:49 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@129 -- # return 0 00:17:19.312 11:47:49 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@517 -- # '[' -n 86969 ']' 00:17:19.312 11:47:49 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@518 -- # killprocess 86969 00:17:19.312 11:47:49 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@954 -- # '[' -z 86969 ']' 00:17:19.312 11:47:49 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@958 -- # kill -0 86969 00:17:19.312 11:47:49 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@959 -- # uname 00:17:19.312 11:47:49 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:17:19.312 11:47:49 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 86969 00:17:19.312 killing process with pid 86969 00:17:19.312 11:47:49 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:17:19.312 11:47:49 nvmf_tcp.nvmf_target_extra.nvmf_fips -- 
common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:17:19.312 11:47:49 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@972 -- # echo 'killing process with pid 86969' 00:17:19.312 11:47:49 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@973 -- # kill 86969 00:17:19.312 11:47:49 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@978 -- # wait 86969 00:17:19.571 11:47:49 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:17:19.571 11:47:49 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:17:19.571 11:47:49 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:17:19.571 11:47:49 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@297 -- # iptr 00:17:19.571 11:47:49 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@791 -- # iptables-save 00:17:19.571 11:47:49 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:17:19.571 11:47:49 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@791 -- # iptables-restore 00:17:19.571 11:47:49 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@298 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:17:19.571 11:47:49 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@299 -- # nvmf_veth_fini 00:17:19.571 11:47:49 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@233 -- # ip link set nvmf_init_br nomaster 00:17:19.571 11:47:49 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@234 -- # ip link set nvmf_init_br2 nomaster 00:17:19.571 11:47:49 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@235 -- # ip link set nvmf_tgt_br nomaster 00:17:19.571 11:47:49 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@236 -- # ip link set nvmf_tgt_br2 nomaster 00:17:19.571 11:47:49 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@237 -- # ip link set nvmf_init_br down 00:17:19.571 11:47:49 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@238 -- # ip link set nvmf_init_br2 down 00:17:19.571 11:47:49 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@239 -- # ip link set nvmf_tgt_br down 00:17:19.571 11:47:49 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@240 -- # ip link set nvmf_tgt_br2 down 00:17:19.571 11:47:49 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@241 -- # ip link delete nvmf_br type bridge 00:17:19.572 11:47:49 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@242 -- # ip link delete nvmf_init_if 00:17:19.572 11:47:49 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@243 -- # ip link delete nvmf_init_if2 00:17:19.572 11:47:49 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@244 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:17:19.831 11:47:49 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@245 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:17:19.831 11:47:49 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@246 -- # remove_spdk_ns 00:17:19.831 11:47:49 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:17:19.831 11:47:49 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:17:19.831 11:47:49 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:17:19.831 11:47:49 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@300 -- # return 0 00:17:19.831 11:47:49 
nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@18 -- # rm -f /tmp/spdk-psk.qsy 00:17:19.831 ************************************ 00:17:19.831 END TEST nvmf_fips 00:17:19.831 ************************************ 00:17:19.831 00:17:19.831 real 0m15.103s 00:17:19.831 user 0m20.235s 00:17:19.831 sys 0m6.358s 00:17:19.831 11:47:49 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@1130 -- # xtrace_disable 00:17:19.831 11:47:49 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@10 -- # set +x 00:17:19.831 11:47:49 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@43 -- # run_test nvmf_control_msg_list /home/vagrant/spdk_repo/spdk/test/nvmf/target/control_msg_list.sh --transport=tcp 00:17:19.831 11:47:49 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:17:19.831 11:47:49 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:17:19.831 11:47:49 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:17:19.831 ************************************ 00:17:19.831 START TEST nvmf_control_msg_list 00:17:19.831 ************************************ 00:17:19.831 11:47:49 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/control_msg_list.sh --transport=tcp 00:17:19.831 * Looking for test storage... 00:17:19.831 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:17:19.831 11:47:49 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:17:19.831 11:47:49 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@1693 -- # lcov --version 00:17:19.832 11:47:49 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:17:20.093 11:47:49 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:17:20.093 11:47:49 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:17:20.093 11:47:49 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@333 -- # local ver1 ver1_l 00:17:20.093 11:47:49 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@334 -- # local ver2 ver2_l 00:17:20.093 11:47:49 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@336 -- # IFS=.-: 00:17:20.093 11:47:49 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@336 -- # read -ra ver1 00:17:20.093 11:47:49 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@337 -- # IFS=.-: 00:17:20.093 11:47:49 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@337 -- # read -ra ver2 00:17:20.093 11:47:49 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@338 -- # local 'op=<' 00:17:20.093 11:47:49 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@340 -- # ver1_l=2 00:17:20.093 11:47:49 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@341 -- # ver2_l=1 00:17:20.093 11:47:49 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:17:20.093 11:47:49 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@344 -- # case "$op" in 00:17:20.093 11:47:49 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@345 -- # : 1 00:17:20.093 11:47:49 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- 
scripts/common.sh@364 -- # (( v = 0 )) 00:17:20.093 11:47:49 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:17:20.093 11:47:49 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@365 -- # decimal 1 00:17:20.093 11:47:49 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@353 -- # local d=1 00:17:20.093 11:47:49 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:17:20.093 11:47:50 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@355 -- # echo 1 00:17:20.093 11:47:50 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@365 -- # ver1[v]=1 00:17:20.093 11:47:50 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@366 -- # decimal 2 00:17:20.093 11:47:50 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@353 -- # local d=2 00:17:20.093 11:47:50 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:17:20.093 11:47:50 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@355 -- # echo 2 00:17:20.093 11:47:50 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@366 -- # ver2[v]=2 00:17:20.093 11:47:50 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:17:20.093 11:47:50 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:17:20.093 11:47:50 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@368 -- # return 0 00:17:20.093 11:47:50 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:17:20.093 11:47:50 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:17:20.093 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:20.093 --rc genhtml_branch_coverage=1 00:17:20.093 --rc genhtml_function_coverage=1 00:17:20.093 --rc genhtml_legend=1 00:17:20.093 --rc geninfo_all_blocks=1 00:17:20.093 --rc geninfo_unexecuted_blocks=1 00:17:20.093 00:17:20.093 ' 00:17:20.093 11:47:50 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:17:20.093 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:20.093 --rc genhtml_branch_coverage=1 00:17:20.093 --rc genhtml_function_coverage=1 00:17:20.093 --rc genhtml_legend=1 00:17:20.093 --rc geninfo_all_blocks=1 00:17:20.093 --rc geninfo_unexecuted_blocks=1 00:17:20.093 00:17:20.093 ' 00:17:20.093 11:47:50 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:17:20.093 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:20.093 --rc genhtml_branch_coverage=1 00:17:20.093 --rc genhtml_function_coverage=1 00:17:20.093 --rc genhtml_legend=1 00:17:20.093 --rc geninfo_all_blocks=1 00:17:20.093 --rc geninfo_unexecuted_blocks=1 00:17:20.093 00:17:20.093 ' 00:17:20.093 11:47:50 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:17:20.093 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:20.093 --rc genhtml_branch_coverage=1 00:17:20.093 --rc genhtml_function_coverage=1 00:17:20.093 --rc genhtml_legend=1 00:17:20.093 --rc geninfo_all_blocks=1 00:17:20.093 --rc 
geninfo_unexecuted_blocks=1 00:17:20.093 00:17:20.093 ' 00:17:20.093 11:47:50 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:17:20.093 11:47:50 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@7 -- # uname -s 00:17:20.093 11:47:50 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:17:20.093 11:47:50 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:17:20.093 11:47:50 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:17:20.093 11:47:50 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:17:20.093 11:47:50 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:17:20.093 11:47:50 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:17:20.093 11:47:50 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:17:20.093 11:47:50 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:17:20.093 11:47:50 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:17:20.093 11:47:50 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:17:20.093 11:47:50 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:f820f793-c892-4aa4-a8a4-5ed3fda41d6c 00:17:20.093 11:47:50 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@18 -- # NVME_HOSTID=f820f793-c892-4aa4-a8a4-5ed3fda41d6c 00:17:20.093 11:47:50 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:17:20.094 11:47:50 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:17:20.094 11:47:50 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:17:20.094 11:47:50 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:17:20.094 11:47:50 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:17:20.094 11:47:50 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@15 -- # shopt -s extglob 00:17:20.094 11:47:50 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:17:20.094 11:47:50 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:17:20.094 11:47:50 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:17:20.094 11:47:50 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:20.094 11:47:50 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:20.094 11:47:50 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:20.094 11:47:50 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- paths/export.sh@5 -- # export PATH 00:17:20.094 11:47:50 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:20.094 11:47:50 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@51 -- # : 0 00:17:20.094 11:47:50 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:17:20.094 11:47:50 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:17:20.094 11:47:50 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:17:20.094 11:47:50 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:17:20.094 11:47:50 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:17:20.094 11:47:50 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list 
-- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:17:20.094 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:17:20.094 11:47:50 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:17:20.094 11:47:50 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:17:20.094 11:47:50 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@55 -- # have_pci_nics=0 00:17:20.094 11:47:50 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@12 -- # nvmftestinit 00:17:20.094 11:47:50 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:17:20.094 11:47:50 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:17:20.094 11:47:50 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@476 -- # prepare_net_devs 00:17:20.094 11:47:50 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@438 -- # local -g is_hw=no 00:17:20.094 11:47:50 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@440 -- # remove_spdk_ns 00:17:20.094 11:47:50 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:17:20.094 11:47:50 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:17:20.094 11:47:50 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:17:20.094 11:47:50 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@442 -- # [[ virt != virt ]] 00:17:20.094 11:47:50 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@444 -- # [[ no == yes ]] 00:17:20.094 11:47:50 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@451 -- # [[ virt == phy ]] 00:17:20.094 11:47:50 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@454 -- # [[ virt == phy-fallback ]] 00:17:20.094 11:47:50 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@459 -- # [[ tcp == tcp ]] 00:17:20.094 11:47:50 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@460 -- # nvmf_veth_init 00:17:20.094 11:47:50 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@145 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:17:20.094 11:47:50 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@146 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.2 00:17:20.094 11:47:50 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@147 -- # NVMF_FIRST_TARGET_IP=10.0.0.3 00:17:20.094 11:47:50 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@148 -- # NVMF_SECOND_TARGET_IP=10.0.0.4 00:17:20.094 11:47:50 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@149 -- # NVMF_INITIATOR_IP=10.0.0.1 00:17:20.094 11:47:50 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@150 -- # NVMF_BRIDGE=nvmf_br 00:17:20.094 11:47:50 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@151 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:17:20.094 11:47:50 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@152 -- # NVMF_INITIATOR_INTERFACE2=nvmf_init_if2 00:17:20.094 11:47:50 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@153 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:17:20.094 11:47:50 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- 
nvmf/common.sh@154 -- # NVMF_INITIATOR_BRIDGE2=nvmf_init_br2 00:17:20.094 11:47:50 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@155 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:17:20.094 11:47:50 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@156 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:17:20.094 11:47:50 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@157 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:17:20.094 11:47:50 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@158 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:17:20.094 11:47:50 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@159 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:17:20.094 11:47:50 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@160 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:17:20.094 11:47:50 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@162 -- # ip link set nvmf_init_br nomaster 00:17:20.094 Cannot find device "nvmf_init_br" 00:17:20.094 11:47:50 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@162 -- # true 00:17:20.094 11:47:50 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@163 -- # ip link set nvmf_init_br2 nomaster 00:17:20.094 Cannot find device "nvmf_init_br2" 00:17:20.094 11:47:50 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@163 -- # true 00:17:20.094 11:47:50 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@164 -- # ip link set nvmf_tgt_br nomaster 00:17:20.094 Cannot find device "nvmf_tgt_br" 00:17:20.094 11:47:50 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@164 -- # true 00:17:20.094 11:47:50 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@165 -- # ip link set nvmf_tgt_br2 nomaster 00:17:20.094 Cannot find device "nvmf_tgt_br2" 00:17:20.095 11:47:50 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@165 -- # true 00:17:20.095 11:47:50 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@166 -- # ip link set nvmf_init_br down 00:17:20.095 Cannot find device "nvmf_init_br" 00:17:20.095 11:47:50 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@166 -- # true 00:17:20.095 11:47:50 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@167 -- # ip link set nvmf_init_br2 down 00:17:20.095 Cannot find device "nvmf_init_br2" 00:17:20.095 11:47:50 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@167 -- # true 00:17:20.095 11:47:50 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@168 -- # ip link set nvmf_tgt_br down 00:17:20.095 Cannot find device "nvmf_tgt_br" 00:17:20.095 11:47:50 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@168 -- # true 00:17:20.095 11:47:50 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@169 -- # ip link set nvmf_tgt_br2 down 00:17:20.095 Cannot find device "nvmf_tgt_br2" 00:17:20.095 11:47:50 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@169 -- # true 00:17:20.095 11:47:50 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@170 -- # ip link delete nvmf_br type bridge 00:17:20.095 Cannot find device "nvmf_br" 00:17:20.095 11:47:50 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@170 -- # true 00:17:20.095 11:47:50 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@171 -- # ip link delete nvmf_init_if 00:17:20.095 Cannot find 
device "nvmf_init_if" 00:17:20.095 11:47:50 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@171 -- # true 00:17:20.095 11:47:50 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@172 -- # ip link delete nvmf_init_if2 00:17:20.095 Cannot find device "nvmf_init_if2" 00:17:20.095 11:47:50 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@172 -- # true 00:17:20.095 11:47:50 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@173 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:17:20.095 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:17:20.095 11:47:50 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@173 -- # true 00:17:20.095 11:47:50 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@174 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:17:20.095 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:17:20.095 11:47:50 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@174 -- # true 00:17:20.095 11:47:50 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@177 -- # ip netns add nvmf_tgt_ns_spdk 00:17:20.095 11:47:50 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@180 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:17:20.095 11:47:50 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@181 -- # ip link add nvmf_init_if2 type veth peer name nvmf_init_br2 00:17:20.095 11:47:50 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@182 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:17:20.095 11:47:50 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@183 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:17:20.095 11:47:50 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:17:20.355 11:47:50 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@187 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:17:20.355 11:47:50 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@190 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:17:20.355 11:47:50 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@191 -- # ip addr add 10.0.0.2/24 dev nvmf_init_if2 00:17:20.355 11:47:50 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@192 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if 00:17:20.355 11:47:50 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@193 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2 00:17:20.355 11:47:50 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@196 -- # ip link set nvmf_init_if up 00:17:20.355 11:47:50 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@197 -- # ip link set nvmf_init_if2 up 00:17:20.355 11:47:50 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@198 -- # ip link set nvmf_init_br up 00:17:20.355 11:47:50 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@199 -- # ip link set nvmf_init_br2 up 00:17:20.355 11:47:50 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@200 -- # ip link set nvmf_tgt_br up 00:17:20.355 11:47:50 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@201 -- # ip link set nvmf_tgt_br2 up 00:17:20.355 11:47:50 
nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@202 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:17:20.355 11:47:50 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@203 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:17:20.355 11:47:50 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@204 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:17:20.355 11:47:50 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@207 -- # ip link add nvmf_br type bridge 00:17:20.355 11:47:50 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@208 -- # ip link set nvmf_br up 00:17:20.355 11:47:50 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@211 -- # ip link set nvmf_init_br master nvmf_br 00:17:20.355 11:47:50 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@212 -- # ip link set nvmf_init_br2 master nvmf_br 00:17:20.355 11:47:50 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@213 -- # ip link set nvmf_tgt_br master nvmf_br 00:17:20.355 11:47:50 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@214 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:17:20.355 11:47:50 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@217 -- # ipts -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:17:20.355 11:47:50 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT' 00:17:20.355 11:47:50 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@218 -- # ipts -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT 00:17:20.355 11:47:50 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT' 00:17:20.355 11:47:50 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@219 -- # ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:17:20.355 11:47:50 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@790 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT' 00:17:20.355 11:47:50 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@222 -- # ping -c 1 10.0.0.3 00:17:20.355 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:17:20.355 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.096 ms 00:17:20.355 00:17:20.355 --- 10.0.0.3 ping statistics --- 00:17:20.355 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:17:20.355 rtt min/avg/max/mdev = 0.096/0.096/0.096/0.000 ms 00:17:20.355 11:47:50 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@223 -- # ping -c 1 10.0.0.4 00:17:20.355 PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data. 00:17:20.355 64 bytes from 10.0.0.4: icmp_seq=1 ttl=64 time=0.055 ms 00:17:20.355 00:17:20.355 --- 10.0.0.4 ping statistics --- 00:17:20.355 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:17:20.355 rtt min/avg/max/mdev = 0.055/0.055/0.055/0.000 ms 00:17:20.355 11:47:50 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@224 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:17:20.355 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:17:20.355 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.048 ms 00:17:20.355 00:17:20.355 --- 10.0.0.1 ping statistics --- 00:17:20.355 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:17:20.355 rtt min/avg/max/mdev = 0.048/0.048/0.048/0.000 ms 00:17:20.355 11:47:50 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@225 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.2 00:17:20.355 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:17:20.355 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.064 ms 00:17:20.355 00:17:20.355 --- 10.0.0.2 ping statistics --- 00:17:20.355 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:17:20.355 rtt min/avg/max/mdev = 0.064/0.064/0.064/0.000 ms 00:17:20.355 11:47:50 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@227 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:17:20.355 11:47:50 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@461 -- # return 0 00:17:20.355 11:47:50 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:17:20.355 11:47:50 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:17:20.355 11:47:50 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:17:20.355 11:47:50 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:17:20.355 11:47:50 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:17:20.355 11:47:50 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:17:20.355 11:47:50 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:17:20.355 11:47:50 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@13 -- # nvmfappstart 00:17:20.355 11:47:50 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:17:20.355 11:47:50 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@726 -- # xtrace_disable 00:17:20.355 11:47:50 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@10 -- # set +x 00:17:20.355 11:47:50 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@509 -- # nvmfpid=87408 00:17:20.355 11:47:50 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@510 -- # waitforlisten 87408 00:17:20.355 11:47:50 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@835 -- # '[' -z 87408 ']' 00:17:20.355 11:47:50 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:17:20.355 11:47:50 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@840 -- # local max_retries=100 00:17:20.355 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:17:20.355 11:47:50 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@508 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF 00:17:20.355 11:47:50 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
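For readability, the network bring-up traced above by nvmf_veth_init can be condensed into the hand-written bash sketch below. It keeps only the first initiator/target veth pair (the trace also creates nvmf_init_if2/nvmf_tgt_if2 the same way) and reuses the interface names and addresses shown in the trace; it is a summary of the traced commands, not a substitute for nvmf/common.sh.

# sketch only -- assumes root and iproute2; names/addresses as traced above
ip netns add nvmf_tgt_ns_spdk                                    # target runs in its own namespace
ip link add nvmf_init_if type veth peer name nvmf_init_br        # initiator-side veth pair
ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br          # target-side veth pair
ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk                   # move the target end into the namespace
ip addr add 10.0.0.1/24 dev nvmf_init_if                         # initiator address
ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if   # target address
ip link set nvmf_init_if up; ip link set nvmf_init_br up
ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
ip link add nvmf_br type bridge; ip link set nvmf_br up
ip link set nvmf_init_br master nvmf_br                          # bridge the two host-side veth ends
ip link set nvmf_tgt_br master nvmf_br
iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT   # admit NVMe/TCP on port 4420
ping -c 1 10.0.0.3                                                # connectivity check, as in the trace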
00:17:20.355 11:47:50 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@844 -- # xtrace_disable 00:17:20.355 11:47:50 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@10 -- # set +x 00:17:20.615 [2024-11-28 11:47:50.532285] Starting SPDK v25.01-pre git sha1 35cd3e84d / DPDK 24.11.0-rc4 initialization... 00:17:20.615 [2024-11-28 11:47:50.532395] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:17:20.615 [2024-11-28 11:47:50.660903] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.11.0-rc4 is used. There is no support for it in SPDK. Enabled only for validation. 00:17:20.615 [2024-11-28 11:47:50.692365] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:20.874 [2024-11-28 11:47:50.744361] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:17:20.874 [2024-11-28 11:47:50.744437] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:17:20.874 [2024-11-28 11:47:50.744451] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:17:20.874 [2024-11-28 11:47:50.744462] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:17:20.874 [2024-11-28 11:47:50.744471] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:17:20.874 [2024-11-28 11:47:50.745002] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:17:20.874 [2024-11-28 11:47:50.822920] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:17:20.874 11:47:50 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:17:20.874 11:47:50 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@868 -- # return 0 00:17:20.874 11:47:50 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:17:20.874 11:47:50 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@732 -- # xtrace_disable 00:17:20.874 11:47:50 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@10 -- # set +x 00:17:20.874 11:47:50 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:17:20.874 11:47:50 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@15 -- # subnqn=nqn.2024-07.io.spdk:cnode0 00:17:20.874 11:47:50 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@16 -- # perf=/home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf 00:17:20.874 11:47:50 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@19 -- # rpc_cmd nvmf_create_transport '-t tcp -o' --in-capsule-data-size 768 --control-msg-num 1 00:17:20.874 11:47:50 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:20.874 11:47:50 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@10 -- # set +x 00:17:20.874 [2024-11-28 11:47:50.957547] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:17:20.874 11:47:50 
nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:20.874 11:47:50 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@20 -- # rpc_cmd nvmf_create_subsystem nqn.2024-07.io.spdk:cnode0 -a 00:17:20.874 11:47:50 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:20.874 11:47:50 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@10 -- # set +x 00:17:20.875 11:47:50 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:20.875 11:47:50 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@21 -- # rpc_cmd bdev_malloc_create -b Malloc0 32 512 00:17:20.875 11:47:50 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:20.875 11:47:50 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@10 -- # set +x 00:17:20.875 Malloc0 00:17:20.875 11:47:50 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:20.875 11:47:50 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@22 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2024-07.io.spdk:cnode0 Malloc0 00:17:20.875 11:47:50 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:20.875 11:47:50 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@10 -- # set +x 00:17:20.875 11:47:50 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:20.875 11:47:50 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@23 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2024-07.io.spdk:cnode0 -t tcp -a 10.0.0.3 -s 4420 00:17:20.875 11:47:50 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:20.875 11:47:50 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@10 -- # set +x 00:17:21.134 [2024-11-28 11:47:50.999792] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:17:21.134 11:47:51 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:21.134 11:47:51 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@27 -- # perf_pid1=87433 00:17:21.134 11:47:51 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@26 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -c 0x2 -q 1 -o 4096 -w randread -t 1 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.3 trsvcid:4420' 00:17:21.134 11:47:51 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@29 -- # perf_pid2=87434 00:17:21.134 11:47:51 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@28 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -c 0x4 -q 1 -o 4096 -w randread -t 1 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.3 trsvcid:4420' 00:17:21.134 11:47:51 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@31 -- # perf_pid3=87435 00:17:21.134 11:47:51 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@33 -- # wait 87433 00:17:21.134 11:47:51 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@30 -- # 
/home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -c 0x8 -q 1 -o 4096 -w randread -t 1 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.3 trsvcid:4420' 00:17:21.134 [2024-11-28 11:47:51.188343] subsystem.c:1637:spdk_nvmf_subsystem_listener_allowed: *WARNING*: Allowing connection to discovery subsystem on TCP/10.0.0.3/4420, even though this listener was not added to the discovery subsystem. This behavior is deprecated and will be removed in a future release. 00:17:21.134 [2024-11-28 11:47:51.198292] subsystem.c:1637:spdk_nvmf_subsystem_listener_allowed: *WARNING*: Allowing connection to discovery subsystem on TCP/10.0.0.3/4420, even though this listener was not added to the discovery subsystem. This behavior is deprecated and will be removed in a future release. 00:17:21.134 [2024-11-28 11:47:51.199012] subsystem.c:1637:spdk_nvmf_subsystem_listener_allowed: *WARNING*: Allowing connection to discovery subsystem on TCP/10.0.0.3/4420, even though this listener was not added to the discovery subsystem. This behavior is deprecated and will be removed in a future release. 00:17:22.512 Initializing NVMe Controllers 00:17:22.512 Attached to NVMe over Fabrics controller at 10.0.0.3:4420: nqn.2024-07.io.spdk:cnode0 00:17:22.512 Associating TCP (addr:10.0.0.3 subnqn:nqn.2024-07.io.spdk:cnode0) NSID 1 with lcore 2 00:17:22.512 Initialization complete. Launching workers. 00:17:22.512 ======================================================== 00:17:22.512 Latency(us) 00:17:22.512 Device Information : IOPS MiB/s Average min max 00:17:22.512 TCP (addr:10.0.0.3 subnqn:nqn.2024-07.io.spdk:cnode0) NSID 1 from core 2: 3429.00 13.39 291.31 126.28 645.49 00:17:22.512 ======================================================== 00:17:22.512 Total : 3429.00 13.39 291.31 126.28 645.49 00:17:22.512 00:17:22.512 Initializing NVMe Controllers 00:17:22.512 Attached to NVMe over Fabrics controller at 10.0.0.3:4420: nqn.2024-07.io.spdk:cnode0 00:17:22.512 Associating TCP (addr:10.0.0.3 subnqn:nqn.2024-07.io.spdk:cnode0) NSID 1 with lcore 1 00:17:22.512 Initialization complete. Launching workers. 00:17:22.512 ======================================================== 00:17:22.512 Latency(us) 00:17:22.512 Device Information : IOPS MiB/s Average min max 00:17:22.512 TCP (addr:10.0.0.3 subnqn:nqn.2024-07.io.spdk:cnode0) NSID 1 from core 1: 3492.00 13.64 285.98 146.13 653.61 00:17:22.512 ======================================================== 00:17:22.512 Total : 3492.00 13.64 285.98 146.13 653.61 00:17:22.512 00:17:22.512 Initializing NVMe Controllers 00:17:22.512 Attached to NVMe over Fabrics controller at 10.0.0.3:4420: nqn.2024-07.io.spdk:cnode0 00:17:22.512 Associating TCP (addr:10.0.0.3 subnqn:nqn.2024-07.io.spdk:cnode0) NSID 1 with lcore 3 00:17:22.512 Initialization complete. Launching workers. 
00:17:22.512 ======================================================== 00:17:22.512 Latency(us) 00:17:22.512 Device Information : IOPS MiB/s Average min max 00:17:22.512 TCP (addr:10.0.0.3 subnqn:nqn.2024-07.io.spdk:cnode0) NSID 1 from core 3: 3488.00 13.62 286.43 167.99 641.72 00:17:22.512 ======================================================== 00:17:22.512 Total : 3488.00 13.62 286.43 167.99 641.72 00:17:22.512 00:17:22.512 11:47:52 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@34 -- # wait 87434 00:17:22.512 11:47:52 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@35 -- # wait 87435 00:17:22.512 11:47:52 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@37 -- # trap - SIGINT SIGTERM EXIT 00:17:22.512 11:47:52 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@38 -- # nvmftestfini 00:17:22.512 11:47:52 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@516 -- # nvmfcleanup 00:17:22.512 11:47:52 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@121 -- # sync 00:17:22.512 11:47:52 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:17:22.512 11:47:52 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@124 -- # set +e 00:17:22.512 11:47:52 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@125 -- # for i in {1..20} 00:17:22.512 11:47:52 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:17:22.512 rmmod nvme_tcp 00:17:22.512 rmmod nvme_fabrics 00:17:22.512 rmmod nvme_keyring 00:17:22.512 11:47:52 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:17:22.512 11:47:52 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@128 -- # set -e 00:17:22.512 11:47:52 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@129 -- # return 0 00:17:22.512 11:47:52 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@517 -- # '[' -n 87408 ']' 00:17:22.512 11:47:52 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@518 -- # killprocess 87408 00:17:22.512 11:47:52 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@954 -- # '[' -z 87408 ']' 00:17:22.512 11:47:52 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@958 -- # kill -0 87408 00:17:22.512 11:47:52 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@959 -- # uname 00:17:22.513 11:47:52 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:17:22.513 11:47:52 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 87408 00:17:22.513 killing process with pid 87408 00:17:22.513 11:47:52 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:17:22.513 11:47:52 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:17:22.513 11:47:52 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@972 -- # echo 'killing process with pid 87408' 00:17:22.513 11:47:52 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@973 -- # kill 87408 00:17:22.513 11:47:52 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- 
common/autotest_common.sh@978 -- # wait 87408 00:17:22.772 11:47:52 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:17:22.772 11:47:52 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:17:22.772 11:47:52 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:17:22.772 11:47:52 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@297 -- # iptr 00:17:22.772 11:47:52 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@791 -- # iptables-save 00:17:22.772 11:47:52 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:17:22.772 11:47:52 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@791 -- # iptables-restore 00:17:22.772 11:47:52 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@298 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:17:22.772 11:47:52 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@299 -- # nvmf_veth_fini 00:17:22.772 11:47:52 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@233 -- # ip link set nvmf_init_br nomaster 00:17:22.772 11:47:52 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@234 -- # ip link set nvmf_init_br2 nomaster 00:17:22.772 11:47:52 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@235 -- # ip link set nvmf_tgt_br nomaster 00:17:22.772 11:47:52 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@236 -- # ip link set nvmf_tgt_br2 nomaster 00:17:22.772 11:47:52 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@237 -- # ip link set nvmf_init_br down 00:17:22.772 11:47:52 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@238 -- # ip link set nvmf_init_br2 down 00:17:22.772 11:47:52 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@239 -- # ip link set nvmf_tgt_br down 00:17:22.772 11:47:52 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@240 -- # ip link set nvmf_tgt_br2 down 00:17:22.772 11:47:52 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@241 -- # ip link delete nvmf_br type bridge 00:17:22.772 11:47:52 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@242 -- # ip link delete nvmf_init_if 00:17:22.772 11:47:52 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@243 -- # ip link delete nvmf_init_if2 00:17:22.772 11:47:52 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@244 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:17:22.772 11:47:52 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@245 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:17:23.031 11:47:52 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@246 -- # remove_spdk_ns 00:17:23.031 11:47:52 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:17:23.031 11:47:52 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:17:23.031 11:47:52 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:17:23.031 11:47:52 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@300 -- # return 0 00:17:23.031 00:17:23.031 real 0m3.115s 00:17:23.031 user 0m4.896s 00:17:23.031 
sys 0m1.404s 00:17:23.031 ************************************ 00:17:23.031 END TEST nvmf_control_msg_list 00:17:23.031 ************************************ 00:17:23.031 11:47:52 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@1130 -- # xtrace_disable 00:17:23.031 11:47:52 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@10 -- # set +x 00:17:23.031 11:47:52 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@44 -- # run_test nvmf_wait_for_buf /home/vagrant/spdk_repo/spdk/test/nvmf/target/wait_for_buf.sh --transport=tcp 00:17:23.031 11:47:52 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:17:23.031 11:47:52 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:17:23.031 11:47:52 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:17:23.031 ************************************ 00:17:23.031 START TEST nvmf_wait_for_buf 00:17:23.031 ************************************ 00:17:23.031 11:47:52 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/wait_for_buf.sh --transport=tcp 00:17:23.031 * Looking for test storage... 00:17:23.031 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:17:23.031 11:47:53 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:17:23.031 11:47:53 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@1693 -- # lcov --version 00:17:23.031 11:47:53 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:17:23.291 11:47:53 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:17:23.291 11:47:53 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:17:23.291 11:47:53 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@333 -- # local ver1 ver1_l 00:17:23.291 11:47:53 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@334 -- # local ver2 ver2_l 00:17:23.291 11:47:53 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@336 -- # IFS=.-: 00:17:23.291 11:47:53 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@336 -- # read -ra ver1 00:17:23.291 11:47:53 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@337 -- # IFS=.-: 00:17:23.291 11:47:53 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@337 -- # read -ra ver2 00:17:23.291 11:47:53 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@338 -- # local 'op=<' 00:17:23.291 11:47:53 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@340 -- # ver1_l=2 00:17:23.291 11:47:53 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@341 -- # ver2_l=1 00:17:23.291 11:47:53 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:17:23.291 11:47:53 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@344 -- # case "$op" in 00:17:23.291 11:47:53 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@345 -- # : 1 00:17:23.291 11:47:53 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@364 -- # (( v = 0 )) 00:17:23.291 11:47:53 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:17:23.291 11:47:53 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@365 -- # decimal 1 00:17:23.291 11:47:53 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@353 -- # local d=1 00:17:23.291 11:47:53 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:17:23.291 11:47:53 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@355 -- # echo 1 00:17:23.291 11:47:53 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@365 -- # ver1[v]=1 00:17:23.291 11:47:53 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@366 -- # decimal 2 00:17:23.291 11:47:53 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@353 -- # local d=2 00:17:23.291 11:47:53 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:17:23.291 11:47:53 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@355 -- # echo 2 00:17:23.291 11:47:53 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@366 -- # ver2[v]=2 00:17:23.291 11:47:53 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:17:23.291 11:47:53 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:17:23.291 11:47:53 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@368 -- # return 0 00:17:23.291 11:47:53 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:17:23.291 11:47:53 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:17:23.291 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:23.291 --rc genhtml_branch_coverage=1 00:17:23.291 --rc genhtml_function_coverage=1 00:17:23.291 --rc genhtml_legend=1 00:17:23.291 --rc geninfo_all_blocks=1 00:17:23.291 --rc geninfo_unexecuted_blocks=1 00:17:23.291 00:17:23.291 ' 00:17:23.291 11:47:53 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:17:23.291 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:23.291 --rc genhtml_branch_coverage=1 00:17:23.291 --rc genhtml_function_coverage=1 00:17:23.291 --rc genhtml_legend=1 00:17:23.291 --rc geninfo_all_blocks=1 00:17:23.291 --rc geninfo_unexecuted_blocks=1 00:17:23.291 00:17:23.291 ' 00:17:23.291 11:47:53 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:17:23.291 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:23.291 --rc genhtml_branch_coverage=1 00:17:23.291 --rc genhtml_function_coverage=1 00:17:23.291 --rc genhtml_legend=1 00:17:23.291 --rc geninfo_all_blocks=1 00:17:23.291 --rc geninfo_unexecuted_blocks=1 00:17:23.291 00:17:23.291 ' 00:17:23.291 11:47:53 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:17:23.291 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:23.291 --rc genhtml_branch_coverage=1 00:17:23.291 --rc genhtml_function_coverage=1 00:17:23.291 --rc genhtml_legend=1 00:17:23.291 --rc geninfo_all_blocks=1 00:17:23.291 --rc geninfo_unexecuted_blocks=1 00:17:23.291 00:17:23.291 ' 00:17:23.291 11:47:53 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:17:23.291 11:47:53 
nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@7 -- # uname -s 00:17:23.291 11:47:53 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:17:23.291 11:47:53 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:17:23.291 11:47:53 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:17:23.291 11:47:53 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:17:23.291 11:47:53 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:17:23.291 11:47:53 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:17:23.291 11:47:53 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:17:23.291 11:47:53 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:17:23.291 11:47:53 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:17:23.291 11:47:53 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:17:23.291 11:47:53 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:f820f793-c892-4aa4-a8a4-5ed3fda41d6c 00:17:23.291 11:47:53 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@18 -- # NVME_HOSTID=f820f793-c892-4aa4-a8a4-5ed3fda41d6c 00:17:23.292 11:47:53 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:17:23.292 11:47:53 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:17:23.292 11:47:53 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:17:23.292 11:47:53 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:17:23.292 11:47:53 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:17:23.292 11:47:53 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@15 -- # shopt -s extglob 00:17:23.292 11:47:53 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:17:23.292 11:47:53 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:17:23.292 11:47:53 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:17:23.292 11:47:53 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:23.292 11:47:53 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:23.292 11:47:53 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:23.292 11:47:53 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- paths/export.sh@5 -- # export PATH 00:17:23.292 11:47:53 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:23.292 11:47:53 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@51 -- # : 0 00:17:23.292 11:47:53 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:17:23.292 11:47:53 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:17:23.292 11:47:53 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:17:23.292 11:47:53 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:17:23.292 11:47:53 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:17:23.292 11:47:53 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:17:23.292 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:17:23.292 11:47:53 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:17:23.292 11:47:53 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:17:23.292 11:47:53 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@55 -- # have_pci_nics=0 00:17:23.292 11:47:53 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@12 -- # nvmftestinit 00:17:23.292 11:47:53 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@469 -- # '[' -z tcp ']' 
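For readers following the trace: the nvmftestinit call above (transport tcp, NET_TYPE=virt) drives the veth/namespace plumbing traced in the next entries. Stripped of the harness wrappers, the topology it builds looks roughly like this, using the interface names and addresses that appear in the trace (a sketch of the idea, not the harness's exact code path):

    ip netns add nvmf_tgt_ns_spdk
    ip link add nvmf_init_if type veth peer name nvmf_init_br   # initiator half stays on the host
    ip link add nvmf_tgt_if  type veth peer name nvmf_tgt_br    # target half moves into the namespace
    ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk
    ip addr add 10.0.0.1/24 dev nvmf_init_if
    ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if
    ip link set nvmf_init_if up
    ip link set nvmf_init_br up
    ip link set nvmf_tgt_br up
    ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
    ip link add nvmf_br type bridge
    ip link set nvmf_br up
    ip link set nvmf_init_br master nvmf_br                     # bridge the two halves together
    ip link set nvmf_tgt_br master nvmf_br

A second pair (nvmf_init_if2/nvmf_tgt_if2 with 10.0.0.2 and 10.0.0.4) is created the same way; the iptables ACCEPT rules for port 4420 and the ping checks further down then confirm both sides can reach each other across the bridge.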
00:17:23.292 11:47:53 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:17:23.292 11:47:53 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@476 -- # prepare_net_devs 00:17:23.292 11:47:53 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@438 -- # local -g is_hw=no 00:17:23.292 11:47:53 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@440 -- # remove_spdk_ns 00:17:23.292 11:47:53 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:17:23.292 11:47:53 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:17:23.292 11:47:53 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:17:23.292 11:47:53 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@442 -- # [[ virt != virt ]] 00:17:23.292 11:47:53 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@444 -- # [[ no == yes ]] 00:17:23.292 11:47:53 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@451 -- # [[ virt == phy ]] 00:17:23.292 11:47:53 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@454 -- # [[ virt == phy-fallback ]] 00:17:23.292 11:47:53 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@459 -- # [[ tcp == tcp ]] 00:17:23.292 11:47:53 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@460 -- # nvmf_veth_init 00:17:23.292 11:47:53 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@145 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:17:23.292 11:47:53 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@146 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.2 00:17:23.292 11:47:53 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@147 -- # NVMF_FIRST_TARGET_IP=10.0.0.3 00:17:23.292 11:47:53 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@148 -- # NVMF_SECOND_TARGET_IP=10.0.0.4 00:17:23.292 11:47:53 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@149 -- # NVMF_INITIATOR_IP=10.0.0.1 00:17:23.292 11:47:53 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@150 -- # NVMF_BRIDGE=nvmf_br 00:17:23.292 11:47:53 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@151 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:17:23.292 11:47:53 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@152 -- # NVMF_INITIATOR_INTERFACE2=nvmf_init_if2 00:17:23.292 11:47:53 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@153 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:17:23.292 11:47:53 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@154 -- # NVMF_INITIATOR_BRIDGE2=nvmf_init_br2 00:17:23.292 11:47:53 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@155 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:17:23.292 11:47:53 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@156 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:17:23.292 11:47:53 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@157 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:17:23.292 11:47:53 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@158 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:17:23.292 11:47:53 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@159 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:17:23.292 11:47:53 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- 
nvmf/common.sh@160 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:17:23.292 11:47:53 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@162 -- # ip link set nvmf_init_br nomaster 00:17:23.292 Cannot find device "nvmf_init_br" 00:17:23.292 11:47:53 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@162 -- # true 00:17:23.292 11:47:53 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@163 -- # ip link set nvmf_init_br2 nomaster 00:17:23.292 Cannot find device "nvmf_init_br2" 00:17:23.292 11:47:53 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@163 -- # true 00:17:23.292 11:47:53 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@164 -- # ip link set nvmf_tgt_br nomaster 00:17:23.292 Cannot find device "nvmf_tgt_br" 00:17:23.292 11:47:53 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@164 -- # true 00:17:23.292 11:47:53 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@165 -- # ip link set nvmf_tgt_br2 nomaster 00:17:23.292 Cannot find device "nvmf_tgt_br2" 00:17:23.292 11:47:53 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@165 -- # true 00:17:23.292 11:47:53 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@166 -- # ip link set nvmf_init_br down 00:17:23.292 Cannot find device "nvmf_init_br" 00:17:23.292 11:47:53 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@166 -- # true 00:17:23.292 11:47:53 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@167 -- # ip link set nvmf_init_br2 down 00:17:23.292 Cannot find device "nvmf_init_br2" 00:17:23.292 11:47:53 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@167 -- # true 00:17:23.292 11:47:53 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@168 -- # ip link set nvmf_tgt_br down 00:17:23.292 Cannot find device "nvmf_tgt_br" 00:17:23.292 11:47:53 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@168 -- # true 00:17:23.292 11:47:53 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@169 -- # ip link set nvmf_tgt_br2 down 00:17:23.292 Cannot find device "nvmf_tgt_br2" 00:17:23.292 11:47:53 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@169 -- # true 00:17:23.292 11:47:53 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@170 -- # ip link delete nvmf_br type bridge 00:17:23.292 Cannot find device "nvmf_br" 00:17:23.292 11:47:53 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@170 -- # true 00:17:23.292 11:47:53 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@171 -- # ip link delete nvmf_init_if 00:17:23.292 Cannot find device "nvmf_init_if" 00:17:23.292 11:47:53 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@171 -- # true 00:17:23.292 11:47:53 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@172 -- # ip link delete nvmf_init_if2 00:17:23.292 Cannot find device "nvmf_init_if2" 00:17:23.292 11:47:53 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@172 -- # true 00:17:23.292 11:47:53 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@173 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:17:23.292 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:17:23.292 11:47:53 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@173 -- # true 00:17:23.292 11:47:53 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@174 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:17:23.292 Cannot 
open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:17:23.292 11:47:53 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@174 -- # true 00:17:23.292 11:47:53 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@177 -- # ip netns add nvmf_tgt_ns_spdk 00:17:23.292 11:47:53 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@180 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:17:23.292 11:47:53 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@181 -- # ip link add nvmf_init_if2 type veth peer name nvmf_init_br2 00:17:23.293 11:47:53 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@182 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:17:23.293 11:47:53 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@183 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:17:23.552 11:47:53 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:17:23.552 11:47:53 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@187 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:17:23.552 11:47:53 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@190 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:17:23.552 11:47:53 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@191 -- # ip addr add 10.0.0.2/24 dev nvmf_init_if2 00:17:23.552 11:47:53 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@192 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if 00:17:23.552 11:47:53 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@193 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2 00:17:23.552 11:47:53 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@196 -- # ip link set nvmf_init_if up 00:17:23.552 11:47:53 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@197 -- # ip link set nvmf_init_if2 up 00:17:23.552 11:47:53 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@198 -- # ip link set nvmf_init_br up 00:17:23.552 11:47:53 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@199 -- # ip link set nvmf_init_br2 up 00:17:23.552 11:47:53 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@200 -- # ip link set nvmf_tgt_br up 00:17:23.552 11:47:53 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@201 -- # ip link set nvmf_tgt_br2 up 00:17:23.552 11:47:53 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@202 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:17:23.552 11:47:53 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@203 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:17:23.552 11:47:53 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@204 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:17:23.552 11:47:53 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@207 -- # ip link add nvmf_br type bridge 00:17:23.552 11:47:53 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@208 -- # ip link set nvmf_br up 00:17:23.552 11:47:53 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@211 -- # ip link set nvmf_init_br master nvmf_br 00:17:23.552 11:47:53 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@212 -- # ip link set nvmf_init_br2 master nvmf_br 00:17:23.552 11:47:53 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- 
nvmf/common.sh@213 -- # ip link set nvmf_tgt_br master nvmf_br 00:17:23.552 11:47:53 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@214 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:17:23.553 11:47:53 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@217 -- # ipts -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:17:23.553 11:47:53 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT' 00:17:23.553 11:47:53 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@218 -- # ipts -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT 00:17:23.553 11:47:53 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT' 00:17:23.553 11:47:53 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@219 -- # ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:17:23.553 11:47:53 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@790 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT' 00:17:23.553 11:47:53 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@222 -- # ping -c 1 10.0.0.3 00:17:23.553 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:17:23.553 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.074 ms 00:17:23.553 00:17:23.553 --- 10.0.0.3 ping statistics --- 00:17:23.553 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:17:23.553 rtt min/avg/max/mdev = 0.074/0.074/0.074/0.000 ms 00:17:23.553 11:47:53 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@223 -- # ping -c 1 10.0.0.4 00:17:23.553 PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data. 00:17:23.553 64 bytes from 10.0.0.4: icmp_seq=1 ttl=64 time=0.036 ms 00:17:23.553 00:17:23.553 --- 10.0.0.4 ping statistics --- 00:17:23.553 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:17:23.553 rtt min/avg/max/mdev = 0.036/0.036/0.036/0.000 ms 00:17:23.553 11:47:53 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@224 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:17:23.553 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:17:23.553 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.028 ms 00:17:23.553 00:17:23.553 --- 10.0.0.1 ping statistics --- 00:17:23.553 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:17:23.553 rtt min/avg/max/mdev = 0.028/0.028/0.028/0.000 ms 00:17:23.553 11:47:53 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@225 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.2 00:17:23.553 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:17:23.553 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.087 ms 00:17:23.553 00:17:23.553 --- 10.0.0.2 ping statistics --- 00:17:23.553 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:17:23.553 rtt min/avg/max/mdev = 0.087/0.087/0.087/0.000 ms 00:17:23.553 11:47:53 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@227 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:17:23.553 11:47:53 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@461 -- # return 0 00:17:23.553 11:47:53 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:17:23.553 11:47:53 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:17:23.553 11:47:53 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:17:23.553 11:47:53 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:17:23.553 11:47:53 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:17:23.553 11:47:53 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:17:23.553 11:47:53 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:17:23.553 11:47:53 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@13 -- # nvmfappstart --wait-for-rpc 00:17:23.553 11:47:53 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:17:23.553 11:47:53 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@726 -- # xtrace_disable 00:17:23.553 11:47:53 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:17:23.553 11:47:53 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@509 -- # nvmfpid=87668 00:17:23.553 11:47:53 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@508 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --wait-for-rpc 00:17:23.553 11:47:53 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@510 -- # waitforlisten 87668 00:17:23.553 11:47:53 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@835 -- # '[' -z 87668 ']' 00:17:23.553 11:47:53 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:17:23.553 11:47:53 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@840 -- # local max_retries=100 00:17:23.553 11:47:53 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:17:23.553 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:17:23.553 11:47:53 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@844 -- # xtrace_disable 00:17:23.553 11:47:53 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:17:23.812 [2024-11-28 11:47:53.729999] Starting SPDK v25.01-pre git sha1 35cd3e84d / DPDK 24.11.0-rc4 initialization... 
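nvmfappstart --wait-for-rpc launches nvmf_tgt inside the target namespace (NVMF_APP is prefixed with the netns exec command above) and then blocks until the app's RPC socket answers. The gist, as a standalone sketch; the harness's waitforlisten is more careful about timeouts and socket paths:

    ip netns exec nvmf_tgt_ns_spdk ./build/bin/nvmf_tgt -i 0 -e 0xFFFF --wait-for-rpc &
    nvmfpid=$!
    until ./scripts/rpc.py -s /var/tmp/spdk.sock rpc_get_methods &> /dev/null; do
        kill -0 "$nvmfpid" 2> /dev/null || { echo "nvmf_tgt exited during startup" >&2; exit 1; }
        sleep 0.5
    done

--wait-for-rpc matters for this test because the iobuf pool options have to be changed before the framework initializes its buffer pools, which is exactly the RPC order the trace shows next.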
00:17:23.812 [2024-11-28 11:47:53.730106] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:17:23.812 [2024-11-28 11:47:53.852407] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.11.0-rc4 is used. There is no support for it in SPDK. Enabled only for validation. 00:17:23.812 [2024-11-28 11:47:53.872676] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:23.812 [2024-11-28 11:47:53.927577] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:17:23.812 [2024-11-28 11:47:53.927638] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:17:23.812 [2024-11-28 11:47:53.927648] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:17:23.812 [2024-11-28 11:47:53.927655] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:17:23.812 [2024-11-28 11:47:53.927661] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:17:23.812 [2024-11-28 11:47:53.928098] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:17:24.071 11:47:54 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:17:24.071 11:47:54 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@868 -- # return 0 00:17:24.071 11:47:54 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:17:24.071 11:47:54 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@732 -- # xtrace_disable 00:17:24.071 11:47:54 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:17:24.071 11:47:54 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:17:24.071 11:47:54 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@15 -- # subnqn=nqn.2024-07.io.spdk:cnode0 00:17:24.071 11:47:54 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@16 -- # perf=/home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf 00:17:24.071 11:47:54 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@19 -- # rpc_cmd accel_set_options --small-cache-size 0 --large-cache-size 0 00:17:24.071 11:47:54 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:24.072 11:47:54 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:17:24.072 11:47:54 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:24.072 11:47:54 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@20 -- # rpc_cmd iobuf_set_options --small-pool-count 154 --small_bufsize=8192 00:17:24.072 11:47:54 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:24.072 11:47:54 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:17:24.072 11:47:54 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:24.072 11:47:54 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@21 -- 
# rpc_cmd framework_start_init 00:17:24.072 11:47:54 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:24.072 11:47:54 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:17:24.072 [2024-11-28 11:47:54.120383] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:17:24.072 11:47:54 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:24.072 11:47:54 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@22 -- # rpc_cmd bdev_malloc_create -b Malloc0 32 512 00:17:24.072 11:47:54 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:24.072 11:47:54 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:17:24.072 Malloc0 00:17:24.072 11:47:54 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:24.072 11:47:54 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@23 -- # rpc_cmd nvmf_create_transport '-t tcp -o' -u 8192 -n 24 -b 24 00:17:24.072 11:47:54 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:24.072 11:47:54 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:17:24.331 [2024-11-28 11:47:54.200048] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:17:24.331 11:47:54 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:24.331 11:47:54 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@24 -- # rpc_cmd nvmf_create_subsystem nqn.2024-07.io.spdk:cnode0 -a -s SPDK00000000000001 00:17:24.331 11:47:54 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:24.331 11:47:54 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:17:24.331 11:47:54 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:24.331 11:47:54 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@25 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2024-07.io.spdk:cnode0 Malloc0 00:17:24.331 11:47:54 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:24.331 11:47:54 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:17:24.331 11:47:54 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:24.331 11:47:54 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@26 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2024-07.io.spdk:cnode0 -t tcp -a 10.0.0.3 -s 4420 00:17:24.331 11:47:54 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:24.331 11:47:54 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:17:24.331 [2024-11-28 11:47:54.224235] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:17:24.331 11:47:54 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:24.331 11:47:54 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@30 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -q 4 -o 131072 -w randread -t 1 
-r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.3 trsvcid:4420' 00:17:24.331 [2024-11-28 11:47:54.420494] subsystem.c:1637:spdk_nvmf_subsystem_listener_allowed: *WARNING*: Allowing connection to discovery subsystem on TCP/10.0.0.3/4420, even though this listener was not added to the discovery subsystem. This behavior is deprecated and will be removed in a future release. 00:17:25.706 Initializing NVMe Controllers 00:17:25.706 Attached to NVMe over Fabrics controller at 10.0.0.3:4420: nqn.2024-07.io.spdk:cnode0 00:17:25.706 Associating TCP (addr:10.0.0.3 subnqn:nqn.2024-07.io.spdk:cnode0) NSID 1 with lcore 0 00:17:25.706 Initialization complete. Launching workers. 00:17:25.706 ======================================================== 00:17:25.706 Latency(us) 00:17:25.706 Device Information : IOPS MiB/s Average min max 00:17:25.706 TCP (addr:10.0.0.3 subnqn:nqn.2024-07.io.spdk:cnode0) NSID 1 from core 0: 484.00 60.50 8322.39 5049.31 15048.74 00:17:25.706 ======================================================== 00:17:25.706 Total : 484.00 60.50 8322.39 5049.31 15048.74 00:17:25.706 00:17:25.706 11:47:55 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@32 -- # rpc_cmd iobuf_get_stats 00:17:25.706 11:47:55 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@32 -- # jq -r '.[] | select(.module == "nvmf_TCP") | .small_pool.retry' 00:17:25.706 11:47:55 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:25.706 11:47:55 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:17:25.706 11:47:55 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:25.706 11:47:55 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@32 -- # retry_count=4598 00:17:25.706 11:47:55 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@33 -- # [[ 4598 -eq 0 ]] 00:17:25.706 11:47:55 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@37 -- # trap - SIGINT SIGTERM EXIT 00:17:25.706 11:47:55 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@38 -- # nvmftestfini 00:17:25.706 11:47:55 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@516 -- # nvmfcleanup 00:17:25.706 11:47:55 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@121 -- # sync 00:17:25.706 11:47:55 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:17:25.706 11:47:55 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@124 -- # set +e 00:17:25.706 11:47:55 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@125 -- # for i in {1..20} 00:17:25.706 11:47:55 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:17:25.706 rmmod nvme_tcp 00:17:25.966 rmmod nvme_fabrics 00:17:25.966 rmmod nvme_keyring 00:17:25.966 11:47:55 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:17:25.966 11:47:55 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@128 -- # set -e 00:17:25.966 11:47:55 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@129 -- # return 0 00:17:25.966 11:47:55 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@517 -- # '[' -n 87668 ']' 00:17:25.966 11:47:55 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@518 -- # killprocess 87668 00:17:25.966 11:47:55 
nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@954 -- # '[' -z 87668 ']' 00:17:25.966 11:47:55 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@958 -- # kill -0 87668 00:17:25.966 11:47:55 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@959 -- # uname 00:17:25.966 11:47:55 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:17:25.966 11:47:55 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 87668 00:17:25.966 11:47:55 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:17:25.966 11:47:55 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:17:25.966 killing process with pid 87668 00:17:25.966 11:47:55 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@972 -- # echo 'killing process with pid 87668' 00:17:25.966 11:47:55 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@973 -- # kill 87668 00:17:25.966 11:47:55 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@978 -- # wait 87668 00:17:26.225 11:47:56 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:17:26.225 11:47:56 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:17:26.225 11:47:56 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:17:26.225 11:47:56 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@297 -- # iptr 00:17:26.225 11:47:56 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@791 -- # iptables-save 00:17:26.226 11:47:56 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@791 -- # iptables-restore 00:17:26.226 11:47:56 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:17:26.226 11:47:56 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@298 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:17:26.226 11:47:56 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@299 -- # nvmf_veth_fini 00:17:26.226 11:47:56 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@233 -- # ip link set nvmf_init_br nomaster 00:17:26.226 11:47:56 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@234 -- # ip link set nvmf_init_br2 nomaster 00:17:26.226 11:47:56 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@235 -- # ip link set nvmf_tgt_br nomaster 00:17:26.226 11:47:56 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@236 -- # ip link set nvmf_tgt_br2 nomaster 00:17:26.226 11:47:56 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@237 -- # ip link set nvmf_init_br down 00:17:26.226 11:47:56 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@238 -- # ip link set nvmf_init_br2 down 00:17:26.226 11:47:56 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@239 -- # ip link set nvmf_tgt_br down 00:17:26.226 11:47:56 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@240 -- # ip link set nvmf_tgt_br2 down 00:17:26.226 11:47:56 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@241 -- # ip link delete nvmf_br type bridge 00:17:26.226 11:47:56 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@242 -- # ip link delete nvmf_init_if 
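Stepping back from the teardown above: the wait_for_buf body that ran between framework start and nvmftestfini reduces to a short RPC sequence: shrink the small iobuf pool before framework init so the TCP transport runs out of buffers under load, push a brief perf workload through the 10.0.0.3:4420 listener, and require that the nvmf_TCP small-pool retry counter ends up non-zero (4598 here). Rewritten with rpc.py in place of the harness's rpc_cmd wrapper, with the same arguments that appear in the trace (a sketch, not the script itself):

    ./scripts/rpc.py accel_set_options --small-cache-size 0 --large-cache-size 0
    ./scripts/rpc.py iobuf_set_options --small-pool-count 154 --small_bufsize=8192   # deliberately tiny pool
    ./scripts/rpc.py framework_start_init
    ./scripts/rpc.py bdev_malloc_create -b Malloc0 32 512
    ./scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 -n 24 -b 24
    ./scripts/rpc.py nvmf_create_subsystem nqn.2024-07.io.spdk:cnode0 -a -s SPDK00000000000001
    ./scripts/rpc.py nvmf_subsystem_add_ns nqn.2024-07.io.spdk:cnode0 Malloc0
    ./scripts/rpc.py nvmf_subsystem_add_listener nqn.2024-07.io.spdk:cnode0 -t tcp -a 10.0.0.3 -s 4420
    ./build/bin/spdk_nvme_perf -q 4 -o 131072 -w randread -t 1 \
        -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.3 trsvcid:4420'
    retry_count=$(./scripts/rpc.py iobuf_get_stats \
        | jq -r '.[] | select(.module == "nvmf_TCP") | .small_pool.retry')
    [[ $retry_count -eq 0 ]] && exit 1   # the test only passes if buffer allocations had to be retried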
00:17:26.226 11:47:56 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@243 -- # ip link delete nvmf_init_if2 00:17:26.226 11:47:56 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@244 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:17:26.485 11:47:56 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@245 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:17:26.485 11:47:56 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@246 -- # remove_spdk_ns 00:17:26.485 11:47:56 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:17:26.485 11:47:56 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:17:26.485 11:47:56 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:17:26.485 11:47:56 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@300 -- # return 0 00:17:26.485 00:17:26.485 real 0m3.426s 00:17:26.485 user 0m2.663s 00:17:26.485 sys 0m0.882s 00:17:26.485 11:47:56 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@1130 -- # xtrace_disable 00:17:26.485 11:47:56 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:17:26.485 ************************************ 00:17:26.485 END TEST nvmf_wait_for_buf 00:17:26.485 ************************************ 00:17:26.486 11:47:56 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@47 -- # '[' 1 -eq 1 ']' 00:17:26.486 11:47:56 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@48 -- # run_test nvmf_fuzz /home/vagrant/spdk_repo/spdk/test/nvmf/target/fabrics_fuzz.sh --transport=tcp 00:17:26.486 11:47:56 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:17:26.486 11:47:56 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:17:26.486 11:47:56 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:17:26.486 ************************************ 00:17:26.486 START TEST nvmf_fuzz 00:17:26.486 ************************************ 00:17:26.486 11:47:56 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/fabrics_fuzz.sh --transport=tcp 00:17:26.486 * Looking for test storage... 
00:17:26.486 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:17:26.486 11:47:56 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:17:26.486 11:47:56 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@1693 -- # lcov --version 00:17:26.486 11:47:56 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:17:26.746 11:47:56 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:17:26.746 11:47:56 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:17:26.746 11:47:56 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- scripts/common.sh@333 -- # local ver1 ver1_l 00:17:26.746 11:47:56 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- scripts/common.sh@334 -- # local ver2 ver2_l 00:17:26.746 11:47:56 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- scripts/common.sh@336 -- # IFS=.-: 00:17:26.746 11:47:56 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- scripts/common.sh@336 -- # read -ra ver1 00:17:26.746 11:47:56 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- scripts/common.sh@337 -- # IFS=.-: 00:17:26.746 11:47:56 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- scripts/common.sh@337 -- # read -ra ver2 00:17:26.746 11:47:56 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- scripts/common.sh@338 -- # local 'op=<' 00:17:26.746 11:47:56 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- scripts/common.sh@340 -- # ver1_l=2 00:17:26.746 11:47:56 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- scripts/common.sh@341 -- # ver2_l=1 00:17:26.746 11:47:56 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:17:26.746 11:47:56 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- scripts/common.sh@344 -- # case "$op" in 00:17:26.746 11:47:56 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- scripts/common.sh@345 -- # : 1 00:17:26.746 11:47:56 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- scripts/common.sh@364 -- # (( v = 0 )) 00:17:26.746 11:47:56 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:17:26.746 11:47:56 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- scripts/common.sh@365 -- # decimal 1 00:17:26.746 11:47:56 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- scripts/common.sh@353 -- # local d=1 00:17:26.746 11:47:56 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:17:26.746 11:47:56 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- scripts/common.sh@355 -- # echo 1 00:17:26.746 11:47:56 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- scripts/common.sh@365 -- # ver1[v]=1 00:17:26.746 11:47:56 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- scripts/common.sh@366 -- # decimal 2 00:17:26.746 11:47:56 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- scripts/common.sh@353 -- # local d=2 00:17:26.746 11:47:56 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:17:26.746 11:47:56 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- scripts/common.sh@355 -- # echo 2 00:17:26.746 11:47:56 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- scripts/common.sh@366 -- # ver2[v]=2 00:17:26.746 11:47:56 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:17:26.746 11:47:56 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:17:26.746 11:47:56 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- scripts/common.sh@368 -- # return 0 00:17:26.746 11:47:56 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:17:26.746 11:47:56 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:17:26.746 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:26.746 --rc genhtml_branch_coverage=1 00:17:26.746 --rc genhtml_function_coverage=1 00:17:26.746 --rc genhtml_legend=1 00:17:26.746 --rc geninfo_all_blocks=1 00:17:26.746 --rc geninfo_unexecuted_blocks=1 00:17:26.746 00:17:26.746 ' 00:17:26.746 11:47:56 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:17:26.746 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:26.746 --rc genhtml_branch_coverage=1 00:17:26.746 --rc genhtml_function_coverage=1 00:17:26.746 --rc genhtml_legend=1 00:17:26.746 --rc geninfo_all_blocks=1 00:17:26.746 --rc geninfo_unexecuted_blocks=1 00:17:26.746 00:17:26.746 ' 00:17:26.746 11:47:56 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:17:26.746 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:26.746 --rc genhtml_branch_coverage=1 00:17:26.746 --rc genhtml_function_coverage=1 00:17:26.746 --rc genhtml_legend=1 00:17:26.746 --rc geninfo_all_blocks=1 00:17:26.746 --rc geninfo_unexecuted_blocks=1 00:17:26.746 00:17:26.746 ' 00:17:26.746 11:47:56 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:17:26.746 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:26.746 --rc genhtml_branch_coverage=1 00:17:26.746 --rc genhtml_function_coverage=1 00:17:26.746 --rc genhtml_legend=1 00:17:26.746 --rc geninfo_all_blocks=1 00:17:26.746 --rc geninfo_unexecuted_blocks=1 00:17:26.746 00:17:26.746 ' 00:17:26.746 11:47:56 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- target/fabrics_fuzz.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:17:26.746 11:47:56 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@7 -- # uname -s 00:17:26.746 11:47:56 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 
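The decimal/cmp_versions churn above (here and at the top of the previous test) is scripts/common.sh comparing the installed lcov version, 1.15, against 2 to decide which coverage flag spelling to export. The comparison idea, restated as a compact standalone function for orientation (illustrative only, not the script's actual implementation):

    version_lt() {   # usage: version_lt 1.15 2  -> returns 0 (true) when $1 < $2
        local IFS=.
        local -a a=($1) b=($2)
        local i n=$(( ${#a[@]} > ${#b[@]} ? ${#a[@]} : ${#b[@]} ))
        for ((i = 0; i < n; i++)); do
            local x=${a[i]:-0} y=${b[i]:-0}
            ((x > y)) && return 1
            ((x < y)) && return 0
        done
        return 1     # equal versions are not "less than"
    }

    if version_lt "$(lcov --version | awk '{print $NF}')" 2; then
        lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1'
    fi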
00:17:26.746 11:47:56 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:17:26.746 11:47:56 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:17:26.746 11:47:56 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:17:26.746 11:47:56 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:17:26.746 11:47:56 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:17:26.746 11:47:56 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:17:26.746 11:47:56 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:17:26.746 11:47:56 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:17:26.746 11:47:56 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:17:26.746 11:47:56 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:f820f793-c892-4aa4-a8a4-5ed3fda41d6c 00:17:26.746 11:47:56 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@18 -- # NVME_HOSTID=f820f793-c892-4aa4-a8a4-5ed3fda41d6c 00:17:26.746 11:47:56 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:17:26.746 11:47:56 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:17:26.746 11:47:56 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:17:26.746 11:47:56 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:17:26.746 11:47:56 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:17:26.746 11:47:56 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- scripts/common.sh@15 -- # shopt -s extglob 00:17:26.746 11:47:56 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:17:26.746 11:47:56 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:17:26.746 11:47:56 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:17:26.746 11:47:56 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:26.746 11:47:56 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:26.747 11:47:56 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:26.747 11:47:56 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- paths/export.sh@5 -- # export PATH 00:17:26.747 11:47:56 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:26.747 11:47:56 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@51 -- # : 0 00:17:26.747 11:47:56 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:17:26.747 11:47:56 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:17:26.747 11:47:56 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:17:26.747 11:47:56 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:17:26.747 11:47:56 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:17:26.747 11:47:56 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:17:26.747 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:17:26.747 11:47:56 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:17:26.747 11:47:56 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:17:26.747 11:47:56 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@55 -- # have_pci_nics=0 00:17:26.747 11:47:56 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- target/fabrics_fuzz.sh@11 -- # nvmftestinit 00:17:26.747 11:47:56 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:17:26.747 11:47:56 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 
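The "[: : integer expression expected" message above (and the identical one in the wait_for_buf run) is bash complaining about '[' '' -eq 1 ']': test(1) cannot compare an empty string numerically, so the check simply fails and common.sh carries on; it is harmless noise in this log. A defensive pattern that would avoid it, shown with a hypothetical variable name:

    flag=""                                 # unset/empty in this environment
    if [ "${flag:-0}" -eq 1 ]; then         # default to 0 so the numeric test is always well-formed
        echo "feature enabled"
    else
        echo "feature disabled"
    fi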
00:17:26.747 11:47:56 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@476 -- # prepare_net_devs 00:17:26.747 11:47:56 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@438 -- # local -g is_hw=no 00:17:26.747 11:47:56 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@440 -- # remove_spdk_ns 00:17:26.747 11:47:56 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:17:26.747 11:47:56 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:17:26.747 11:47:56 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:17:26.747 11:47:56 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@442 -- # [[ virt != virt ]] 00:17:26.747 11:47:56 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@444 -- # [[ no == yes ]] 00:17:26.747 11:47:56 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@451 -- # [[ virt == phy ]] 00:17:26.747 11:47:56 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@454 -- # [[ virt == phy-fallback ]] 00:17:26.747 11:47:56 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@459 -- # [[ tcp == tcp ]] 00:17:26.747 11:47:56 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@460 -- # nvmf_veth_init 00:17:26.747 11:47:56 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@145 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:17:26.747 11:47:56 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@146 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.2 00:17:26.747 11:47:56 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@147 -- # NVMF_FIRST_TARGET_IP=10.0.0.3 00:17:26.747 11:47:56 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@148 -- # NVMF_SECOND_TARGET_IP=10.0.0.4 00:17:26.747 11:47:56 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@149 -- # NVMF_INITIATOR_IP=10.0.0.1 00:17:26.747 11:47:56 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@150 -- # NVMF_BRIDGE=nvmf_br 00:17:26.747 11:47:56 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@151 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:17:26.747 11:47:56 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@152 -- # NVMF_INITIATOR_INTERFACE2=nvmf_init_if2 00:17:26.747 11:47:56 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@153 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:17:26.747 11:47:56 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@154 -- # NVMF_INITIATOR_BRIDGE2=nvmf_init_br2 00:17:26.747 11:47:56 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@155 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:17:26.747 11:47:56 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@156 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:17:26.747 11:47:56 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@157 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:17:26.747 11:47:56 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@158 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:17:26.747 11:47:56 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@159 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:17:26.747 11:47:56 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@160 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:17:26.747 11:47:56 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@162 -- # ip link set nvmf_init_br nomaster 00:17:26.747 Cannot find device "nvmf_init_br" 00:17:26.747 11:47:56 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@162 -- # true 00:17:26.747 11:47:56 
nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@163 -- # ip link set nvmf_init_br2 nomaster 00:17:26.747 Cannot find device "nvmf_init_br2" 00:17:26.747 11:47:56 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@163 -- # true 00:17:26.747 11:47:56 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@164 -- # ip link set nvmf_tgt_br nomaster 00:17:26.747 Cannot find device "nvmf_tgt_br" 00:17:26.747 11:47:56 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@164 -- # true 00:17:26.747 11:47:56 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@165 -- # ip link set nvmf_tgt_br2 nomaster 00:17:26.747 Cannot find device "nvmf_tgt_br2" 00:17:26.747 11:47:56 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@165 -- # true 00:17:26.747 11:47:56 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@166 -- # ip link set nvmf_init_br down 00:17:26.747 Cannot find device "nvmf_init_br" 00:17:26.747 11:47:56 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@166 -- # true 00:17:26.747 11:47:56 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@167 -- # ip link set nvmf_init_br2 down 00:17:26.747 Cannot find device "nvmf_init_br2" 00:17:26.747 11:47:56 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@167 -- # true 00:17:26.747 11:47:56 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@168 -- # ip link set nvmf_tgt_br down 00:17:26.747 Cannot find device "nvmf_tgt_br" 00:17:26.747 11:47:56 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@168 -- # true 00:17:26.747 11:47:56 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@169 -- # ip link set nvmf_tgt_br2 down 00:17:26.747 Cannot find device "nvmf_tgt_br2" 00:17:26.747 11:47:56 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@169 -- # true 00:17:26.747 11:47:56 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@170 -- # ip link delete nvmf_br type bridge 00:17:26.747 Cannot find device "nvmf_br" 00:17:26.747 11:47:56 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@170 -- # true 00:17:26.747 11:47:56 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@171 -- # ip link delete nvmf_init_if 00:17:26.747 Cannot find device "nvmf_init_if" 00:17:26.747 11:47:56 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@171 -- # true 00:17:26.747 11:47:56 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@172 -- # ip link delete nvmf_init_if2 00:17:26.747 Cannot find device "nvmf_init_if2" 00:17:26.747 11:47:56 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@172 -- # true 00:17:26.747 11:47:56 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@173 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:17:26.747 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:17:26.747 11:47:56 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@173 -- # true 00:17:26.747 11:47:56 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@174 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:17:26.747 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:17:26.747 11:47:56 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@174 -- # true 00:17:26.747 11:47:56 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@177 -- # ip netns add nvmf_tgt_ns_spdk 00:17:26.747 11:47:56 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@180 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:17:27.007 11:47:56 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@181 -- # ip link add nvmf_init_if2 type 
veth peer name nvmf_init_br2 00:17:27.007 11:47:56 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@182 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:17:27.007 11:47:56 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@183 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:17:27.007 11:47:56 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:17:27.007 11:47:56 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@187 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:17:27.007 11:47:56 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@190 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:17:27.007 11:47:56 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@191 -- # ip addr add 10.0.0.2/24 dev nvmf_init_if2 00:17:27.007 11:47:56 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@192 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if 00:17:27.007 11:47:56 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@193 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2 00:17:27.007 11:47:56 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@196 -- # ip link set nvmf_init_if up 00:17:27.007 11:47:57 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@197 -- # ip link set nvmf_init_if2 up 00:17:27.007 11:47:57 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@198 -- # ip link set nvmf_init_br up 00:17:27.007 11:47:57 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@199 -- # ip link set nvmf_init_br2 up 00:17:27.007 11:47:57 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@200 -- # ip link set nvmf_tgt_br up 00:17:27.007 11:47:57 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@201 -- # ip link set nvmf_tgt_br2 up 00:17:27.007 11:47:57 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@202 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:17:27.007 11:47:57 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@203 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:17:27.007 11:47:57 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@204 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:17:27.007 11:47:57 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@207 -- # ip link add nvmf_br type bridge 00:17:27.007 11:47:57 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@208 -- # ip link set nvmf_br up 00:17:27.007 11:47:57 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@211 -- # ip link set nvmf_init_br master nvmf_br 00:17:27.007 11:47:57 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@212 -- # ip link set nvmf_init_br2 master nvmf_br 00:17:27.007 11:47:57 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@213 -- # ip link set nvmf_tgt_br master nvmf_br 00:17:27.007 11:47:57 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@214 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:17:27.007 11:47:57 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@217 -- # ipts -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:17:27.007 11:47:57 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT' 00:17:27.007 11:47:57 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@218 -- # ipts -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT 00:17:27.007 11:47:57 
nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT' 00:17:27.007 11:47:57 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@219 -- # ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:17:27.007 11:47:57 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@790 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT' 00:17:27.007 11:47:57 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@222 -- # ping -c 1 10.0.0.3 00:17:27.007 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:17:27.007 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.120 ms 00:17:27.007 00:17:27.007 --- 10.0.0.3 ping statistics --- 00:17:27.007 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:17:27.007 rtt min/avg/max/mdev = 0.120/0.120/0.120/0.000 ms 00:17:27.007 11:47:57 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@223 -- # ping -c 1 10.0.0.4 00:17:27.007 PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data. 00:17:27.007 64 bytes from 10.0.0.4: icmp_seq=1 ttl=64 time=0.097 ms 00:17:27.007 00:17:27.007 --- 10.0.0.4 ping statistics --- 00:17:27.007 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:17:27.007 rtt min/avg/max/mdev = 0.097/0.097/0.097/0.000 ms 00:17:27.007 11:47:57 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@224 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:17:27.007 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:17:27.007 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.031 ms 00:17:27.007 00:17:27.008 --- 10.0.0.1 ping statistics --- 00:17:27.008 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:17:27.008 rtt min/avg/max/mdev = 0.031/0.031/0.031/0.000 ms 00:17:27.008 11:47:57 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@225 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.2 00:17:27.268 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:17:27.268 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.084 ms 00:17:27.268 00:17:27.268 --- 10.0.0.2 ping statistics --- 00:17:27.268 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:17:27.268 rtt min/avg/max/mdev = 0.084/0.084/0.084/0.000 ms 00:17:27.268 11:47:57 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@227 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:17:27.268 11:47:57 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@461 -- # return 0 00:17:27.268 11:47:57 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:17:27.268 11:47:57 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:17:27.268 11:47:57 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:17:27.268 11:47:57 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:17:27.268 11:47:57 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:17:27.268 11:47:57 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:17:27.268 11:47:57 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:17:27.268 11:47:57 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- target/fabrics_fuzz.sh@14 -- # nvmfpid=87936 00:17:27.268 11:47:57 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- target/fabrics_fuzz.sh@13 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1 00:17:27.268 11:47:57 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- target/fabrics_fuzz.sh@16 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; killprocess $nvmfpid; nvmftestfini $1; exit 1' SIGINT SIGTERM EXIT 00:17:27.268 11:47:57 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- target/fabrics_fuzz.sh@18 -- # waitforlisten 87936 00:17:27.268 11:47:57 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@835 -- # '[' -z 87936 ']' 00:17:27.268 11:47:57 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:17:27.268 11:47:57 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@840 -- # local max_retries=100 00:17:27.268 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:17:27.268 11:47:57 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
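The nvmf/common.sh@177-@225 trace above is the harness's veth-based test network: it creates the nvmf_tgt_ns_spdk namespace, four veth pairs (two initiator-side, two target-side), bridges the *_br ends on nvmf_br, assigns 10.0.0.1/.2 to the initiator interfaces and 10.0.0.3/.4 to the target interfaces inside the namespace, opens TCP port 4420 in iptables, and confirms reachability with one ping per direction. A condensed sketch of that setup follows; it only restates commands already visible in the trace (it is not part of the logged run) and assumes root privileges plus the iproute2/iptables tools used above.

# Sketch: veth/bridge topology built by nvmf_veth_init, condensed from the trace above
ip netns add nvmf_tgt_ns_spdk                        # target runs in its own network namespace
ip link add nvmf_init_if  type veth peer name nvmf_init_br
ip link add nvmf_init_if2 type veth peer name nvmf_init_br2
ip link add nvmf_tgt_if   type veth peer name nvmf_tgt_br
ip link add nvmf_tgt_if2  type veth peer name nvmf_tgt_br2
ip link set nvmf_tgt_if  netns nvmf_tgt_ns_spdk      # target ends move into the namespace
ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk
ip addr add 10.0.0.1/24 dev nvmf_init_if             # initiator-side addresses
ip addr add 10.0.0.2/24 dev nvmf_init_if2
ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if    # target-side addresses
ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2
for l in nvmf_init_if nvmf_init_if2 nvmf_init_br nvmf_init_br2 nvmf_tgt_br nvmf_tgt_br2; do ip link set "$l" up; done
ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up
ip netns exec nvmf_tgt_ns_spdk ip link set lo up
ip link add nvmf_br type bridge && ip link set nvmf_br up
for l in nvmf_init_br nvmf_init_br2 nvmf_tgt_br nvmf_tgt_br2; do ip link set "$l" master nvmf_br; done
iptables -I INPUT 1 -i nvmf_init_if  -p tcp --dport 4420 -j ACCEPT        # NVMe/TCP listener port
iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT
iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT
ping -c 1 10.0.0.3 && ping -c 1 10.0.0.4                                  # initiator -> target
ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1                         # target -> initiator
ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.2

The fuzz test then launches nvmf_tgt inside that namespace (ip netns exec nvmf_tgt_ns_spdk ... -i 0 -e 0xFFFF -m 0x1), so its NVMe/TCP listener binds to the 10.0.0.3 side of this topology, as seen in the trace that continues below.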
00:17:27.268 11:47:57 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@844 -- # xtrace_disable 00:17:27.268 11:47:57 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@10 -- # set +x 00:17:27.527 11:47:57 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:17:27.527 11:47:57 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@868 -- # return 0 00:17:27.527 11:47:57 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- target/fabrics_fuzz.sh@19 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:17:27.527 11:47:57 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:27.527 11:47:57 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@10 -- # set +x 00:17:27.527 11:47:57 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:27.527 11:47:57 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- target/fabrics_fuzz.sh@21 -- # rpc_cmd bdev_malloc_create -b Malloc0 64 512 00:17:27.527 11:47:57 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:27.527 11:47:57 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@10 -- # set +x 00:17:27.527 Malloc0 00:17:27.527 11:47:57 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:27.527 11:47:57 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- target/fabrics_fuzz.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:17:27.527 11:47:57 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:27.527 11:47:57 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@10 -- # set +x 00:17:27.787 11:47:57 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:27.787 11:47:57 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- target/fabrics_fuzz.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:17:27.787 11:47:57 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:27.787 11:47:57 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@10 -- # set +x 00:17:27.787 11:47:57 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:27.787 11:47:57 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- target/fabrics_fuzz.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 00:17:27.787 11:47:57 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:27.787 11:47:57 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@10 -- # set +x 00:17:27.787 11:47:57 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:27.787 11:47:57 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- target/fabrics_fuzz.sh@27 -- # trid='trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:10.0.0.3 trsvcid:4420' 00:17:27.787 11:47:57 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- target/fabrics_fuzz.sh@30 -- # /home/vagrant/spdk_repo/spdk/test/app/fuzz/nvme_fuzz/nvme_fuzz -m 0x2 -t 30 -S 123456 -F 'trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:10.0.0.3 trsvcid:4420' -N -a 00:17:28.046 Shutting down the fuzz application 00:17:28.046 11:47:57 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- target/fabrics_fuzz.sh@32 -- # /home/vagrant/spdk_repo/spdk/test/app/fuzz/nvme_fuzz/nvme_fuzz -m 0x2 -F 
'trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:10.0.0.3 trsvcid:4420' -j /home/vagrant/spdk_repo/spdk/test/app/fuzz/nvme_fuzz/example.json -a 00:17:28.306 Shutting down the fuzz application 00:17:28.306 11:47:58 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- target/fabrics_fuzz.sh@34 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:17:28.306 11:47:58 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:28.306 11:47:58 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@10 -- # set +x 00:17:28.306 11:47:58 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:28.306 11:47:58 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- target/fabrics_fuzz.sh@36 -- # trap - SIGINT SIGTERM EXIT 00:17:28.306 11:47:58 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- target/fabrics_fuzz.sh@38 -- # nvmftestfini 00:17:28.306 11:47:58 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@516 -- # nvmfcleanup 00:17:28.306 11:47:58 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@121 -- # sync 00:17:28.306 11:47:58 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:17:28.306 11:47:58 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@124 -- # set +e 00:17:28.306 11:47:58 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@125 -- # for i in {1..20} 00:17:28.306 11:47:58 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:17:28.306 rmmod nvme_tcp 00:17:28.306 rmmod nvme_fabrics 00:17:28.306 rmmod nvme_keyring 00:17:28.306 11:47:58 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:17:28.306 11:47:58 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@128 -- # set -e 00:17:28.306 11:47:58 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@129 -- # return 0 00:17:28.306 11:47:58 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@517 -- # '[' -n 87936 ']' 00:17:28.306 11:47:58 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@518 -- # killprocess 87936 00:17:28.306 11:47:58 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@954 -- # '[' -z 87936 ']' 00:17:28.306 11:47:58 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@958 -- # kill -0 87936 00:17:28.306 11:47:58 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@959 -- # uname 00:17:28.306 11:47:58 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:17:28.306 11:47:58 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 87936 00:17:28.306 11:47:58 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:17:28.306 11:47:58 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:17:28.306 11:47:58 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@972 -- # echo 'killing process with pid 87936' 00:17:28.306 killing process with pid 87936 00:17:28.306 11:47:58 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@973 -- # kill 87936 00:17:28.306 11:47:58 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@978 -- # wait 87936 00:17:28.874 11:47:58 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:17:28.874 11:47:58 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:17:28.874 11:47:58 
nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:17:28.874 11:47:58 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@297 -- # iptr 00:17:28.874 11:47:58 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:17:28.874 11:47:58 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@791 -- # iptables-save 00:17:28.874 11:47:58 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@791 -- # iptables-restore 00:17:28.874 11:47:58 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@298 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:17:28.874 11:47:58 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@299 -- # nvmf_veth_fini 00:17:28.874 11:47:58 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@233 -- # ip link set nvmf_init_br nomaster 00:17:28.874 11:47:58 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@234 -- # ip link set nvmf_init_br2 nomaster 00:17:28.874 11:47:58 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@235 -- # ip link set nvmf_tgt_br nomaster 00:17:28.874 11:47:58 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@236 -- # ip link set nvmf_tgt_br2 nomaster 00:17:28.874 11:47:58 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@237 -- # ip link set nvmf_init_br down 00:17:28.874 11:47:58 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@238 -- # ip link set nvmf_init_br2 down 00:17:28.874 11:47:58 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@239 -- # ip link set nvmf_tgt_br down 00:17:28.874 11:47:58 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@240 -- # ip link set nvmf_tgt_br2 down 00:17:28.874 11:47:58 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@241 -- # ip link delete nvmf_br type bridge 00:17:28.874 11:47:58 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@242 -- # ip link delete nvmf_init_if 00:17:28.874 11:47:58 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@243 -- # ip link delete nvmf_init_if2 00:17:28.874 11:47:58 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@244 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:17:28.874 11:47:58 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@245 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:17:28.874 11:47:58 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@246 -- # remove_spdk_ns 00:17:28.874 11:47:58 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:17:28.874 11:47:58 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:17:28.874 11:47:58 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:17:28.874 11:47:58 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@300 -- # return 0 00:17:28.874 11:47:58 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- target/fabrics_fuzz.sh@39 -- # rm /home/vagrant/spdk_repo/spdk/../output/nvmf_fuzz_logs1.txt /home/vagrant/spdk_repo/spdk/../output/nvmf_fuzz_logs2.txt 00:17:28.874 00:17:28.874 real 0m2.503s 00:17:28.874 user 0m2.065s 00:17:28.874 sys 0m0.811s 00:17:28.874 11:47:58 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@1130 -- # xtrace_disable 00:17:28.874 11:47:58 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@10 -- # set +x 00:17:28.874 ************************************ 00:17:28.874 END TEST nvmf_fuzz 00:17:28.874 ************************************ 00:17:29.135 11:47:59 
nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@49 -- # run_test nvmf_multiconnection /home/vagrant/spdk_repo/spdk/test/nvmf/target/multiconnection.sh --transport=tcp 00:17:29.135 11:47:59 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:17:29.135 11:47:59 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:17:29.135 11:47:59 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:17:29.135 ************************************ 00:17:29.135 START TEST nvmf_multiconnection 00:17:29.135 ************************************ 00:17:29.135 11:47:59 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/multiconnection.sh --transport=tcp 00:17:29.135 * Looking for test storage... 00:17:29.135 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:17:29.135 11:47:59 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:17:29.135 11:47:59 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1693 -- # lcov --version 00:17:29.135 11:47:59 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:17:29.135 11:47:59 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:17:29.135 11:47:59 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:17:29.135 11:47:59 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- scripts/common.sh@333 -- # local ver1 ver1_l 00:17:29.135 11:47:59 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- scripts/common.sh@334 -- # local ver2 ver2_l 00:17:29.135 11:47:59 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- scripts/common.sh@336 -- # IFS=.-: 00:17:29.135 11:47:59 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- scripts/common.sh@336 -- # read -ra ver1 00:17:29.135 11:47:59 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- scripts/common.sh@337 -- # IFS=.-: 00:17:29.135 11:47:59 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- scripts/common.sh@337 -- # read -ra ver2 00:17:29.135 11:47:59 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- scripts/common.sh@338 -- # local 'op=<' 00:17:29.135 11:47:59 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- scripts/common.sh@340 -- # ver1_l=2 00:17:29.135 11:47:59 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- scripts/common.sh@341 -- # ver2_l=1 00:17:29.135 11:47:59 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:17:29.135 11:47:59 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- scripts/common.sh@344 -- # case "$op" in 00:17:29.135 11:47:59 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- scripts/common.sh@345 -- # : 1 00:17:29.135 11:47:59 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- scripts/common.sh@364 -- # (( v = 0 )) 00:17:29.135 11:47:59 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:17:29.135 11:47:59 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- scripts/common.sh@365 -- # decimal 1 00:17:29.135 11:47:59 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- scripts/common.sh@353 -- # local d=1 00:17:29.135 11:47:59 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:17:29.135 11:47:59 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- scripts/common.sh@355 -- # echo 1 00:17:29.135 11:47:59 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- scripts/common.sh@365 -- # ver1[v]=1 00:17:29.135 11:47:59 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- scripts/common.sh@366 -- # decimal 2 00:17:29.135 11:47:59 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- scripts/common.sh@353 -- # local d=2 00:17:29.135 11:47:59 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:17:29.135 11:47:59 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- scripts/common.sh@355 -- # echo 2 00:17:29.135 11:47:59 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- scripts/common.sh@366 -- # ver2[v]=2 00:17:29.135 11:47:59 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:17:29.135 11:47:59 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:17:29.135 11:47:59 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- scripts/common.sh@368 -- # return 0 00:17:29.135 11:47:59 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:17:29.135 11:47:59 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:17:29.135 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:29.135 --rc genhtml_branch_coverage=1 00:17:29.135 --rc genhtml_function_coverage=1 00:17:29.135 --rc genhtml_legend=1 00:17:29.135 --rc geninfo_all_blocks=1 00:17:29.135 --rc geninfo_unexecuted_blocks=1 00:17:29.135 00:17:29.135 ' 00:17:29.135 11:47:59 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:17:29.135 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:29.135 --rc genhtml_branch_coverage=1 00:17:29.135 --rc genhtml_function_coverage=1 00:17:29.135 --rc genhtml_legend=1 00:17:29.135 --rc geninfo_all_blocks=1 00:17:29.135 --rc geninfo_unexecuted_blocks=1 00:17:29.135 00:17:29.135 ' 00:17:29.135 11:47:59 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:17:29.135 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:29.135 --rc genhtml_branch_coverage=1 00:17:29.135 --rc genhtml_function_coverage=1 00:17:29.135 --rc genhtml_legend=1 00:17:29.135 --rc geninfo_all_blocks=1 00:17:29.135 --rc geninfo_unexecuted_blocks=1 00:17:29.135 00:17:29.135 ' 00:17:29.135 11:47:59 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:17:29.135 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:29.135 --rc genhtml_branch_coverage=1 00:17:29.135 --rc genhtml_function_coverage=1 00:17:29.135 --rc genhtml_legend=1 00:17:29.135 --rc geninfo_all_blocks=1 00:17:29.135 --rc geninfo_unexecuted_blocks=1 00:17:29.135 00:17:29.135 ' 00:17:29.135 11:47:59 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@9 -- # source 
/home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:17:29.135 11:47:59 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@7 -- # uname -s 00:17:29.135 11:47:59 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:17:29.135 11:47:59 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:17:29.135 11:47:59 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:17:29.135 11:47:59 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:17:29.135 11:47:59 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:17:29.135 11:47:59 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:17:29.135 11:47:59 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:17:29.135 11:47:59 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:17:29.135 11:47:59 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:17:29.135 11:47:59 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:17:29.135 11:47:59 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:f820f793-c892-4aa4-a8a4-5ed3fda41d6c 00:17:29.136 11:47:59 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@18 -- # NVME_HOSTID=f820f793-c892-4aa4-a8a4-5ed3fda41d6c 00:17:29.136 11:47:59 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:17:29.136 11:47:59 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:17:29.136 11:47:59 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:17:29.136 11:47:59 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:17:29.136 11:47:59 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:17:29.136 11:47:59 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- scripts/common.sh@15 -- # shopt -s extglob 00:17:29.136 11:47:59 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:17:29.136 11:47:59 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:17:29.136 11:47:59 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:17:29.136 11:47:59 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:29.136 
11:47:59 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:29.136 11:47:59 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:29.136 11:47:59 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- paths/export.sh@5 -- # export PATH 00:17:29.136 11:47:59 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:29.136 11:47:59 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@51 -- # : 0 00:17:29.136 11:47:59 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:17:29.136 11:47:59 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:17:29.136 11:47:59 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:17:29.136 11:47:59 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:17:29.136 11:47:59 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:17:29.136 11:47:59 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:17:29.136 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:17:29.136 11:47:59 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:17:29.136 11:47:59 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:17:29.136 11:47:59 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@55 -- # have_pci_nics=0 00:17:29.136 11:47:59 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@11 -- # 
MALLOC_BDEV_SIZE=64 00:17:29.136 11:47:59 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:17:29.136 11:47:59 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@14 -- # NVMF_SUBSYS=11 00:17:29.136 11:47:59 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@16 -- # nvmftestinit 00:17:29.136 11:47:59 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:17:29.136 11:47:59 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:17:29.136 11:47:59 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@476 -- # prepare_net_devs 00:17:29.136 11:47:59 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@438 -- # local -g is_hw=no 00:17:29.136 11:47:59 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@440 -- # remove_spdk_ns 00:17:29.136 11:47:59 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:17:29.136 11:47:59 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:17:29.136 11:47:59 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:17:29.395 11:47:59 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@442 -- # [[ virt != virt ]] 00:17:29.395 11:47:59 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@444 -- # [[ no == yes ]] 00:17:29.395 11:47:59 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@451 -- # [[ virt == phy ]] 00:17:29.395 11:47:59 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@454 -- # [[ virt == phy-fallback ]] 00:17:29.395 11:47:59 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@459 -- # [[ tcp == tcp ]] 00:17:29.395 11:47:59 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@460 -- # nvmf_veth_init 00:17:29.395 11:47:59 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@145 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:17:29.395 11:47:59 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@146 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.2 00:17:29.395 11:47:59 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@147 -- # NVMF_FIRST_TARGET_IP=10.0.0.3 00:17:29.395 11:47:59 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@148 -- # NVMF_SECOND_TARGET_IP=10.0.0.4 00:17:29.395 11:47:59 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@149 -- # NVMF_INITIATOR_IP=10.0.0.1 00:17:29.395 11:47:59 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@150 -- # NVMF_BRIDGE=nvmf_br 00:17:29.395 11:47:59 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@151 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:17:29.395 11:47:59 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@152 -- # NVMF_INITIATOR_INTERFACE2=nvmf_init_if2 00:17:29.395 11:47:59 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@153 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:17:29.395 11:47:59 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@154 -- # NVMF_INITIATOR_BRIDGE2=nvmf_init_br2 00:17:29.395 11:47:59 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@155 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:17:29.395 11:47:59 
nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@156 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:17:29.395 11:47:59 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@157 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:17:29.395 11:47:59 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@158 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:17:29.395 11:47:59 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@159 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:17:29.395 11:47:59 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@160 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:17:29.395 11:47:59 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@162 -- # ip link set nvmf_init_br nomaster 00:17:29.395 Cannot find device "nvmf_init_br" 00:17:29.395 11:47:59 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@162 -- # true 00:17:29.395 11:47:59 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@163 -- # ip link set nvmf_init_br2 nomaster 00:17:29.395 Cannot find device "nvmf_init_br2" 00:17:29.395 11:47:59 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@163 -- # true 00:17:29.395 11:47:59 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@164 -- # ip link set nvmf_tgt_br nomaster 00:17:29.395 Cannot find device "nvmf_tgt_br" 00:17:29.395 11:47:59 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@164 -- # true 00:17:29.395 11:47:59 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@165 -- # ip link set nvmf_tgt_br2 nomaster 00:17:29.395 Cannot find device "nvmf_tgt_br2" 00:17:29.395 11:47:59 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@165 -- # true 00:17:29.395 11:47:59 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@166 -- # ip link set nvmf_init_br down 00:17:29.395 Cannot find device "nvmf_init_br" 00:17:29.395 11:47:59 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@166 -- # true 00:17:29.395 11:47:59 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@167 -- # ip link set nvmf_init_br2 down 00:17:29.395 Cannot find device "nvmf_init_br2" 00:17:29.395 11:47:59 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@167 -- # true 00:17:29.395 11:47:59 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@168 -- # ip link set nvmf_tgt_br down 00:17:29.395 Cannot find device "nvmf_tgt_br" 00:17:29.395 11:47:59 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@168 -- # true 00:17:29.395 11:47:59 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@169 -- # ip link set nvmf_tgt_br2 down 00:17:29.395 Cannot find device "nvmf_tgt_br2" 00:17:29.395 11:47:59 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@169 -- # true 00:17:29.396 11:47:59 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@170 -- # ip link delete nvmf_br type bridge 00:17:29.396 Cannot find device "nvmf_br" 00:17:29.396 11:47:59 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@170 -- # true 00:17:29.396 11:47:59 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@171 -- # ip link delete nvmf_init_if 00:17:29.396 Cannot find device "nvmf_init_if" 00:17:29.396 11:47:59 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@171 -- # true 00:17:29.396 11:47:59 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@172 -- # ip link delete 
nvmf_init_if2 00:17:29.396 Cannot find device "nvmf_init_if2" 00:17:29.396 11:47:59 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@172 -- # true 00:17:29.396 11:47:59 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@173 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:17:29.396 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:17:29.396 11:47:59 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@173 -- # true 00:17:29.396 11:47:59 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@174 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:17:29.396 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:17:29.396 11:47:59 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@174 -- # true 00:17:29.396 11:47:59 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@177 -- # ip netns add nvmf_tgt_ns_spdk 00:17:29.396 11:47:59 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@180 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:17:29.396 11:47:59 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@181 -- # ip link add nvmf_init_if2 type veth peer name nvmf_init_br2 00:17:29.396 11:47:59 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@182 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:17:29.396 11:47:59 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@183 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:17:29.396 11:47:59 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:17:29.396 11:47:59 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@187 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:17:29.396 11:47:59 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@190 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:17:29.396 11:47:59 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@191 -- # ip addr add 10.0.0.2/24 dev nvmf_init_if2 00:17:29.396 11:47:59 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@192 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if 00:17:29.396 11:47:59 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@193 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2 00:17:29.396 11:47:59 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@196 -- # ip link set nvmf_init_if up 00:17:29.396 11:47:59 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@197 -- # ip link set nvmf_init_if2 up 00:17:29.396 11:47:59 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@198 -- # ip link set nvmf_init_br up 00:17:29.396 11:47:59 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@199 -- # ip link set nvmf_init_br2 up 00:17:29.396 11:47:59 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@200 -- # ip link set nvmf_tgt_br up 00:17:29.396 11:47:59 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@201 -- # ip link set nvmf_tgt_br2 up 00:17:29.655 11:47:59 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@202 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:17:29.655 11:47:59 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@203 -- # ip netns exec nvmf_tgt_ns_spdk ip link set 
nvmf_tgt_if2 up 00:17:29.655 11:47:59 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@204 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:17:29.655 11:47:59 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@207 -- # ip link add nvmf_br type bridge 00:17:29.655 11:47:59 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@208 -- # ip link set nvmf_br up 00:17:29.655 11:47:59 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@211 -- # ip link set nvmf_init_br master nvmf_br 00:17:29.655 11:47:59 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@212 -- # ip link set nvmf_init_br2 master nvmf_br 00:17:29.655 11:47:59 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@213 -- # ip link set nvmf_tgt_br master nvmf_br 00:17:29.655 11:47:59 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@214 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:17:29.655 11:47:59 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@217 -- # ipts -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:17:29.655 11:47:59 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT' 00:17:29.655 11:47:59 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@218 -- # ipts -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT 00:17:29.655 11:47:59 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT' 00:17:29.655 11:47:59 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@219 -- # ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:17:29.655 11:47:59 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@790 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT' 00:17:29.655 11:47:59 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@222 -- # ping -c 1 10.0.0.3 00:17:29.655 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:17:29.655 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.078 ms 00:17:29.655 00:17:29.655 --- 10.0.0.3 ping statistics --- 00:17:29.655 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:17:29.655 rtt min/avg/max/mdev = 0.078/0.078/0.078/0.000 ms 00:17:29.655 11:47:59 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@223 -- # ping -c 1 10.0.0.4 00:17:29.655 PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data. 00:17:29.655 64 bytes from 10.0.0.4: icmp_seq=1 ttl=64 time=0.080 ms 00:17:29.655 00:17:29.655 --- 10.0.0.4 ping statistics --- 00:17:29.655 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:17:29.655 rtt min/avg/max/mdev = 0.080/0.080/0.080/0.000 ms 00:17:29.655 11:47:59 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@224 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:17:29.655 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:17:29.655 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.035 ms 00:17:29.655 00:17:29.655 --- 10.0.0.1 ping statistics --- 00:17:29.655 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:17:29.655 rtt min/avg/max/mdev = 0.035/0.035/0.035/0.000 ms 00:17:29.655 11:47:59 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@225 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.2 00:17:29.655 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:17:29.655 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.063 ms 00:17:29.655 00:17:29.655 --- 10.0.0.2 ping statistics --- 00:17:29.655 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:17:29.655 rtt min/avg/max/mdev = 0.063/0.063/0.063/0.000 ms 00:17:29.655 11:47:59 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@227 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:17:29.655 11:47:59 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@461 -- # return 0 00:17:29.655 11:47:59 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:17:29.655 11:47:59 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:17:29.655 11:47:59 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:17:29.655 11:47:59 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:17:29.655 11:47:59 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:17:29.655 11:47:59 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:17:29.655 11:47:59 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:17:29.655 11:47:59 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@17 -- # nvmfappstart -m 0xF 00:17:29.655 11:47:59 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:17:29.655 11:47:59 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@726 -- # xtrace_disable 00:17:29.655 11:47:59 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:17:29.655 11:47:59 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@509 -- # nvmfpid=88175 00:17:29.655 11:47:59 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@510 -- # waitforlisten 88175 00:17:29.655 11:47:59 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@508 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:17:29.655 11:47:59 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@835 -- # '[' -z 88175 ']' 00:17:29.655 11:47:59 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:17:29.655 11:47:59 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@840 -- # local max_retries=100 00:17:29.655 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:17:29.655 11:47:59 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
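At this point the multiconnection test has rebuilt the same topology and started nvmf_tgt inside nvmf_tgt_ns_spdk with core mask 0xF (pid 88175), waiting on /var/tmp/spdk.sock. The trace that follows (multiconnection.sh@19-@25) creates one TCP transport and then loops over NVMF_SUBSYS=11 subsystems, backing each with a 64 MB malloc bdev and a listener on 10.0.0.3:4420. A condensed sketch of that per-subsystem RPC sequence is below; it mirrors the rpc_cmd calls in the trace but invokes SPDK's scripts/rpc.py directly as a stand-in for the harness's rpc_cmd wrapper, with the script path and default RPC socket assumed here rather than taken from the log.

# Sketch: subsystem setup issued by multiconnection.sh, condensed from the trace that follows
RPC="/home/vagrant/spdk_repo/spdk/scripts/rpc.py"      # assumed stand-in for the harness's rpc_cmd
"$RPC" nvmf_create_transport -t tcp -o -u 8192         # transport options copied from the trace
for i in $(seq 1 11); do                               # NVMF_SUBSYS=11
    "$RPC" bdev_malloc_create 64 512 -b "Malloc$i"     # 64 MB bdev, 512-byte blocks
    "$RPC" nvmf_create_subsystem "nqn.2016-06.io.spdk:cnode$i" -a -s "SPDK$i"
    "$RPC" nvmf_subsystem_add_ns "nqn.2016-06.io.spdk:cnode$i" "Malloc$i"
    "$RPC" nvmf_subsystem_add_listener "nqn.2016-06.io.spdk:cnode$i" -t tcp -a 10.0.0.3 -s 4420
done

Initiators in the parent namespace can then connect to each of the eleven subsystems across the bridge at 10.0.0.3:4420, which is what the rest of this test exercises.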
00:17:29.655 11:47:59 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@844 -- # xtrace_disable 00:17:29.655 11:47:59 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:17:29.656 [2024-11-28 11:47:59.750463] Starting SPDK v25.01-pre git sha1 35cd3e84d / DPDK 24.11.0-rc4 initialization... 00:17:29.656 [2024-11-28 11:47:59.750562] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:17:29.915 [2024-11-28 11:47:59.878948] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.11.0-rc4 is used. There is no support for it in SPDK. Enabled only for validation. 00:17:29.915 [2024-11-28 11:47:59.904212] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:17:29.915 [2024-11-28 11:47:59.951337] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:17:29.915 [2024-11-28 11:47:59.951729] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:17:29.915 [2024-11-28 11:47:59.951916] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:17:29.915 [2024-11-28 11:47:59.951976] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:17:29.915 [2024-11-28 11:47:59.952118] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:17:29.915 [2024-11-28 11:47:59.953542] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:17:29.915 [2024-11-28 11:47:59.953668] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:17:29.915 [2024-11-28 11:47:59.954377] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:17:29.915 [2024-11-28 11:47:59.954389] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:17:29.915 [2024-11-28 11:48:00.026613] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:17:30.174 11:48:00 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:17:30.174 11:48:00 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@868 -- # return 0 00:17:30.174 11:48:00 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:17:30.174 11:48:00 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@732 -- # xtrace_disable 00:17:30.174 11:48:00 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:17:30.174 11:48:00 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:17:30.174 11:48:00 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@19 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:17:30.174 11:48:00 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:30.174 11:48:00 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:17:30.174 [2024-11-28 11:48:00.150957] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:17:30.174 11:48:00 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@591 -- # 
[[ 0 == 0 ]] 00:17:30.174 11:48:00 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@21 -- # seq 1 11 00:17:30.175 11:48:00 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@21 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:17:30.175 11:48:00 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc1 00:17:30.175 11:48:00 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:30.175 11:48:00 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:17:30.175 Malloc1 00:17:30.175 11:48:00 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:30.175 11:48:00 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK1 00:17:30.175 11:48:00 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:30.175 11:48:00 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:17:30.175 11:48:00 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:30.175 11:48:00 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:17:30.175 11:48:00 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:30.175 11:48:00 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:17:30.175 11:48:00 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:30.175 11:48:00 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 00:17:30.175 11:48:00 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:30.175 11:48:00 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:17:30.175 [2024-11-28 11:48:00.235512] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:17:30.175 11:48:00 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:30.175 11:48:00 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@21 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:17:30.175 11:48:00 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc2 00:17:30.175 11:48:00 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:30.175 11:48:00 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:17:30.175 Malloc2 00:17:30.175 11:48:00 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:30.175 11:48:00 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 -a -s SPDK2 00:17:30.175 11:48:00 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:30.175 11:48:00 
nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:17:30.175 11:48:00 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:30.175 11:48:00 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 Malloc2 00:17:30.175 11:48:00 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:30.175 11:48:00 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:17:30.175 11:48:00 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:30.175 11:48:00 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.3 -s 4420 00:17:30.175 11:48:00 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:30.175 11:48:00 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:17:30.175 11:48:00 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:30.175 11:48:00 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@21 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:17:30.175 11:48:00 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc3 00:17:30.175 11:48:00 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:30.175 11:48:00 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:17:30.477 Malloc3 00:17:30.477 11:48:00 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:30.477 11:48:00 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode3 -a -s SPDK3 00:17:30.477 11:48:00 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:30.477 11:48:00 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:17:30.477 11:48:00 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:30.477 11:48:00 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode3 Malloc3 00:17:30.477 11:48:00 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:30.477 11:48:00 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:17:30.477 11:48:00 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:30.477 11:48:00 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode3 -t tcp -a 10.0.0.3 -s 4420 00:17:30.477 11:48:00 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:30.477 11:48:00 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:17:30.477 11:48:00 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:30.477 11:48:00 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@21 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:17:30.477 11:48:00 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc4 00:17:30.477 11:48:00 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:30.477 11:48:00 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:17:30.477 Malloc4 00:17:30.477 11:48:00 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:30.477 11:48:00 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode4 -a -s SPDK4 00:17:30.477 11:48:00 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:30.477 11:48:00 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:17:30.477 11:48:00 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:30.477 11:48:00 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode4 Malloc4 00:17:30.477 11:48:00 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:30.477 11:48:00 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:17:30.477 11:48:00 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:30.477 11:48:00 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode4 -t tcp -a 10.0.0.3 -s 4420 00:17:30.477 11:48:00 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:30.477 11:48:00 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:17:30.477 11:48:00 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:30.477 11:48:00 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@21 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:17:30.477 11:48:00 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc5 00:17:30.477 11:48:00 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:30.477 11:48:00 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:17:30.477 Malloc5 00:17:30.477 11:48:00 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:30.477 11:48:00 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode5 -a -s SPDK5 00:17:30.477 11:48:00 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:30.477 11:48:00 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:17:30.477 11:48:00 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:30.477 
11:48:00 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode5 Malloc5 00:17:30.477 11:48:00 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:30.477 11:48:00 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:17:30.477 11:48:00 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:30.477 11:48:00 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode5 -t tcp -a 10.0.0.3 -s 4420 00:17:30.477 11:48:00 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:30.477 11:48:00 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:17:30.477 11:48:00 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:30.477 11:48:00 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@21 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:17:30.477 11:48:00 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc6 00:17:30.477 11:48:00 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:30.477 11:48:00 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:17:30.477 Malloc6 00:17:30.477 11:48:00 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:30.477 11:48:00 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode6 -a -s SPDK6 00:17:30.477 11:48:00 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:30.477 11:48:00 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:17:30.477 11:48:00 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:30.477 11:48:00 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode6 Malloc6 00:17:30.477 11:48:00 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:30.477 11:48:00 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:17:30.477 11:48:00 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:30.477 11:48:00 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode6 -t tcp -a 10.0.0.3 -s 4420 00:17:30.477 11:48:00 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:30.477 11:48:00 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:17:30.477 11:48:00 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:30.477 11:48:00 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@21 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:17:30.477 11:48:00 
nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc7 00:17:30.477 11:48:00 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:30.477 11:48:00 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:17:30.477 Malloc7 00:17:30.477 11:48:00 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:30.477 11:48:00 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode7 -a -s SPDK7 00:17:30.478 11:48:00 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:30.478 11:48:00 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:17:30.738 11:48:00 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:30.738 11:48:00 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode7 Malloc7 00:17:30.738 11:48:00 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:30.738 11:48:00 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:17:30.738 11:48:00 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:30.738 11:48:00 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode7 -t tcp -a 10.0.0.3 -s 4420 00:17:30.738 11:48:00 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:30.738 11:48:00 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:17:30.738 11:48:00 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:30.738 11:48:00 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@21 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:17:30.738 11:48:00 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc8 00:17:30.738 11:48:00 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:30.738 11:48:00 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:17:30.738 Malloc8 00:17:30.738 11:48:00 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:30.738 11:48:00 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode8 -a -s SPDK8 00:17:30.738 11:48:00 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:30.738 11:48:00 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:17:30.738 11:48:00 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:30.738 11:48:00 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode8 Malloc8 00:17:30.738 11:48:00 
nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:30.738 11:48:00 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:17:30.738 11:48:00 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:30.738 11:48:00 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode8 -t tcp -a 10.0.0.3 -s 4420 00:17:30.738 11:48:00 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:30.738 11:48:00 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:17:30.738 11:48:00 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:30.738 11:48:00 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@21 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:17:30.738 11:48:00 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc9 00:17:30.738 11:48:00 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:30.738 11:48:00 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:17:30.738 Malloc9 00:17:30.738 11:48:00 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:30.738 11:48:00 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode9 -a -s SPDK9 00:17:30.738 11:48:00 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:30.738 11:48:00 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:17:30.738 11:48:00 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:30.738 11:48:00 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode9 Malloc9 00:17:30.738 11:48:00 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:30.738 11:48:00 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:17:30.738 11:48:00 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:30.738 11:48:00 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode9 -t tcp -a 10.0.0.3 -s 4420 00:17:30.738 11:48:00 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:30.738 11:48:00 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:17:30.738 11:48:00 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:30.738 11:48:00 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@21 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:17:30.738 11:48:00 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc10 00:17:30.738 11:48:00 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:17:30.738 11:48:00 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:17:30.738 Malloc10 00:17:30.738 11:48:00 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:30.738 11:48:00 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode10 -a -s SPDK10 00:17:30.738 11:48:00 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:30.738 11:48:00 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:17:30.738 11:48:00 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:30.738 11:48:00 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode10 Malloc10 00:17:30.738 11:48:00 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:30.738 11:48:00 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:17:30.738 11:48:00 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:30.738 11:48:00 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode10 -t tcp -a 10.0.0.3 -s 4420 00:17:30.738 11:48:00 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:30.738 11:48:00 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:17:30.738 11:48:00 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:30.738 11:48:00 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@21 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:17:30.738 11:48:00 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc11 00:17:30.739 11:48:00 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:30.739 11:48:00 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:17:30.739 Malloc11 00:17:30.739 11:48:00 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:30.739 11:48:00 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode11 -a -s SPDK11 00:17:30.739 11:48:00 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:30.739 11:48:00 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:17:30.739 11:48:00 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:30.739 11:48:00 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode11 Malloc11 00:17:30.739 11:48:00 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:30.739 11:48:00 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 
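[editor's aside — not part of the captured trace] The trace above repeats the same four-RPC target-side setup for cnode1 through cnode11. A minimal standalone sketch of one such loop is given below; it assumes an SPDK target is already running with a TCP transport created, and that scripts/rpc.py from the SPDK repo is invoked directly rather than through the test suite's rpc_cmd helper.

# Hypothetical equivalent of the traced per-subsystem setup (i = 1..11):
for i in $(seq 1 11); do
  scripts/rpc.py bdev_malloc_create 64 512 -b "Malloc$i"                                   # 64 MiB malloc bdev, 512 B blocks
  scripts/rpc.py nvmf_create_subsystem "nqn.2016-06.io.spdk:cnode$i" -a -s "SPDK$i"        # allow any host, serial SPDK$i
  scripts/rpc.py nvmf_subsystem_add_ns "nqn.2016-06.io.spdk:cnode$i" "Malloc$i"            # expose the bdev as namespace 1
  scripts/rpc.py nvmf_subsystem_add_listener "nqn.2016-06.io.spdk:cnode$i" \
    -t tcp -a 10.0.0.3 -s 4420                                                             # TCP listener, same addr/port for all
done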
00:17:30.739 11:48:00 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:30.739 11:48:00 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode11 -t tcp -a 10.0.0.3 -s 4420 00:17:30.739 11:48:00 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:30.739 11:48:00 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:17:30.739 11:48:00 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:30.739 11:48:00 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@28 -- # seq 1 11 00:17:30.739 11:48:00 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@28 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:17:30.739 11:48:00 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@29 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:f820f793-c892-4aa4-a8a4-5ed3fda41d6c --hostid=f820f793-c892-4aa4-a8a4-5ed3fda41d6c -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.3 -s 4420 00:17:30.997 11:48:00 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@30 -- # waitforserial SPDK1 00:17:30.997 11:48:00 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1202 -- # local i=0 00:17:30.997 11:48:00 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0 00:17:30.997 11:48:00 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1204 -- # [[ -n '' ]] 00:17:30.997 11:48:00 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1209 -- # sleep 2 00:17:32.901 11:48:02 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 00:17:32.901 11:48:02 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL 00:17:32.901 11:48:02 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1211 -- # grep -c SPDK1 00:17:32.901 11:48:03 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1211 -- # nvme_devices=1 00:17:32.901 11:48:03 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter )) 00:17:32.901 11:48:03 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1212 -- # return 0 00:17:32.901 11:48:03 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@28 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:17:32.901 11:48:03 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@29 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:f820f793-c892-4aa4-a8a4-5ed3fda41d6c --hostid=f820f793-c892-4aa4-a8a4-5ed3fda41d6c -t tcp -n nqn.2016-06.io.spdk:cnode2 -a 10.0.0.3 -s 4420 00:17:33.160 11:48:03 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@30 -- # waitforserial SPDK2 00:17:33.160 11:48:03 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1202 -- # local i=0 00:17:33.160 11:48:03 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0 00:17:33.160 11:48:03 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- 
common/autotest_common.sh@1204 -- # [[ -n '' ]] 00:17:33.160 11:48:03 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1209 -- # sleep 2 00:17:35.064 11:48:05 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 00:17:35.064 11:48:05 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL 00:17:35.064 11:48:05 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1211 -- # grep -c SPDK2 00:17:35.064 11:48:05 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1211 -- # nvme_devices=1 00:17:35.064 11:48:05 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter )) 00:17:35.064 11:48:05 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1212 -- # return 0 00:17:35.064 11:48:05 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@28 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:17:35.064 11:48:05 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@29 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:f820f793-c892-4aa4-a8a4-5ed3fda41d6c --hostid=f820f793-c892-4aa4-a8a4-5ed3fda41d6c -t tcp -n nqn.2016-06.io.spdk:cnode3 -a 10.0.0.3 -s 4420 00:17:35.322 11:48:05 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@30 -- # waitforserial SPDK3 00:17:35.322 11:48:05 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1202 -- # local i=0 00:17:35.322 11:48:05 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0 00:17:35.322 11:48:05 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1204 -- # [[ -n '' ]] 00:17:35.322 11:48:05 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1209 -- # sleep 2 00:17:37.226 11:48:07 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 00:17:37.226 11:48:07 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL 00:17:37.226 11:48:07 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1211 -- # grep -c SPDK3 00:17:37.226 11:48:07 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1211 -- # nvme_devices=1 00:17:37.226 11:48:07 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter )) 00:17:37.226 11:48:07 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1212 -- # return 0 00:17:37.226 11:48:07 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@28 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:17:37.226 11:48:07 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@29 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:f820f793-c892-4aa4-a8a4-5ed3fda41d6c --hostid=f820f793-c892-4aa4-a8a4-5ed3fda41d6c -t tcp -n nqn.2016-06.io.spdk:cnode4 -a 10.0.0.3 -s 4420 00:17:37.485 11:48:07 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@30 -- # waitforserial SPDK4 00:17:37.485 11:48:07 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1202 -- # local i=0 00:17:37.485 11:48:07 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- 
common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0 00:17:37.485 11:48:07 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1204 -- # [[ -n '' ]] 00:17:37.485 11:48:07 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1209 -- # sleep 2 00:17:39.391 11:48:09 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 00:17:39.391 11:48:09 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1211 -- # grep -c SPDK4 00:17:39.391 11:48:09 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL 00:17:39.391 11:48:09 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1211 -- # nvme_devices=1 00:17:39.391 11:48:09 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter )) 00:17:39.391 11:48:09 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1212 -- # return 0 00:17:39.391 11:48:09 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@28 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:17:39.391 11:48:09 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@29 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:f820f793-c892-4aa4-a8a4-5ed3fda41d6c --hostid=f820f793-c892-4aa4-a8a4-5ed3fda41d6c -t tcp -n nqn.2016-06.io.spdk:cnode5 -a 10.0.0.3 -s 4420 00:17:39.650 11:48:09 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@30 -- # waitforserial SPDK5 00:17:39.650 11:48:09 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1202 -- # local i=0 00:17:39.650 11:48:09 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0 00:17:39.650 11:48:09 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1204 -- # [[ -n '' ]] 00:17:39.650 11:48:09 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1209 -- # sleep 2 00:17:41.559 11:48:11 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 00:17:41.559 11:48:11 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1211 -- # grep -c SPDK5 00:17:41.559 11:48:11 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL 00:17:41.559 11:48:11 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1211 -- # nvme_devices=1 00:17:41.559 11:48:11 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter )) 00:17:41.559 11:48:11 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1212 -- # return 0 00:17:41.559 11:48:11 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@28 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:17:41.559 11:48:11 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@29 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:f820f793-c892-4aa4-a8a4-5ed3fda41d6c --hostid=f820f793-c892-4aa4-a8a4-5ed3fda41d6c -t tcp -n nqn.2016-06.io.spdk:cnode6 -a 10.0.0.3 -s 4420 00:17:41.816 11:48:11 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@30 -- # waitforserial SPDK6 00:17:41.816 11:48:11 
nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1202 -- # local i=0 00:17:41.816 11:48:11 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0 00:17:41.817 11:48:11 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1204 -- # [[ -n '' ]] 00:17:41.817 11:48:11 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1209 -- # sleep 2 00:17:43.717 11:48:13 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 00:17:43.717 11:48:13 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL 00:17:43.717 11:48:13 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1211 -- # grep -c SPDK6 00:17:43.717 11:48:13 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1211 -- # nvme_devices=1 00:17:43.717 11:48:13 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter )) 00:17:43.717 11:48:13 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1212 -- # return 0 00:17:43.717 11:48:13 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@28 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:17:43.717 11:48:13 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@29 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:f820f793-c892-4aa4-a8a4-5ed3fda41d6c --hostid=f820f793-c892-4aa4-a8a4-5ed3fda41d6c -t tcp -n nqn.2016-06.io.spdk:cnode7 -a 10.0.0.3 -s 4420 00:17:43.976 11:48:13 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@30 -- # waitforserial SPDK7 00:17:43.976 11:48:13 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1202 -- # local i=0 00:17:43.976 11:48:13 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0 00:17:43.976 11:48:13 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1204 -- # [[ -n '' ]] 00:17:43.976 11:48:13 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1209 -- # sleep 2 00:17:45.875 11:48:15 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 00:17:45.875 11:48:15 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL 00:17:45.875 11:48:15 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1211 -- # grep -c SPDK7 00:17:46.134 11:48:16 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1211 -- # nvme_devices=1 00:17:46.134 11:48:16 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter )) 00:17:46.134 11:48:16 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1212 -- # return 0 00:17:46.134 11:48:16 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@28 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:17:46.134 11:48:16 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@29 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:f820f793-c892-4aa4-a8a4-5ed3fda41d6c --hostid=f820f793-c892-4aa4-a8a4-5ed3fda41d6c -t tcp -n nqn.2016-06.io.spdk:cnode8 -a 10.0.0.3 -s 4420 
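[editor's aside — not part of the captured trace] On the initiator side, each traced connect is followed by a short poll (waitforserial) until the new namespace appears in lsblk output. A rough sketch of that connect-and-wait pattern is shown below, assuming nvme-cli is installed and that the serial number SPDK$i uniquely identifies the attached device; the host NQN/ID values are taken verbatim from the trace.

# Hypothetical host-side loop matching the traced commands:
for i in $(seq 1 11); do
  nvme connect -t tcp -n "nqn.2016-06.io.spdk:cnode$i" -a 10.0.0.3 -s 4420 \
    --hostnqn="nqn.2014-08.org.nvmexpress:uuid:f820f793-c892-4aa4-a8a4-5ed3fda41d6c" \
    --hostid="f820f793-c892-4aa4-a8a4-5ed3fda41d6c"
  # Wait (up to ~30 s) for the block device with serial SPDK$i to show up.
  tries=0
  until [ "$(lsblk -l -o NAME,SERIAL | grep -c "SPDK$i")" -ge 1 ]; do
    tries=$((tries + 1)); [ "$tries" -gt 15 ] && break
    sleep 2
  done
done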
00:17:46.134 11:48:16 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@30 -- # waitforserial SPDK8 00:17:46.134 11:48:16 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1202 -- # local i=0 00:17:46.134 11:48:16 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0 00:17:46.134 11:48:16 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1204 -- # [[ -n '' ]] 00:17:46.134 11:48:16 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1209 -- # sleep 2 00:17:48.664 11:48:18 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 00:17:48.664 11:48:18 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL 00:17:48.664 11:48:18 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1211 -- # grep -c SPDK8 00:17:48.664 11:48:18 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1211 -- # nvme_devices=1 00:17:48.664 11:48:18 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter )) 00:17:48.664 11:48:18 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1212 -- # return 0 00:17:48.664 11:48:18 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@28 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:17:48.664 11:48:18 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@29 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:f820f793-c892-4aa4-a8a4-5ed3fda41d6c --hostid=f820f793-c892-4aa4-a8a4-5ed3fda41d6c -t tcp -n nqn.2016-06.io.spdk:cnode9 -a 10.0.0.3 -s 4420 00:17:48.664 11:48:18 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@30 -- # waitforserial SPDK9 00:17:48.664 11:48:18 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1202 -- # local i=0 00:17:48.664 11:48:18 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0 00:17:48.664 11:48:18 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1204 -- # [[ -n '' ]] 00:17:48.664 11:48:18 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1209 -- # sleep 2 00:17:50.567 11:48:20 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 00:17:50.567 11:48:20 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL 00:17:50.567 11:48:20 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1211 -- # grep -c SPDK9 00:17:50.567 11:48:20 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1211 -- # nvme_devices=1 00:17:50.567 11:48:20 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter )) 00:17:50.567 11:48:20 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1212 -- # return 0 00:17:50.567 11:48:20 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@28 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:17:50.567 11:48:20 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@29 -- # nvme connect 
--hostnqn=nqn.2014-08.org.nvmexpress:uuid:f820f793-c892-4aa4-a8a4-5ed3fda41d6c --hostid=f820f793-c892-4aa4-a8a4-5ed3fda41d6c -t tcp -n nqn.2016-06.io.spdk:cnode10 -a 10.0.0.3 -s 4420 00:17:50.567 11:48:20 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@30 -- # waitforserial SPDK10 00:17:50.567 11:48:20 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1202 -- # local i=0 00:17:50.568 11:48:20 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0 00:17:50.568 11:48:20 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1204 -- # [[ -n '' ]] 00:17:50.568 11:48:20 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1209 -- # sleep 2 00:17:52.470 11:48:22 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 00:17:52.470 11:48:22 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL 00:17:52.470 11:48:22 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1211 -- # grep -c SPDK10 00:17:52.470 11:48:22 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1211 -- # nvme_devices=1 00:17:52.470 11:48:22 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter )) 00:17:52.470 11:48:22 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1212 -- # return 0 00:17:52.470 11:48:22 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@28 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:17:52.470 11:48:22 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@29 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:f820f793-c892-4aa4-a8a4-5ed3fda41d6c --hostid=f820f793-c892-4aa4-a8a4-5ed3fda41d6c -t tcp -n nqn.2016-06.io.spdk:cnode11 -a 10.0.0.3 -s 4420 00:17:52.733 11:48:22 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@30 -- # waitforserial SPDK11 00:17:52.733 11:48:22 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1202 -- # local i=0 00:17:52.733 11:48:22 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0 00:17:52.733 11:48:22 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1204 -- # [[ -n '' ]] 00:17:52.733 11:48:22 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1209 -- # sleep 2 00:17:54.634 11:48:24 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 00:17:54.634 11:48:24 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL 00:17:54.634 11:48:24 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1211 -- # grep -c SPDK11 00:17:54.634 11:48:24 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1211 -- # nvme_devices=1 00:17:54.634 11:48:24 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter )) 00:17:54.634 11:48:24 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1212 -- # return 0 00:17:54.634 11:48:24 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@33 -- # 
/home/vagrant/spdk_repo/spdk/scripts/fio-wrapper -p nvmf -i 262144 -d 64 -t read -r 10 00:17:54.892 [global] 00:17:54.892 thread=1 00:17:54.892 invalidate=1 00:17:54.892 rw=read 00:17:54.892 time_based=1 00:17:54.892 runtime=10 00:17:54.892 ioengine=libaio 00:17:54.892 direct=1 00:17:54.892 bs=262144 00:17:54.892 iodepth=64 00:17:54.892 norandommap=1 00:17:54.892 numjobs=1 00:17:54.892 00:17:54.892 [job0] 00:17:54.892 filename=/dev/nvme0n1 00:17:54.892 [job1] 00:17:54.892 filename=/dev/nvme10n1 00:17:54.892 [job2] 00:17:54.892 filename=/dev/nvme1n1 00:17:54.892 [job3] 00:17:54.892 filename=/dev/nvme2n1 00:17:54.892 [job4] 00:17:54.892 filename=/dev/nvme3n1 00:17:54.892 [job5] 00:17:54.892 filename=/dev/nvme4n1 00:17:54.892 [job6] 00:17:54.892 filename=/dev/nvme5n1 00:17:54.892 [job7] 00:17:54.892 filename=/dev/nvme6n1 00:17:54.892 [job8] 00:17:54.892 filename=/dev/nvme7n1 00:17:54.892 [job9] 00:17:54.892 filename=/dev/nvme8n1 00:17:54.892 [job10] 00:17:54.892 filename=/dev/nvme9n1 00:17:54.892 Could not set queue depth (nvme0n1) 00:17:54.892 Could not set queue depth (nvme10n1) 00:17:54.892 Could not set queue depth (nvme1n1) 00:17:54.892 Could not set queue depth (nvme2n1) 00:17:54.892 Could not set queue depth (nvme3n1) 00:17:54.892 Could not set queue depth (nvme4n1) 00:17:54.892 Could not set queue depth (nvme5n1) 00:17:54.892 Could not set queue depth (nvme6n1) 00:17:54.892 Could not set queue depth (nvme7n1) 00:17:54.892 Could not set queue depth (nvme8n1) 00:17:54.892 Could not set queue depth (nvme9n1) 00:17:55.150 job0: (g=0): rw=read, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:17:55.150 job1: (g=0): rw=read, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:17:55.150 job2: (g=0): rw=read, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:17:55.150 job3: (g=0): rw=read, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:17:55.150 job4: (g=0): rw=read, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:17:55.150 job5: (g=0): rw=read, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:17:55.150 job6: (g=0): rw=read, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:17:55.150 job7: (g=0): rw=read, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:17:55.150 job8: (g=0): rw=read, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:17:55.150 job9: (g=0): rw=read, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:17:55.150 job10: (g=0): rw=read, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:17:55.150 fio-3.35 00:17:55.150 Starting 11 threads 00:18:07.425 00:18:07.425 job0: (groupid=0, jobs=1): err= 0: pid=88626: Thu Nov 28 11:48:35 2024 00:18:07.425 read: IOPS=305, BW=76.3MiB/s (80.0MB/s)(769MiB/10085msec) 00:18:07.425 slat (usec): min=21, max=177942, avg=3207.93, stdev=8668.21 00:18:07.425 clat (msec): min=39, max=394, avg=206.03, stdev=48.07 00:18:07.425 lat (msec): min=40, max=404, avg=209.24, stdev=48.38 00:18:07.425 clat percentiles (msec): 00:18:07.425 | 1.00th=[ 104], 5.00th=[ 155], 10.00th=[ 169], 20.00th=[ 178], 00:18:07.425 | 30.00th=[ 182], 40.00th=[ 186], 50.00th=[ 192], 60.00th=[ 199], 00:18:07.425 | 70.00th=[ 213], 80.00th=[ 
236], 90.00th=[ 279], 95.00th=[ 317], 00:18:07.425 | 99.00th=[ 355], 99.50th=[ 359], 99.90th=[ 368], 99.95th=[ 380], 00:18:07.425 | 99.99th=[ 397] 00:18:07.425 bw ( KiB/s): min=43520, max=92160, per=9.56%, avg=77143.25, stdev=15239.59, samples=20 00:18:07.425 iops : min= 170, max= 360, avg=301.30, stdev=59.54, samples=20 00:18:07.425 lat (msec) : 50=0.06%, 100=0.45%, 250=83.69%, 500=15.79% 00:18:07.425 cpu : usr=0.21%, sys=1.37%, ctx=638, majf=0, minf=4097 00:18:07.425 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.3%, 16=0.5%, 32=1.0%, >=64=98.0% 00:18:07.425 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:18:07.425 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:18:07.425 issued rwts: total=3077,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:18:07.425 latency : target=0, window=0, percentile=100.00%, depth=64 00:18:07.425 job1: (groupid=0, jobs=1): err= 0: pid=88627: Thu Nov 28 11:48:35 2024 00:18:07.425 read: IOPS=229, BW=57.5MiB/s (60.3MB/s)(584MiB/10158msec) 00:18:07.425 slat (usec): min=21, max=198672, avg=4228.98, stdev=12861.20 00:18:07.425 clat (msec): min=19, max=711, avg=273.56, stdev=167.02 00:18:07.425 lat (msec): min=19, max=711, avg=277.79, stdev=169.08 00:18:07.425 clat percentiles (msec): 00:18:07.425 | 1.00th=[ 24], 5.00th=[ 87], 10.00th=[ 148], 20.00th=[ 178], 00:18:07.425 | 30.00th=[ 186], 40.00th=[ 192], 50.00th=[ 197], 60.00th=[ 205], 00:18:07.425 | 70.00th=[ 218], 80.00th=[ 456], 90.00th=[ 575], 95.00th=[ 617], 00:18:07.425 | 99.00th=[ 659], 99.50th=[ 659], 99.90th=[ 667], 99.95th=[ 667], 00:18:07.425 | 99.99th=[ 709] 00:18:07.425 bw ( KiB/s): min=23552, max=138752, per=7.20%, avg=58129.35, stdev=32262.30, samples=20 00:18:07.425 iops : min= 92, max= 542, avg=227.05, stdev=126.01, samples=20 00:18:07.425 lat (msec) : 20=0.21%, 50=3.55%, 100=3.55%, 250=63.98%, 500=12.25% 00:18:07.425 lat (msec) : 750=16.45% 00:18:07.425 cpu : usr=0.13%, sys=1.06%, ctx=502, majf=0, minf=4097 00:18:07.425 IO depths : 1=0.1%, 2=0.1%, 4=0.2%, 8=0.3%, 16=0.7%, 32=1.4%, >=64=97.3% 00:18:07.425 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:18:07.425 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:18:07.425 issued rwts: total=2335,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:18:07.425 latency : target=0, window=0, percentile=100.00%, depth=64 00:18:07.425 job2: (groupid=0, jobs=1): err= 0: pid=88628: Thu Nov 28 11:48:35 2024 00:18:07.425 read: IOPS=151, BW=37.9MiB/s (39.8MB/s)(385MiB/10155msec) 00:18:07.425 slat (usec): min=23, max=249197, avg=6489.64, stdev=18574.11 00:18:07.425 clat (msec): min=146, max=659, avg=414.62, stdev=94.14 00:18:07.425 lat (msec): min=161, max=695, avg=421.10, stdev=94.74 00:18:07.425 clat percentiles (msec): 00:18:07.425 | 1.00th=[ 167], 5.00th=[ 207], 10.00th=[ 271], 20.00th=[ 334], 00:18:07.425 | 30.00th=[ 384], 40.00th=[ 418], 50.00th=[ 435], 60.00th=[ 451], 00:18:07.425 | 70.00th=[ 468], 80.00th=[ 493], 90.00th=[ 523], 95.00th=[ 542], 00:18:07.425 | 99.00th=[ 567], 99.50th=[ 592], 99.90th=[ 659], 99.95th=[ 659], 00:18:07.425 | 99.99th=[ 659] 00:18:07.425 bw ( KiB/s): min=29696, max=50176, per=4.69%, avg=37835.05, stdev=6297.39, samples=20 00:18:07.425 iops : min= 116, max= 196, avg=147.75, stdev=24.55, samples=20 00:18:07.425 lat (msec) : 250=5.84%, 500=76.57%, 750=17.59% 00:18:07.425 cpu : usr=0.08%, sys=0.78%, ctx=285, majf=0, minf=4097 00:18:07.425 IO depths : 1=0.1%, 2=0.1%, 4=0.3%, 8=0.5%, 16=1.0%, 32=2.1%, >=64=95.9% 00:18:07.425 submit : 0=0.0%, 4=100.0%, 
8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:18:07.425 complete : 0=0.0%, 4=99.9%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:18:07.425 issued rwts: total=1541,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:18:07.425 latency : target=0, window=0, percentile=100.00%, depth=64 00:18:07.425 job3: (groupid=0, jobs=1): err= 0: pid=88629: Thu Nov 28 11:48:35 2024 00:18:07.425 read: IOPS=157, BW=39.4MiB/s (41.3MB/s)(400MiB/10156msec) 00:18:07.425 slat (usec): min=18, max=126904, avg=6244.43, stdev=16450.60 00:18:07.425 clat (msec): min=27, max=677, avg=399.13, stdev=97.39 00:18:07.425 lat (msec): min=28, max=677, avg=405.38, stdev=98.29 00:18:07.425 clat percentiles (msec): 00:18:07.425 | 1.00th=[ 92], 5.00th=[ 249], 10.00th=[ 279], 20.00th=[ 313], 00:18:07.425 | 30.00th=[ 359], 40.00th=[ 393], 50.00th=[ 422], 60.00th=[ 443], 00:18:07.425 | 70.00th=[ 464], 80.00th=[ 485], 90.00th=[ 502], 95.00th=[ 514], 00:18:07.425 | 99.00th=[ 542], 99.50th=[ 600], 99.90th=[ 625], 99.95th=[ 676], 00:18:07.425 | 99.99th=[ 676] 00:18:07.425 bw ( KiB/s): min=31232, max=62976, per=4.87%, avg=39344.20, stdev=8060.95, samples=20 00:18:07.425 iops : min= 122, max= 246, avg=153.60, stdev=31.52, samples=20 00:18:07.425 lat (msec) : 50=0.19%, 100=0.88%, 250=4.19%, 500=83.44%, 750=11.31% 00:18:07.425 cpu : usr=0.08%, sys=0.82%, ctx=318, majf=0, minf=4097 00:18:07.425 IO depths : 1=0.1%, 2=0.1%, 4=0.2%, 8=0.5%, 16=1.0%, 32=2.0%, >=64=96.1% 00:18:07.425 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:18:07.425 complete : 0=0.0%, 4=99.9%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:18:07.425 issued rwts: total=1600,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:18:07.425 latency : target=0, window=0, percentile=100.00%, depth=64 00:18:07.425 job4: (groupid=0, jobs=1): err= 0: pid=88630: Thu Nov 28 11:48:35 2024 00:18:07.425 read: IOPS=522, BW=131MiB/s (137MB/s)(1311MiB/10038msec) 00:18:07.425 slat (usec): min=23, max=158640, avg=1902.37, stdev=5643.59 00:18:07.425 clat (msec): min=25, max=311, avg=120.42, stdev=51.56 00:18:07.425 lat (msec): min=25, max=357, avg=122.32, stdev=52.16 00:18:07.425 clat percentiles (msec): 00:18:07.425 | 1.00th=[ 78], 5.00th=[ 85], 10.00th=[ 88], 20.00th=[ 92], 00:18:07.425 | 30.00th=[ 94], 40.00th=[ 96], 50.00th=[ 100], 60.00th=[ 102], 00:18:07.425 | 70.00th=[ 106], 80.00th=[ 130], 90.00th=[ 226], 95.00th=[ 247], 00:18:07.425 | 99.00th=[ 275], 99.50th=[ 288], 99.90th=[ 313], 99.95th=[ 313], 00:18:07.425 | 99.99th=[ 313] 00:18:07.425 bw ( KiB/s): min=43520, max=173568, per=16.42%, avg=132575.55, stdev=46321.54, samples=20 00:18:07.425 iops : min= 170, max= 678, avg=517.80, stdev=180.89, samples=20 00:18:07.425 lat (msec) : 50=0.23%, 100=54.30%, 250=41.60%, 500=3.87% 00:18:07.425 cpu : usr=0.33%, sys=2.19%, ctx=1089, majf=0, minf=4097 00:18:07.425 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.3%, 32=0.6%, >=64=98.8% 00:18:07.425 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:18:07.425 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:18:07.425 issued rwts: total=5243,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:18:07.425 latency : target=0, window=0, percentile=100.00%, depth=64 00:18:07.425 job5: (groupid=0, jobs=1): err= 0: pid=88631: Thu Nov 28 11:48:35 2024 00:18:07.425 read: IOPS=515, BW=129MiB/s (135MB/s)(1293MiB/10032msec) 00:18:07.425 slat (usec): min=20, max=276292, avg=1929.24, stdev=6701.92 00:18:07.425 clat (msec): min=28, max=424, avg=122.01, stdev=59.22 00:18:07.425 lat (msec): min=36, max=424, avg=123.94, 
stdev=59.81 00:18:07.425 clat percentiles (msec): 00:18:07.425 | 1.00th=[ 77], 5.00th=[ 86], 10.00th=[ 88], 20.00th=[ 92], 00:18:07.425 | 30.00th=[ 94], 40.00th=[ 96], 50.00th=[ 100], 60.00th=[ 102], 00:18:07.425 | 70.00th=[ 105], 80.00th=[ 113], 90.00th=[ 224], 95.00th=[ 253], 00:18:07.425 | 99.00th=[ 359], 99.50th=[ 405], 99.90th=[ 426], 99.95th=[ 426], 00:18:07.425 | 99.99th=[ 426] 00:18:07.425 bw ( KiB/s): min=42922, max=172544, per=16.20%, avg=130803.65, stdev=49077.26, samples=20 00:18:07.425 iops : min= 167, max= 674, avg=510.75, stdev=191.73, samples=20 00:18:07.425 lat (msec) : 50=0.21%, 100=54.29%, 250=39.98%, 500=5.51% 00:18:07.425 cpu : usr=0.34%, sys=2.31%, ctx=1046, majf=0, minf=4097 00:18:07.425 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.3%, 32=0.6%, >=64=98.8% 00:18:07.425 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:18:07.425 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:18:07.425 issued rwts: total=5170,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:18:07.425 latency : target=0, window=0, percentile=100.00%, depth=64 00:18:07.425 job6: (groupid=0, jobs=1): err= 0: pid=88633: Thu Nov 28 11:48:35 2024 00:18:07.425 read: IOPS=148, BW=37.2MiB/s (39.1MB/s)(378MiB/10148msec) 00:18:07.425 slat (usec): min=23, max=215143, avg=6612.04, stdev=19433.98 00:18:07.426 clat (msec): min=142, max=676, avg=422.12, stdev=79.32 00:18:07.426 lat (msec): min=151, max=676, avg=428.73, stdev=79.38 00:18:07.426 clat percentiles (msec): 00:18:07.426 | 1.00th=[ 220], 5.00th=[ 288], 10.00th=[ 313], 20.00th=[ 351], 00:18:07.426 | 30.00th=[ 376], 40.00th=[ 414], 50.00th=[ 435], 60.00th=[ 451], 00:18:07.426 | 70.00th=[ 472], 80.00th=[ 493], 90.00th=[ 514], 95.00th=[ 527], 00:18:07.426 | 99.00th=[ 592], 99.50th=[ 625], 99.90th=[ 676], 99.95th=[ 676], 00:18:07.426 | 99.99th=[ 676] 00:18:07.426 bw ( KiB/s): min=29125, max=47104, per=4.59%, avg=37093.40, stdev=5156.25, samples=20 00:18:07.426 iops : min= 113, max= 184, avg=144.70, stdev=20.22, samples=20 00:18:07.426 lat (msec) : 250=1.52%, 500=81.75%, 750=16.73% 00:18:07.426 cpu : usr=0.05%, sys=0.74%, ctx=273, majf=0, minf=4097 00:18:07.426 IO depths : 1=0.1%, 2=0.1%, 4=0.3%, 8=0.5%, 16=1.1%, 32=2.1%, >=64=95.8% 00:18:07.426 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:18:07.426 complete : 0=0.0%, 4=99.9%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:18:07.426 issued rwts: total=1512,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:18:07.426 latency : target=0, window=0, percentile=100.00%, depth=64 00:18:07.426 job7: (groupid=0, jobs=1): err= 0: pid=88634: Thu Nov 28 11:48:35 2024 00:18:07.426 read: IOPS=155, BW=38.9MiB/s (40.8MB/s)(395MiB/10152msec) 00:18:07.426 slat (usec): min=22, max=219933, avg=6326.65, stdev=17044.88 00:18:07.426 clat (msec): min=30, max=653, avg=403.70, stdev=98.26 00:18:07.426 lat (msec): min=31, max=653, avg=410.03, stdev=99.21 00:18:07.426 clat percentiles (msec): 00:18:07.426 | 1.00th=[ 75], 5.00th=[ 266], 10.00th=[ 284], 20.00th=[ 313], 00:18:07.426 | 30.00th=[ 351], 40.00th=[ 380], 50.00th=[ 401], 60.00th=[ 426], 00:18:07.426 | 70.00th=[ 460], 80.00th=[ 502], 90.00th=[ 542], 95.00th=[ 558], 00:18:07.426 | 99.00th=[ 600], 99.50th=[ 634], 99.90th=[ 651], 99.95th=[ 651], 00:18:07.426 | 99.99th=[ 651] 00:18:07.426 bw ( KiB/s): min=27136, max=50176, per=4.81%, avg=38858.30, stdev=7754.91, samples=20 00:18:07.426 iops : min= 106, max= 196, avg=151.70, stdev=30.35, samples=20 00:18:07.426 lat (msec) : 50=0.06%, 100=1.01%, 250=1.08%, 500=77.61%, 
750=20.24% 00:18:07.426 cpu : usr=0.09%, sys=0.76%, ctx=312, majf=0, minf=4097 00:18:07.426 IO depths : 1=0.1%, 2=0.1%, 4=0.3%, 8=0.5%, 16=1.0%, 32=2.0%, >=64=96.0% 00:18:07.426 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:18:07.426 complete : 0=0.0%, 4=99.9%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:18:07.426 issued rwts: total=1581,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:18:07.426 latency : target=0, window=0, percentile=100.00%, depth=64 00:18:07.426 job8: (groupid=0, jobs=1): err= 0: pid=88638: Thu Nov 28 11:48:35 2024 00:18:07.426 read: IOPS=329, BW=82.3MiB/s (86.3MB/s)(829MiB/10072msec) 00:18:07.426 slat (usec): min=20, max=46797, avg=2931.23, stdev=6640.54 00:18:07.426 clat (msec): min=16, max=380, avg=191.24, stdev=44.26 00:18:07.426 lat (msec): min=16, max=390, avg=194.17, stdev=44.96 00:18:07.426 clat percentiles (msec): 00:18:07.426 | 1.00th=[ 49], 5.00th=[ 144], 10.00th=[ 163], 20.00th=[ 174], 00:18:07.426 | 30.00th=[ 180], 40.00th=[ 184], 50.00th=[ 188], 60.00th=[ 192], 00:18:07.426 | 70.00th=[ 199], 80.00th=[ 203], 90.00th=[ 213], 95.00th=[ 305], 00:18:07.426 | 99.00th=[ 347], 99.50th=[ 351], 99.90th=[ 355], 99.95th=[ 372], 00:18:07.426 | 99.99th=[ 380] 00:18:07.426 bw ( KiB/s): min=48128, max=106496, per=10.31%, avg=83225.40, stdev=11763.10, samples=20 00:18:07.426 iops : min= 188, max= 416, avg=325.00, stdev=45.93, samples=20 00:18:07.426 lat (msec) : 20=0.57%, 50=0.57%, 100=1.69%, 250=90.92%, 500=6.24% 00:18:07.426 cpu : usr=0.21%, sys=1.46%, ctx=775, majf=0, minf=4097 00:18:07.426 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.5%, 32=1.0%, >=64=98.1% 00:18:07.426 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:18:07.426 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:18:07.426 issued rwts: total=3316,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:18:07.426 latency : target=0, window=0, percentile=100.00%, depth=64 00:18:07.426 job9: (groupid=0, jobs=1): err= 0: pid=88639: Thu Nov 28 11:48:35 2024 00:18:07.426 read: IOPS=335, BW=84.0MiB/s (88.0MB/s)(847MiB/10083msec) 00:18:07.426 slat (usec): min=23, max=48913, avg=2872.09, stdev=6484.54 00:18:07.426 clat (msec): min=13, max=295, avg=187.43, stdev=21.83 00:18:07.426 lat (msec): min=13, max=300, avg=190.31, stdev=22.15 00:18:07.426 clat percentiles (msec): 00:18:07.426 | 1.00th=[ 92], 5.00th=[ 159], 10.00th=[ 169], 20.00th=[ 178], 00:18:07.426 | 30.00th=[ 182], 40.00th=[ 186], 50.00th=[ 190], 60.00th=[ 194], 00:18:07.426 | 70.00th=[ 197], 80.00th=[ 203], 90.00th=[ 207], 95.00th=[ 213], 00:18:07.426 | 99.00th=[ 228], 99.50th=[ 279], 99.90th=[ 279], 99.95th=[ 284], 00:18:07.426 | 99.99th=[ 296] 00:18:07.426 bw ( KiB/s): min=80384, max=92672, per=10.53%, avg=85009.70, stdev=2903.16, samples=20 00:18:07.426 iops : min= 314, max= 362, avg=332.00, stdev=11.38, samples=20 00:18:07.426 lat (msec) : 20=0.03%, 100=1.15%, 250=98.11%, 500=0.71% 00:18:07.426 cpu : usr=0.17%, sys=1.49%, ctx=758, majf=0, minf=4097 00:18:07.426 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.5%, 32=0.9%, >=64=98.1% 00:18:07.426 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:18:07.426 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:18:07.426 issued rwts: total=3386,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:18:07.426 latency : target=0, window=0, percentile=100.00%, depth=64 00:18:07.426 job10: (groupid=0, jobs=1): err= 0: pid=88640: Thu Nov 28 11:48:35 2024 00:18:07.426 read: IOPS=324, BW=81.1MiB/s 
(85.0MB/s)(818MiB/10086msec) 00:18:07.426 slat (usec): min=18, max=49380, avg=3002.00, stdev=6584.88 00:18:07.426 clat (msec): min=21, max=376, avg=193.84, stdev=42.58 00:18:07.426 lat (msec): min=21, max=376, avg=196.84, stdev=43.27 00:18:07.426 clat percentiles (msec): 00:18:07.426 | 1.00th=[ 83], 5.00th=[ 161], 10.00th=[ 169], 20.00th=[ 176], 00:18:07.426 | 30.00th=[ 180], 40.00th=[ 184], 50.00th=[ 188], 60.00th=[ 192], 00:18:07.426 | 70.00th=[ 197], 80.00th=[ 201], 90.00th=[ 218], 95.00th=[ 309], 00:18:07.426 | 99.00th=[ 355], 99.50th=[ 359], 99.90th=[ 368], 99.95th=[ 372], 00:18:07.426 | 99.99th=[ 376] 00:18:07.426 bw ( KiB/s): min=48128, max=91648, per=10.17%, avg=82135.50, stdev=11491.83, samples=20 00:18:07.426 iops : min= 188, max= 358, avg=320.80, stdev=44.93, samples=20 00:18:07.426 lat (msec) : 50=0.43%, 100=1.47%, 250=91.26%, 500=6.85% 00:18:07.426 cpu : usr=0.16%, sys=1.62%, ctx=775, majf=0, minf=4097 00:18:07.426 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.5%, 32=1.0%, >=64=98.1% 00:18:07.426 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:18:07.426 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:18:07.426 issued rwts: total=3272,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:18:07.426 latency : target=0, window=0, percentile=100.00%, depth=64 00:18:07.426 00:18:07.426 Run status group 0 (all jobs): 00:18:07.426 READ: bw=788MiB/s (827MB/s), 37.2MiB/s-131MiB/s (39.1MB/s-137MB/s), io=8008MiB (8397MB), run=10032-10158msec 00:18:07.426 00:18:07.426 Disk stats (read/write): 00:18:07.426 nvme0n1: ios=6039/0, merge=0/0, ticks=1232748/0, in_queue=1232748, util=97.64% 00:18:07.426 nvme10n1: ios=4543/0, merge=0/0, ticks=1211333/0, in_queue=1211333, util=97.85% 00:18:07.426 nvme1n1: ios=2963/0, merge=0/0, ticks=1220385/0, in_queue=1220385, util=98.08% 00:18:07.426 nvme2n1: ios=3089/0, merge=0/0, ticks=1220662/0, in_queue=1220662, util=98.16% 00:18:07.426 nvme3n1: ios=10383/0, merge=0/0, ticks=1237647/0, in_queue=1237647, util=98.31% 00:18:07.426 nvme4n1: ios=10219/0, merge=0/0, ticks=1235835/0, in_queue=1235835, util=98.31% 00:18:07.426 nvme5n1: ios=2906/0, merge=0/0, ticks=1213442/0, in_queue=1213442, util=98.50% 00:18:07.426 nvme6n1: ios=3049/0, merge=0/0, ticks=1215866/0, in_queue=1215866, util=98.61% 00:18:07.426 nvme7n1: ios=6501/0, merge=0/0, ticks=1229736/0, in_queue=1229736, util=98.71% 00:18:07.426 nvme8n1: ios=6634/0, merge=0/0, ticks=1233111/0, in_queue=1233111, util=98.83% 00:18:07.426 nvme9n1: ios=6425/0, merge=0/0, ticks=1232137/0, in_queue=1232137, util=99.03% 00:18:07.426 11:48:35 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/fio-wrapper -p nvmf -i 262144 -d 64 -t randwrite -r 10 00:18:07.426 [global] 00:18:07.426 thread=1 00:18:07.426 invalidate=1 00:18:07.426 rw=randwrite 00:18:07.426 time_based=1 00:18:07.426 runtime=10 00:18:07.426 ioengine=libaio 00:18:07.426 direct=1 00:18:07.426 bs=262144 00:18:07.426 iodepth=64 00:18:07.426 norandommap=1 00:18:07.426 numjobs=1 00:18:07.426 00:18:07.426 [job0] 00:18:07.426 filename=/dev/nvme0n1 00:18:07.426 [job1] 00:18:07.426 filename=/dev/nvme10n1 00:18:07.426 [job2] 00:18:07.426 filename=/dev/nvme1n1 00:18:07.426 [job3] 00:18:07.426 filename=/dev/nvme2n1 00:18:07.426 [job4] 00:18:07.426 filename=/dev/nvme3n1 00:18:07.426 [job5] 00:18:07.426 filename=/dev/nvme4n1 00:18:07.426 [job6] 00:18:07.426 filename=/dev/nvme5n1 00:18:07.426 [job7] 00:18:07.426 filename=/dev/nvme6n1 00:18:07.426 [job8] 
00:18:07.426 filename=/dev/nvme7n1 00:18:07.426 [job9] 00:18:07.426 filename=/dev/nvme8n1 00:18:07.426 [job10] 00:18:07.426 filename=/dev/nvme9n1 00:18:07.426 Could not set queue depth (nvme0n1) 00:18:07.426 Could not set queue depth (nvme10n1) 00:18:07.426 Could not set queue depth (nvme1n1) 00:18:07.426 Could not set queue depth (nvme2n1) 00:18:07.426 Could not set queue depth (nvme3n1) 00:18:07.426 Could not set queue depth (nvme4n1) 00:18:07.426 Could not set queue depth (nvme5n1) 00:18:07.426 Could not set queue depth (nvme6n1) 00:18:07.426 Could not set queue depth (nvme7n1) 00:18:07.426 Could not set queue depth (nvme8n1) 00:18:07.426 Could not set queue depth (nvme9n1) 00:18:07.426 job0: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:18:07.426 job1: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:18:07.426 job2: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:18:07.426 job3: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:18:07.426 job4: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:18:07.426 job5: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:18:07.426 job6: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:18:07.427 job7: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:18:07.427 job8: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:18:07.427 job9: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:18:07.427 job10: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:18:07.427 fio-3.35 00:18:07.427 Starting 11 threads 00:18:17.407 00:18:17.407 job0: (groupid=0, jobs=1): err= 0: pid=88836: Thu Nov 28 11:48:46 2024 00:18:17.407 write: IOPS=215, BW=54.0MiB/s (56.6MB/s)(552MiB/10227msec); 0 zone resets 00:18:17.407 slat (usec): min=19, max=32896, avg=4406.95, stdev=7807.71 00:18:17.407 clat (msec): min=33, max=513, avg=291.87, stdev=29.30 00:18:17.407 lat (msec): min=33, max=513, avg=296.28, stdev=28.67 00:18:17.407 clat percentiles (msec): 00:18:17.407 | 1.00th=[ 188], 5.00th=[ 271], 10.00th=[ 275], 20.00th=[ 279], 00:18:17.407 | 30.00th=[ 288], 40.00th=[ 292], 50.00th=[ 292], 60.00th=[ 296], 00:18:17.407 | 70.00th=[ 296], 80.00th=[ 300], 90.00th=[ 305], 95.00th=[ 326], 00:18:17.407 | 99.00th=[ 405], 99.50th=[ 460], 99.90th=[ 498], 99.95th=[ 514], 00:18:17.407 | 99.99th=[ 514] 00:18:17.407 bw ( KiB/s): min=45146, max=57344, per=6.00%, avg=54910.75, stdev=2831.10, samples=20 00:18:17.407 iops : min= 176, max= 224, avg=214.45, stdev=11.10, samples=20 00:18:17.407 lat (msec) : 50=0.09%, 250=1.90%, 500=97.92%, 750=0.09% 00:18:17.407 cpu : usr=0.51%, sys=0.82%, ctx=2546, majf=0, minf=1 00:18:17.407 IO depths : 1=0.1%, 2=0.1%, 4=0.2%, 8=0.4%, 16=0.7%, 32=1.4%, >=64=97.1% 00:18:17.407 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:18:17.407 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:18:17.407 issued rwts: total=0,2208,0,0 short=0,0,0,0 
dropped=0,0,0,0 00:18:17.407 latency : target=0, window=0, percentile=100.00%, depth=64 00:18:17.407 job1: (groupid=0, jobs=1): err= 0: pid=88837: Thu Nov 28 11:48:46 2024 00:18:17.407 write: IOPS=310, BW=77.5MiB/s (81.3MB/s)(789MiB/10174msec); 0 zone resets 00:18:17.407 slat (usec): min=23, max=28738, avg=3165.76, stdev=5476.87 00:18:17.407 clat (msec): min=31, max=368, avg=203.06, stdev=21.75 00:18:17.407 lat (msec): min=31, max=368, avg=206.23, stdev=21.40 00:18:17.407 clat percentiles (msec): 00:18:17.407 | 1.00th=[ 89], 5.00th=[ 188], 10.00th=[ 192], 20.00th=[ 199], 00:18:17.407 | 30.00th=[ 201], 40.00th=[ 203], 50.00th=[ 205], 60.00th=[ 207], 00:18:17.407 | 70.00th=[ 209], 80.00th=[ 211], 90.00th=[ 213], 95.00th=[ 213], 00:18:17.407 | 99.00th=[ 266], 99.50th=[ 317], 99.90th=[ 355], 99.95th=[ 368], 00:18:17.407 | 99.99th=[ 368] 00:18:17.408 bw ( KiB/s): min=75776, max=81920, per=8.65%, avg=79180.80, stdev=1981.40, samples=20 00:18:17.408 iops : min= 296, max= 320, avg=309.30, stdev= 7.74, samples=20 00:18:17.408 lat (msec) : 50=0.38%, 100=0.76%, 250=97.66%, 500=1.20% 00:18:17.408 cpu : usr=0.84%, sys=0.93%, ctx=3553, majf=0, minf=1 00:18:17.408 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.3%, 16=0.5%, 32=1.0%, >=64=98.0% 00:18:17.408 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:18:17.408 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:18:17.408 issued rwts: total=0,3156,0,0 short=0,0,0,0 dropped=0,0,0,0 00:18:17.408 latency : target=0, window=0, percentile=100.00%, depth=64 00:18:17.408 job2: (groupid=0, jobs=1): err= 0: pid=88850: Thu Nov 28 11:48:46 2024 00:18:17.408 write: IOPS=467, BW=117MiB/s (123MB/s)(1182MiB/10113msec); 0 zone resets 00:18:17.408 slat (usec): min=13, max=79842, avg=2110.49, stdev=3843.61 00:18:17.408 clat (msec): min=81, max=264, avg=134.77, stdev=25.19 00:18:17.408 lat (msec): min=81, max=264, avg=136.88, stdev=25.28 00:18:17.408 clat percentiles (msec): 00:18:17.408 | 1.00th=[ 118], 5.00th=[ 121], 10.00th=[ 123], 20.00th=[ 125], 00:18:17.408 | 30.00th=[ 128], 40.00th=[ 129], 50.00th=[ 130], 60.00th=[ 131], 00:18:17.408 | 70.00th=[ 132], 80.00th=[ 133], 90.00th=[ 136], 95.00th=[ 213], 00:18:17.408 | 99.00th=[ 241], 99.50th=[ 251], 99.90th=[ 264], 99.95th=[ 264], 00:18:17.408 | 99.99th=[ 264] 00:18:17.408 bw ( KiB/s): min=61563, max=129024, per=13.04%, avg=119379.35, stdev=18572.23, samples=20 00:18:17.408 iops : min= 240, max= 504, avg=466.30, stdev=72.63, samples=20 00:18:17.408 lat (msec) : 100=0.17%, 250=99.32%, 500=0.51% 00:18:17.408 cpu : usr=0.77%, sys=1.45%, ctx=6586, majf=0, minf=1 00:18:17.408 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.3%, 32=0.7%, >=64=98.7% 00:18:17.408 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:18:17.408 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:18:17.408 issued rwts: total=0,4727,0,0 short=0,0,0,0 dropped=0,0,0,0 00:18:17.408 latency : target=0, window=0, percentile=100.00%, depth=64 00:18:17.408 job3: (groupid=0, jobs=1): err= 0: pid=88851: Thu Nov 28 11:48:46 2024 00:18:17.408 write: IOPS=306, BW=76.7MiB/s (80.4MB/s)(779MiB/10166msec); 0 zone resets 00:18:17.408 slat (usec): min=19, max=107239, avg=3202.52, stdev=5788.56 00:18:17.408 clat (msec): min=112, max=368, avg=205.42, stdev=16.13 00:18:17.408 lat (msec): min=113, max=368, avg=208.62, stdev=15.34 00:18:17.408 clat percentiles (msec): 00:18:17.408 | 1.00th=[ 184], 5.00th=[ 190], 10.00th=[ 194], 20.00th=[ 199], 00:18:17.408 | 30.00th=[ 201], 
40.00th=[ 203], 50.00th=[ 205], 60.00th=[ 207], 00:18:17.408 | 70.00th=[ 209], 80.00th=[ 211], 90.00th=[ 213], 95.00th=[ 215], 00:18:17.408 | 99.00th=[ 284], 99.50th=[ 317], 99.90th=[ 355], 99.95th=[ 368], 00:18:17.408 | 99.99th=[ 368] 00:18:17.408 bw ( KiB/s): min=63615, max=81920, per=8.54%, avg=78180.75, stdev=3788.11, samples=20 00:18:17.408 iops : min= 248, max= 320, avg=305.35, stdev=14.89, samples=20 00:18:17.408 lat (msec) : 250=98.27%, 500=1.73% 00:18:17.408 cpu : usr=0.62%, sys=0.95%, ctx=3828, majf=0, minf=1 00:18:17.408 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.3%, 16=0.5%, 32=1.0%, >=64=98.0% 00:18:17.408 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:18:17.408 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:18:17.408 issued rwts: total=0,3117,0,0 short=0,0,0,0 dropped=0,0,0,0 00:18:17.408 latency : target=0, window=0, percentile=100.00%, depth=64 00:18:17.408 job4: (groupid=0, jobs=1): err= 0: pid=88852: Thu Nov 28 11:48:46 2024 00:18:17.408 write: IOPS=404, BW=101MiB/s (106MB/s)(1023MiB/10124msec); 0 zone resets 00:18:17.408 slat (usec): min=19, max=57469, avg=2357.49, stdev=4354.31 00:18:17.408 clat (msec): min=39, max=363, avg=155.88, stdev=38.93 00:18:17.408 lat (msec): min=41, max=366, avg=158.23, stdev=39.31 00:18:17.408 clat percentiles (msec): 00:18:17.408 | 1.00th=[ 63], 5.00th=[ 138], 10.00th=[ 142], 20.00th=[ 144], 00:18:17.408 | 30.00th=[ 148], 40.00th=[ 150], 50.00th=[ 153], 60.00th=[ 153], 00:18:17.408 | 70.00th=[ 155], 80.00th=[ 157], 90.00th=[ 159], 95.00th=[ 230], 00:18:17.408 | 99.00th=[ 334], 99.50th=[ 342], 99.90th=[ 355], 99.95th=[ 359], 00:18:17.408 | 99.99th=[ 363] 00:18:17.408 bw ( KiB/s): min=53248, max=121344, per=11.27%, avg=103157.35, stdev=15920.07, samples=20 00:18:17.408 iops : min= 208, max= 474, avg=402.95, stdev=62.19, samples=20 00:18:17.408 lat (msec) : 50=0.39%, 100=2.42%, 250=92.67%, 500=4.52% 00:18:17.408 cpu : usr=0.74%, sys=1.37%, ctx=5196, majf=0, minf=1 00:18:17.408 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.4%, 32=0.8%, >=64=98.5% 00:18:17.408 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:18:17.408 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:18:17.408 issued rwts: total=0,4093,0,0 short=0,0,0,0 dropped=0,0,0,0 00:18:17.408 latency : target=0, window=0, percentile=100.00%, depth=64 00:18:17.408 job5: (groupid=0, jobs=1): err= 0: pid=88853: Thu Nov 28 11:48:46 2024 00:18:17.408 write: IOPS=227, BW=56.9MiB/s (59.6MB/s)(582MiB/10236msec); 0 zone resets 00:18:17.408 slat (usec): min=17, max=30048, avg=4206.84, stdev=7536.83 00:18:17.408 clat (msec): min=32, max=521, avg=276.93, stdev=44.10 00:18:17.408 lat (msec): min=32, max=521, avg=281.13, stdev=44.27 00:18:17.408 clat percentiles (msec): 00:18:17.408 | 1.00th=[ 89], 5.00th=[ 194], 10.00th=[ 224], 20.00th=[ 271], 00:18:17.408 | 30.00th=[ 279], 40.00th=[ 284], 50.00th=[ 288], 60.00th=[ 292], 00:18:17.408 | 70.00th=[ 296], 80.00th=[ 296], 90.00th=[ 300], 95.00th=[ 300], 00:18:17.408 | 99.00th=[ 414], 99.50th=[ 468], 99.90th=[ 506], 99.95th=[ 523], 00:18:17.408 | 99.99th=[ 523] 00:18:17.408 bw ( KiB/s): min=53248, max=73728, per=6.33%, avg=57998.30, stdev=5607.66, samples=20 00:18:17.408 iops : min= 208, max= 288, avg=226.50, stdev=21.92, samples=20 00:18:17.408 lat (msec) : 50=0.34%, 100=0.69%, 250=15.07%, 500=83.64%, 750=0.26% 00:18:17.408 cpu : usr=0.49%, sys=0.70%, ctx=2727, majf=0, minf=1 00:18:17.408 IO depths : 1=0.1%, 2=0.1%, 4=0.2%, 8=0.3%, 16=0.7%, 32=1.4%, 
>=64=97.3% 00:18:17.408 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:18:17.408 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:18:17.408 issued rwts: total=0,2329,0,0 short=0,0,0,0 dropped=0,0,0,0 00:18:17.408 latency : target=0, window=0, percentile=100.00%, depth=64 00:18:17.408 job6: (groupid=0, jobs=1): err= 0: pid=88854: Thu Nov 28 11:48:46 2024 00:18:17.408 write: IOPS=213, BW=53.3MiB/s (55.8MB/s)(545MiB/10225msec); 0 zone resets 00:18:17.408 slat (usec): min=18, max=179268, avg=4501.21, stdev=8709.27 00:18:17.408 clat (msec): min=181, max=514, avg=295.80, stdev=29.11 00:18:17.408 lat (msec): min=181, max=514, avg=300.31, stdev=28.24 00:18:17.408 clat percentiles (msec): 00:18:17.408 | 1.00th=[ 241], 5.00th=[ 271], 10.00th=[ 275], 20.00th=[ 279], 00:18:17.408 | 30.00th=[ 288], 40.00th=[ 292], 50.00th=[ 292], 60.00th=[ 296], 00:18:17.408 | 70.00th=[ 296], 80.00th=[ 300], 90.00th=[ 317], 95.00th=[ 347], 00:18:17.408 | 99.00th=[ 443], 99.50th=[ 460], 99.90th=[ 498], 99.95th=[ 514], 00:18:17.408 | 99.99th=[ 514] 00:18:17.408 bw ( KiB/s): min=37888, max=59392, per=5.91%, avg=54138.45, stdev=4801.29, samples=20 00:18:17.408 iops : min= 148, max= 232, avg=211.45, stdev=18.75, samples=20 00:18:17.408 lat (msec) : 250=1.06%, 500=98.85%, 750=0.09% 00:18:17.408 cpu : usr=0.49%, sys=0.65%, ctx=2520, majf=0, minf=1 00:18:17.408 IO depths : 1=0.1%, 2=0.1%, 4=0.2%, 8=0.4%, 16=0.7%, 32=1.5%, >=64=97.1% 00:18:17.408 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:18:17.408 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:18:17.408 issued rwts: total=0,2178,0,0 short=0,0,0,0 dropped=0,0,0,0 00:18:17.408 latency : target=0, window=0, percentile=100.00%, depth=64 00:18:17.408 job7: (groupid=0, jobs=1): err= 0: pid=88855: Thu Nov 28 11:48:46 2024 00:18:17.408 write: IOPS=217, BW=54.4MiB/s (57.1MB/s)(557MiB/10235msec); 0 zone resets 00:18:17.408 slat (usec): min=21, max=42244, avg=4488.13, stdev=7889.67 00:18:17.408 clat (msec): min=43, max=511, avg=289.35, stdev=35.62 00:18:17.408 lat (msec): min=43, max=511, avg=293.84, stdev=35.33 00:18:17.408 clat percentiles (msec): 00:18:17.408 | 1.00th=[ 111], 5.00th=[ 266], 10.00th=[ 271], 20.00th=[ 279], 00:18:17.408 | 30.00th=[ 284], 40.00th=[ 288], 50.00th=[ 292], 60.00th=[ 296], 00:18:17.408 | 70.00th=[ 296], 80.00th=[ 300], 90.00th=[ 305], 95.00th=[ 330], 00:18:17.408 | 99.00th=[ 401], 99.50th=[ 456], 99.90th=[ 493], 99.95th=[ 510], 00:18:17.408 | 99.99th=[ 510] 00:18:17.408 bw ( KiB/s): min=51200, max=57344, per=6.05%, avg=55412.70, stdev=1686.95, samples=20 00:18:17.408 iops : min= 200, max= 224, avg=216.40, stdev= 6.56, samples=20 00:18:17.408 lat (msec) : 50=0.18%, 100=0.72%, 250=2.78%, 500=96.23%, 750=0.09% 00:18:17.408 cpu : usr=0.57%, sys=0.61%, ctx=2588, majf=0, minf=1 00:18:17.408 IO depths : 1=0.1%, 2=0.1%, 4=0.2%, 8=0.4%, 16=0.7%, 32=1.4%, >=64=97.2% 00:18:17.408 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:18:17.408 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:18:17.408 issued rwts: total=0,2228,0,0 short=0,0,0,0 dropped=0,0,0,0 00:18:17.408 latency : target=0, window=0, percentile=100.00%, depth=64 00:18:17.408 job8: (groupid=0, jobs=1): err= 0: pid=88857: Thu Nov 28 11:48:46 2024 00:18:17.408 write: IOPS=469, BW=117MiB/s (123MB/s)(1188MiB/10114msec); 0 zone resets 00:18:17.408 slat (usec): min=18, max=23689, avg=2099.11, stdev=3666.17 00:18:17.408 clat (msec): min=26, 
max=238, avg=134.12, stdev=24.21 00:18:17.408 lat (msec): min=26, max=238, avg=136.22, stdev=24.30 00:18:17.408 clat percentiles (msec): 00:18:17.408 | 1.00th=[ 118], 5.00th=[ 121], 10.00th=[ 123], 20.00th=[ 125], 00:18:17.408 | 30.00th=[ 128], 40.00th=[ 129], 50.00th=[ 130], 60.00th=[ 131], 00:18:17.408 | 70.00th=[ 132], 80.00th=[ 133], 90.00th=[ 136], 95.00th=[ 205], 00:18:17.408 | 99.00th=[ 236], 99.50th=[ 239], 99.90th=[ 239], 99.95th=[ 239], 00:18:17.408 | 99.99th=[ 239] 00:18:17.408 bw ( KiB/s): min=69632, max=129024, per=13.10%, avg=119974.70, stdev=16675.86, samples=20 00:18:17.408 iops : min= 272, max= 504, avg=468.65, stdev=65.14, samples=20 00:18:17.408 lat (msec) : 50=0.25%, 100=0.34%, 250=99.41% 00:18:17.408 cpu : usr=0.93%, sys=1.59%, ctx=5905, majf=0, minf=1 00:18:17.408 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.3%, 32=0.7%, >=64=98.7% 00:18:17.408 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:18:17.408 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:18:17.408 issued rwts: total=0,4750,0,0 short=0,0,0,0 dropped=0,0,0,0 00:18:17.408 latency : target=0, window=0, percentile=100.00%, depth=64 00:18:17.409 job9: (groupid=0, jobs=1): err= 0: pid=88862: Thu Nov 28 11:48:46 2024 00:18:17.409 write: IOPS=462, BW=116MiB/s (121MB/s)(1170MiB/10124msec); 0 zone resets 00:18:17.409 slat (usec): min=23, max=14771, avg=2131.28, stdev=3749.03 00:18:17.409 clat (msec): min=17, max=270, avg=136.26, stdev=32.06 00:18:17.409 lat (msec): min=17, max=270, avg=138.39, stdev=32.36 00:18:17.409 clat percentiles (msec): 00:18:17.409 | 1.00th=[ 56], 5.00th=[ 70], 10.00th=[ 74], 20.00th=[ 138], 00:18:17.409 | 30.00th=[ 144], 40.00th=[ 146], 50.00th=[ 150], 60.00th=[ 153], 00:18:17.409 | 70.00th=[ 153], 80.00th=[ 155], 90.00th=[ 157], 95.00th=[ 159], 00:18:17.409 | 99.00th=[ 161], 99.50th=[ 213], 99.90th=[ 262], 99.95th=[ 262], 00:18:17.409 | 99.99th=[ 271] 00:18:17.409 bw ( KiB/s): min=102912, max=215983, per=12.91%, avg=118195.00, stdev=32753.46, samples=20 00:18:17.409 iops : min= 402, max= 843, avg=461.65, stdev=127.84, samples=20 00:18:17.409 lat (msec) : 20=0.09%, 50=0.79%, 100=17.07%, 250=81.84%, 500=0.21% 00:18:17.409 cpu : usr=1.28%, sys=1.53%, ctx=5679, majf=0, minf=2 00:18:17.409 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.3%, 32=0.7%, >=64=98.7% 00:18:17.409 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:18:17.409 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:18:17.409 issued rwts: total=0,4680,0,0 short=0,0,0,0 dropped=0,0,0,0 00:18:17.409 latency : target=0, window=0, percentile=100.00%, depth=64 00:18:17.409 job10: (groupid=0, jobs=1): err= 0: pid=88864: Thu Nov 28 11:48:46 2024 00:18:17.409 write: IOPS=309, BW=77.3MiB/s (81.1MB/s)(786MiB/10165msec); 0 zone resets 00:18:17.409 slat (usec): min=18, max=44829, avg=3142.10, stdev=5504.44 00:18:17.409 clat (msec): min=46, max=368, avg=203.76, stdev=19.51 00:18:17.409 lat (msec): min=46, max=368, avg=206.90, stdev=19.07 00:18:17.409 clat percentiles (msec): 00:18:17.409 | 1.00th=[ 121], 5.00th=[ 190], 10.00th=[ 192], 20.00th=[ 199], 00:18:17.409 | 30.00th=[ 201], 40.00th=[ 203], 50.00th=[ 205], 60.00th=[ 207], 00:18:17.409 | 70.00th=[ 209], 80.00th=[ 211], 90.00th=[ 213], 95.00th=[ 213], 00:18:17.409 | 99.00th=[ 268], 99.50th=[ 317], 99.90th=[ 355], 99.95th=[ 368], 00:18:17.409 | 99.99th=[ 368] 00:18:17.409 bw ( KiB/s): min=75776, max=83968, per=8.61%, avg=78847.55, stdev=1877.15, samples=20 00:18:17.409 iops : min= 
296, max= 328, avg=307.95, stdev= 7.37, samples=20 00:18:17.409 lat (msec) : 50=0.13%, 100=0.64%, 250=98.03%, 500=1.21% 00:18:17.409 cpu : usr=0.92%, sys=0.95%, ctx=3679, majf=0, minf=1 00:18:17.409 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.3%, 16=0.5%, 32=1.0%, >=64=98.0% 00:18:17.409 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:18:17.409 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:18:17.409 issued rwts: total=0,3143,0,0 short=0,0,0,0 dropped=0,0,0,0 00:18:17.409 latency : target=0, window=0, percentile=100.00%, depth=64 00:18:17.409 00:18:17.409 Run status group 0 (all jobs): 00:18:17.409 WRITE: bw=894MiB/s (938MB/s), 53.3MiB/s-117MiB/s (55.8MB/s-123MB/s), io=9152MiB (9597MB), run=10113-10236msec 00:18:17.409 00:18:17.409 Disk stats (read/write): 00:18:17.409 nvme0n1: ios=50/4400, merge=0/0, ticks=120/1235934, in_queue=1236054, util=97.86% 00:18:17.409 nvme10n1: ios=49/6165, merge=0/0, ticks=140/1207462, in_queue=1207602, util=98.10% 00:18:17.409 nvme1n1: ios=44/9286, merge=0/0, ticks=60/1210172, in_queue=1210232, util=97.99% 00:18:17.409 nvme2n1: ios=33/6088, merge=0/0, ticks=40/1206968, in_queue=1207008, util=98.00% 00:18:17.409 nvme3n1: ios=30/8036, merge=0/0, ticks=131/1212594, in_queue=1212725, util=98.33% 00:18:17.409 nvme4n1: ios=13/4648, merge=0/0, ticks=21/1237968, in_queue=1237989, util=98.28% 00:18:17.409 nvme5n1: ios=0/4342, merge=0/0, ticks=0/1235655, in_queue=1235655, util=98.26% 00:18:17.409 nvme6n1: ios=0/4436, merge=0/0, ticks=0/1235333, in_queue=1235333, util=98.34% 00:18:17.409 nvme7n1: ios=0/9339, merge=0/0, ticks=0/1211161, in_queue=1211161, util=98.63% 00:18:17.409 nvme8n1: ios=0/9213, merge=0/0, ticks=0/1210892, in_queue=1210892, util=98.86% 00:18:17.409 nvme9n1: ios=0/6141, merge=0/0, ticks=0/1207330, in_queue=1207330, util=98.83% 00:18:17.409 11:48:46 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@36 -- # sync 00:18:17.409 11:48:46 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@37 -- # seq 1 11 00:18:17.409 11:48:46 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@37 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:18:17.409 11:48:46 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:18:17.409 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:18:17.409 11:48:46 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@39 -- # waitforserial_disconnect SPDK1 00:18:17.409 11:48:46 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1223 -- # local i=0 00:18:17.409 11:48:46 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1224 -- # grep -q -w SPDK1 00:18:17.409 11:48:46 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1224 -- # lsblk -o NAME,SERIAL 00:18:17.409 11:48:46 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1231 -- # lsblk -l -o NAME,SERIAL 00:18:17.409 11:48:46 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1231 -- # grep -q -w SPDK1 00:18:17.409 11:48:46 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1235 -- # return 0 00:18:17.409 11:48:46 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:18:17.409 11:48:46 
nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:17.409 11:48:46 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:18:17.409 11:48:46 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:17.409 11:48:46 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@37 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:18:17.409 11:48:46 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode2 00:18:17.409 NQN:nqn.2016-06.io.spdk:cnode2 disconnected 1 controller(s) 00:18:17.409 11:48:46 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@39 -- # waitforserial_disconnect SPDK2 00:18:17.409 11:48:46 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1223 -- # local i=0 00:18:17.409 11:48:46 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1224 -- # lsblk -o NAME,SERIAL 00:18:17.409 11:48:46 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1224 -- # grep -q -w SPDK2 00:18:17.409 11:48:46 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1231 -- # lsblk -l -o NAME,SERIAL 00:18:17.409 11:48:46 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1231 -- # grep -q -w SPDK2 00:18:17.409 11:48:46 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1235 -- # return 0 00:18:17.409 11:48:46 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode2 00:18:17.409 11:48:46 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:17.409 11:48:46 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:18:17.409 11:48:46 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:17.409 11:48:46 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@37 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:18:17.409 11:48:46 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode3 00:18:17.409 NQN:nqn.2016-06.io.spdk:cnode3 disconnected 1 controller(s) 00:18:17.409 11:48:46 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@39 -- # waitforserial_disconnect SPDK3 00:18:17.409 11:48:46 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1223 -- # local i=0 00:18:17.409 11:48:46 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1224 -- # lsblk -o NAME,SERIAL 00:18:17.409 11:48:46 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1224 -- # grep -q -w SPDK3 00:18:17.409 11:48:46 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1231 -- # lsblk -l -o NAME,SERIAL 00:18:17.409 11:48:46 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1231 -- # grep -q -w SPDK3 00:18:17.409 11:48:46 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1235 -- # return 0 00:18:17.409 11:48:46 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode3 00:18:17.409 11:48:46 
nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:17.409 11:48:46 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:18:17.409 11:48:46 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:17.409 11:48:46 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@37 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:18:17.409 11:48:46 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode4 00:18:17.409 NQN:nqn.2016-06.io.spdk:cnode4 disconnected 1 controller(s) 00:18:17.409 11:48:46 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@39 -- # waitforserial_disconnect SPDK4 00:18:17.409 11:48:46 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1223 -- # local i=0 00:18:17.409 11:48:46 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1224 -- # lsblk -o NAME,SERIAL 00:18:17.409 11:48:46 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1224 -- # grep -q -w SPDK4 00:18:17.409 11:48:46 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1231 -- # lsblk -l -o NAME,SERIAL 00:18:17.409 11:48:46 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1231 -- # grep -q -w SPDK4 00:18:17.409 11:48:46 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1235 -- # return 0 00:18:17.409 11:48:46 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode4 00:18:17.409 11:48:46 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:17.409 11:48:46 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:18:17.409 11:48:46 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:17.409 11:48:46 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@37 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:18:17.409 11:48:46 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode5 00:18:17.409 NQN:nqn.2016-06.io.spdk:cnode5 disconnected 1 controller(s) 00:18:17.409 11:48:46 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@39 -- # waitforserial_disconnect SPDK5 00:18:17.409 11:48:46 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1223 -- # local i=0 00:18:17.409 11:48:46 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1224 -- # lsblk -o NAME,SERIAL 00:18:17.410 11:48:46 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1224 -- # grep -q -w SPDK5 00:18:17.410 11:48:46 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1231 -- # lsblk -l -o NAME,SERIAL 00:18:17.410 11:48:46 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1231 -- # grep -q -w SPDK5 00:18:17.410 11:48:46 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1235 -- # return 0 00:18:17.410 11:48:46 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode5 00:18:17.410 11:48:46 
nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:17.410 11:48:46 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:18:17.410 11:48:47 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:17.410 11:48:47 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@37 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:18:17.410 11:48:47 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode6 00:18:17.410 NQN:nqn.2016-06.io.spdk:cnode6 disconnected 1 controller(s) 00:18:17.410 11:48:47 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@39 -- # waitforserial_disconnect SPDK6 00:18:17.410 11:48:47 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1223 -- # local i=0 00:18:17.410 11:48:47 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1224 -- # lsblk -o NAME,SERIAL 00:18:17.410 11:48:47 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1224 -- # grep -q -w SPDK6 00:18:17.410 11:48:47 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1231 -- # lsblk -l -o NAME,SERIAL 00:18:17.410 11:48:47 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1231 -- # grep -q -w SPDK6 00:18:17.410 11:48:47 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1235 -- # return 0 00:18:17.410 11:48:47 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode6 00:18:17.410 11:48:47 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:17.410 11:48:47 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:18:17.410 11:48:47 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:17.410 11:48:47 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@37 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:18:17.410 11:48:47 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode7 00:18:17.410 NQN:nqn.2016-06.io.spdk:cnode7 disconnected 1 controller(s) 00:18:17.410 11:48:47 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@39 -- # waitforserial_disconnect SPDK7 00:18:17.410 11:48:47 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1223 -- # local i=0 00:18:17.410 11:48:47 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1224 -- # lsblk -o NAME,SERIAL 00:18:17.410 11:48:47 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1224 -- # grep -q -w SPDK7 00:18:17.410 11:48:47 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1231 -- # lsblk -l -o NAME,SERIAL 00:18:17.410 11:48:47 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1231 -- # grep -q -w SPDK7 00:18:17.410 11:48:47 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1235 -- # return 0 00:18:17.410 11:48:47 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode7 00:18:17.410 11:48:47 
nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:17.410 11:48:47 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:18:17.410 11:48:47 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:17.410 11:48:47 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@37 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:18:17.410 11:48:47 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode8 00:18:17.410 NQN:nqn.2016-06.io.spdk:cnode8 disconnected 1 controller(s) 00:18:17.410 11:48:47 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@39 -- # waitforserial_disconnect SPDK8 00:18:17.410 11:48:47 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1223 -- # local i=0 00:18:17.410 11:48:47 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1224 -- # lsblk -o NAME,SERIAL 00:18:17.410 11:48:47 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1224 -- # grep -q -w SPDK8 00:18:17.410 11:48:47 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1231 -- # lsblk -l -o NAME,SERIAL 00:18:17.410 11:48:47 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1231 -- # grep -q -w SPDK8 00:18:17.410 11:48:47 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1235 -- # return 0 00:18:17.410 11:48:47 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode8 00:18:17.410 11:48:47 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:17.410 11:48:47 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:18:17.410 11:48:47 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:17.410 11:48:47 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@37 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:18:17.410 11:48:47 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode9 00:18:17.410 NQN:nqn.2016-06.io.spdk:cnode9 disconnected 1 controller(s) 00:18:17.410 11:48:47 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@39 -- # waitforserial_disconnect SPDK9 00:18:17.410 11:48:47 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1223 -- # local i=0 00:18:17.410 11:48:47 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1224 -- # lsblk -o NAME,SERIAL 00:18:17.410 11:48:47 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1224 -- # grep -q -w SPDK9 00:18:17.410 11:48:47 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1231 -- # lsblk -l -o NAME,SERIAL 00:18:17.410 11:48:47 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1231 -- # grep -q -w SPDK9 00:18:17.410 11:48:47 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1235 -- # return 0 00:18:17.410 11:48:47 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode9 00:18:17.410 11:48:47 
nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:17.410 11:48:47 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:18:17.410 11:48:47 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:17.410 11:48:47 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@37 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:18:17.410 11:48:47 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode10 00:18:17.410 NQN:nqn.2016-06.io.spdk:cnode10 disconnected 1 controller(s) 00:18:17.410 11:48:47 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@39 -- # waitforserial_disconnect SPDK10 00:18:17.410 11:48:47 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1223 -- # local i=0 00:18:17.410 11:48:47 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1224 -- # lsblk -o NAME,SERIAL 00:18:17.410 11:48:47 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1224 -- # grep -q -w SPDK10 00:18:17.410 11:48:47 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1231 -- # lsblk -l -o NAME,SERIAL 00:18:17.410 11:48:47 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1231 -- # grep -q -w SPDK10 00:18:17.410 11:48:47 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1235 -- # return 0 00:18:17.410 11:48:47 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode10 00:18:17.410 11:48:47 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:17.410 11:48:47 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:18:17.410 11:48:47 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:17.410 11:48:47 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@37 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:18:17.410 11:48:47 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode11 00:18:17.410 NQN:nqn.2016-06.io.spdk:cnode11 disconnected 1 controller(s) 00:18:17.410 11:48:47 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@39 -- # waitforserial_disconnect SPDK11 00:18:17.410 11:48:47 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1223 -- # local i=0 00:18:17.410 11:48:47 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1224 -- # lsblk -o NAME,SERIAL 00:18:17.410 11:48:47 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1224 -- # grep -q -w SPDK11 00:18:17.669 11:48:47 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1231 -- # grep -q -w SPDK11 00:18:17.669 11:48:47 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1231 -- # lsblk -l -o NAME,SERIAL 00:18:17.669 11:48:47 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1235 -- # return 0 00:18:17.669 11:48:47 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode11 00:18:17.669 
11:48:47 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:17.669 11:48:47 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:18:17.669 11:48:47 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:17.669 11:48:47 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@43 -- # rm -f ./local-job0-0-verify.state 00:18:17.669 11:48:47 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@45 -- # trap - SIGINT SIGTERM EXIT 00:18:17.669 11:48:47 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@47 -- # nvmftestfini 00:18:17.669 11:48:47 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@516 -- # nvmfcleanup 00:18:17.669 11:48:47 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@121 -- # sync 00:18:17.669 11:48:47 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:18:17.669 11:48:47 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@124 -- # set +e 00:18:17.669 11:48:47 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@125 -- # for i in {1..20} 00:18:17.669 11:48:47 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:18:17.669 rmmod nvme_tcp 00:18:17.669 rmmod nvme_fabrics 00:18:17.669 rmmod nvme_keyring 00:18:17.669 11:48:47 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:18:17.669 11:48:47 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@128 -- # set -e 00:18:17.669 11:48:47 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@129 -- # return 0 00:18:17.669 11:48:47 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@517 -- # '[' -n 88175 ']' 00:18:17.669 11:48:47 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@518 -- # killprocess 88175 00:18:17.669 11:48:47 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@954 -- # '[' -z 88175 ']' 00:18:17.669 11:48:47 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@958 -- # kill -0 88175 00:18:17.669 11:48:47 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@959 -- # uname 00:18:17.669 11:48:47 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:18:17.669 11:48:47 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 88175 00:18:17.669 11:48:47 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:18:17.669 11:48:47 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:18:17.669 killing process with pid 88175 00:18:17.669 11:48:47 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@972 -- # echo 'killing process with pid 88175' 00:18:17.669 11:48:47 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@973 -- # kill 88175 00:18:17.669 11:48:47 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@978 -- # wait 88175 00:18:18.249 11:48:48 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:18:18.249 11:48:48 
nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:18:18.249 11:48:48 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:18:18.249 11:48:48 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@297 -- # iptr 00:18:18.249 11:48:48 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@791 -- # iptables-save 00:18:18.249 11:48:48 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:18:18.249 11:48:48 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@791 -- # iptables-restore 00:18:18.250 11:48:48 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@298 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:18:18.250 11:48:48 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@299 -- # nvmf_veth_fini 00:18:18.250 11:48:48 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@233 -- # ip link set nvmf_init_br nomaster 00:18:18.250 11:48:48 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@234 -- # ip link set nvmf_init_br2 nomaster 00:18:18.250 11:48:48 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@235 -- # ip link set nvmf_tgt_br nomaster 00:18:18.250 11:48:48 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@236 -- # ip link set nvmf_tgt_br2 nomaster 00:18:18.250 11:48:48 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@237 -- # ip link set nvmf_init_br down 00:18:18.250 11:48:48 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@238 -- # ip link set nvmf_init_br2 down 00:18:18.250 11:48:48 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@239 -- # ip link set nvmf_tgt_br down 00:18:18.250 11:48:48 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@240 -- # ip link set nvmf_tgt_br2 down 00:18:18.516 11:48:48 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@241 -- # ip link delete nvmf_br type bridge 00:18:18.516 11:48:48 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@242 -- # ip link delete nvmf_init_if 00:18:18.516 11:48:48 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@243 -- # ip link delete nvmf_init_if2 00:18:18.516 11:48:48 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@244 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:18:18.516 11:48:48 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@245 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:18:18.516 11:48:48 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@246 -- # remove_spdk_ns 00:18:18.516 11:48:48 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:18:18.516 11:48:48 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:18:18.516 11:48:48 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:18:18.516 11:48:48 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@300 -- # return 0 00:18:18.516 00:18:18.516 real 0m49.501s 00:18:18.516 user 2m49.304s 00:18:18.516 sys 0m26.067s 00:18:18.516 11:48:48 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1130 -- # xtrace_disable 00:18:18.516 11:48:48 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection 
-- common/autotest_common.sh@10 -- # set +x 00:18:18.516 ************************************ 00:18:18.516 END TEST nvmf_multiconnection 00:18:18.516 ************************************ 00:18:18.516 11:48:48 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@50 -- # run_test nvmf_initiator_timeout /home/vagrant/spdk_repo/spdk/test/nvmf/target/initiator_timeout.sh --transport=tcp 00:18:18.516 11:48:48 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:18:18.516 11:48:48 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:18:18.516 11:48:48 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:18:18.516 ************************************ 00:18:18.516 START TEST nvmf_initiator_timeout 00:18:18.516 ************************************ 00:18:18.516 11:48:48 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/initiator_timeout.sh --transport=tcp 00:18:18.902 * Looking for test storage... 00:18:18.902 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:18:18.902 11:48:48 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:18:18.902 11:48:48 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@1693 -- # lcov --version 00:18:18.902 11:48:48 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:18:18.902 11:48:48 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:18:18.902 11:48:48 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:18:18.902 11:48:48 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- scripts/common.sh@333 -- # local ver1 ver1_l 00:18:18.902 11:48:48 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- scripts/common.sh@334 -- # local ver2 ver2_l 00:18:18.902 11:48:48 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- scripts/common.sh@336 -- # IFS=.-: 00:18:18.902 11:48:48 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- scripts/common.sh@336 -- # read -ra ver1 00:18:18.902 11:48:48 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- scripts/common.sh@337 -- # IFS=.-: 00:18:18.902 11:48:48 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- scripts/common.sh@337 -- # read -ra ver2 00:18:18.902 11:48:48 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- scripts/common.sh@338 -- # local 'op=<' 00:18:18.902 11:48:48 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- scripts/common.sh@340 -- # ver1_l=2 00:18:18.902 11:48:48 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- scripts/common.sh@341 -- # ver2_l=1 00:18:18.902 11:48:48 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:18:18.902 11:48:48 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- scripts/common.sh@344 -- # case "$op" in 00:18:18.902 11:48:48 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- scripts/common.sh@345 -- # : 1 00:18:18.902 11:48:48 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- scripts/common.sh@364 -- # (( v = 0 )) 00:18:18.902 11:48:48 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:18:18.902 11:48:48 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- scripts/common.sh@365 -- # decimal 1 00:18:18.902 11:48:48 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- scripts/common.sh@353 -- # local d=1 00:18:18.902 11:48:48 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:18:18.902 11:48:48 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- scripts/common.sh@355 -- # echo 1 00:18:18.902 11:48:48 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- scripts/common.sh@365 -- # ver1[v]=1 00:18:18.902 11:48:48 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- scripts/common.sh@366 -- # decimal 2 00:18:18.902 11:48:48 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- scripts/common.sh@353 -- # local d=2 00:18:18.902 11:48:48 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:18:18.902 11:48:48 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- scripts/common.sh@355 -- # echo 2 00:18:18.902 11:48:48 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- scripts/common.sh@366 -- # ver2[v]=2 00:18:18.902 11:48:48 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:18:18.902 11:48:48 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:18:18.902 11:48:48 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- scripts/common.sh@368 -- # return 0 00:18:18.902 11:48:48 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:18:18.902 11:48:48 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:18:18.902 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:18:18.902 --rc genhtml_branch_coverage=1 00:18:18.902 --rc genhtml_function_coverage=1 00:18:18.902 --rc genhtml_legend=1 00:18:18.902 --rc geninfo_all_blocks=1 00:18:18.902 --rc geninfo_unexecuted_blocks=1 00:18:18.902 00:18:18.902 ' 00:18:18.902 11:48:48 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:18:18.902 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:18:18.902 --rc genhtml_branch_coverage=1 00:18:18.902 --rc genhtml_function_coverage=1 00:18:18.902 --rc genhtml_legend=1 00:18:18.902 --rc geninfo_all_blocks=1 00:18:18.902 --rc geninfo_unexecuted_blocks=1 00:18:18.902 00:18:18.902 ' 00:18:18.902 11:48:48 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:18:18.902 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:18:18.902 --rc genhtml_branch_coverage=1 00:18:18.902 --rc genhtml_function_coverage=1 00:18:18.902 --rc genhtml_legend=1 00:18:18.902 --rc geninfo_all_blocks=1 00:18:18.902 --rc geninfo_unexecuted_blocks=1 00:18:18.902 00:18:18.902 ' 00:18:18.902 11:48:48 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:18:18.902 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:18:18.902 --rc genhtml_branch_coverage=1 00:18:18.903 --rc genhtml_function_coverage=1 00:18:18.903 --rc genhtml_legend=1 00:18:18.903 --rc geninfo_all_blocks=1 00:18:18.903 --rc geninfo_unexecuted_blocks=1 00:18:18.903 00:18:18.903 ' 00:18:18.903 11:48:48 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@9 -- # 
source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:18:18.903 11:48:48 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@7 -- # uname -s 00:18:18.903 11:48:48 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:18:18.903 11:48:48 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:18:18.903 11:48:48 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:18:18.903 11:48:48 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:18:18.903 11:48:48 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:18:18.903 11:48:48 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:18:18.903 11:48:48 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:18:18.903 11:48:48 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:18:18.903 11:48:48 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:18:18.903 11:48:48 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:18:18.903 11:48:48 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:f820f793-c892-4aa4-a8a4-5ed3fda41d6c 00:18:18.903 11:48:48 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@18 -- # NVME_HOSTID=f820f793-c892-4aa4-a8a4-5ed3fda41d6c 00:18:18.903 11:48:48 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:18:18.903 11:48:48 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:18:18.903 11:48:48 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:18:18.903 11:48:48 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:18:18.903 11:48:48 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:18:18.903 11:48:48 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- scripts/common.sh@15 -- # shopt -s extglob 00:18:18.903 11:48:48 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:18:18.903 11:48:48 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:18:18.903 11:48:48 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:18:18.903 11:48:48 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:18.903 11:48:48 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:18.903 11:48:48 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:18.903 11:48:48 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- paths/export.sh@5 -- # export PATH 00:18:18.903 11:48:48 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:18.903 11:48:48 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@51 -- # : 0 00:18:18.903 11:48:48 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:18:18.903 11:48:48 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:18:18.903 11:48:48 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:18:18.903 11:48:48 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:18:18.903 11:48:48 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:18:18.903 11:48:48 
nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:18:18.903 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:18:18.903 11:48:48 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:18:18.903 11:48:48 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:18:18.903 11:48:48 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@55 -- # have_pci_nics=0 00:18:18.903 11:48:48 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@11 -- # MALLOC_BDEV_SIZE=64 00:18:18.903 11:48:48 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:18:18.903 11:48:48 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@14 -- # nvmftestinit 00:18:18.903 11:48:48 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:18:18.903 11:48:48 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:18:18.903 11:48:48 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@476 -- # prepare_net_devs 00:18:18.903 11:48:48 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@438 -- # local -g is_hw=no 00:18:18.903 11:48:48 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@440 -- # remove_spdk_ns 00:18:18.903 11:48:48 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:18:18.903 11:48:48 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:18:18.903 11:48:48 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:18:18.903 11:48:48 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@442 -- # [[ virt != virt ]] 00:18:18.903 11:48:48 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@444 -- # [[ no == yes ]] 00:18:18.903 11:48:48 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@451 -- # [[ virt == phy ]] 00:18:18.903 11:48:48 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@454 -- # [[ virt == phy-fallback ]] 00:18:18.903 11:48:48 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@459 -- # [[ tcp == tcp ]] 00:18:18.903 11:48:48 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@460 -- # nvmf_veth_init 00:18:18.903 11:48:48 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@145 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:18:18.903 11:48:48 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@146 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.2 00:18:18.903 11:48:48 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@147 -- # NVMF_FIRST_TARGET_IP=10.0.0.3 00:18:18.904 11:48:48 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@148 -- # NVMF_SECOND_TARGET_IP=10.0.0.4 00:18:18.904 11:48:48 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@149 -- # NVMF_INITIATOR_IP=10.0.0.1 00:18:18.904 11:48:48 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@150 -- # NVMF_BRIDGE=nvmf_br 00:18:18.904 11:48:48 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@151 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 
00:18:18.904 11:48:48 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@152 -- # NVMF_INITIATOR_INTERFACE2=nvmf_init_if2 00:18:18.904 11:48:48 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@153 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:18:18.904 11:48:48 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@154 -- # NVMF_INITIATOR_BRIDGE2=nvmf_init_br2 00:18:18.904 11:48:48 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@155 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:18:18.904 11:48:48 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@156 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:18:18.904 11:48:48 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@157 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:18:18.904 11:48:48 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@158 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:18:18.904 11:48:48 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@159 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:18:18.904 11:48:48 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@160 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:18:18.904 11:48:48 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@162 -- # ip link set nvmf_init_br nomaster 00:18:18.904 Cannot find device "nvmf_init_br" 00:18:18.904 11:48:48 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@162 -- # true 00:18:18.904 11:48:48 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@163 -- # ip link set nvmf_init_br2 nomaster 00:18:18.904 Cannot find device "nvmf_init_br2" 00:18:18.904 11:48:48 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@163 -- # true 00:18:18.904 11:48:48 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@164 -- # ip link set nvmf_tgt_br nomaster 00:18:18.904 Cannot find device "nvmf_tgt_br" 00:18:18.904 11:48:48 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@164 -- # true 00:18:18.904 11:48:48 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@165 -- # ip link set nvmf_tgt_br2 nomaster 00:18:18.904 Cannot find device "nvmf_tgt_br2" 00:18:18.904 11:48:48 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@165 -- # true 00:18:18.904 11:48:48 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@166 -- # ip link set nvmf_init_br down 00:18:18.904 Cannot find device "nvmf_init_br" 00:18:18.904 11:48:48 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@166 -- # true 00:18:18.904 11:48:48 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@167 -- # ip link set nvmf_init_br2 down 00:18:18.904 Cannot find device "nvmf_init_br2" 00:18:18.904 11:48:48 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@167 -- # true 00:18:18.904 11:48:48 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@168 -- # ip link set nvmf_tgt_br down 00:18:18.904 Cannot find device "nvmf_tgt_br" 00:18:18.904 11:48:48 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@168 -- # true 00:18:18.904 11:48:48 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@169 -- # ip link set nvmf_tgt_br2 down 00:18:18.904 Cannot find device "nvmf_tgt_br2" 00:18:18.904 11:48:48 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@169 -- # true 00:18:18.904 11:48:48 
nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@170 -- # ip link delete nvmf_br type bridge 00:18:18.904 Cannot find device "nvmf_br" 00:18:18.904 11:48:48 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@170 -- # true 00:18:18.904 11:48:48 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@171 -- # ip link delete nvmf_init_if 00:18:18.904 Cannot find device "nvmf_init_if" 00:18:18.904 11:48:48 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@171 -- # true 00:18:18.904 11:48:48 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@172 -- # ip link delete nvmf_init_if2 00:18:18.904 Cannot find device "nvmf_init_if2" 00:18:18.904 11:48:48 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@172 -- # true 00:18:18.904 11:48:48 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@173 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:18:18.904 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:18:18.904 11:48:48 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@173 -- # true 00:18:18.904 11:48:48 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@174 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:18:18.904 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:18:18.904 11:48:48 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@174 -- # true 00:18:18.904 11:48:48 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@177 -- # ip netns add nvmf_tgt_ns_spdk 00:18:18.904 11:48:48 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@180 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:18:18.904 11:48:48 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@181 -- # ip link add nvmf_init_if2 type veth peer name nvmf_init_br2 00:18:18.904 11:48:48 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@182 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:18:18.904 11:48:48 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@183 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:18:18.904 11:48:48 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:18:18.904 11:48:48 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@187 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:18:18.904 11:48:48 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@190 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:18:19.190 11:48:48 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@191 -- # ip addr add 10.0.0.2/24 dev nvmf_init_if2 00:18:19.190 11:48:48 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@192 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if 00:18:19.190 11:48:48 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@193 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2 00:18:19.190 11:48:48 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@196 -- # ip link set nvmf_init_if up 00:18:19.190 11:48:48 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@197 -- # ip link set nvmf_init_if2 up 00:18:19.190 11:48:49 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@198 -- # ip link set 
nvmf_init_br up 00:18:19.190 11:48:49 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@199 -- # ip link set nvmf_init_br2 up 00:18:19.190 11:48:49 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@200 -- # ip link set nvmf_tgt_br up 00:18:19.190 11:48:49 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@201 -- # ip link set nvmf_tgt_br2 up 00:18:19.190 11:48:49 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@202 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:18:19.190 11:48:49 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@203 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:18:19.190 11:48:49 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@204 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:18:19.190 11:48:49 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@207 -- # ip link add nvmf_br type bridge 00:18:19.190 11:48:49 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@208 -- # ip link set nvmf_br up 00:18:19.190 11:48:49 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@211 -- # ip link set nvmf_init_br master nvmf_br 00:18:19.190 11:48:49 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@212 -- # ip link set nvmf_init_br2 master nvmf_br 00:18:19.190 11:48:49 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@213 -- # ip link set nvmf_tgt_br master nvmf_br 00:18:19.190 11:48:49 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@214 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:18:19.190 11:48:49 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@217 -- # ipts -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:18:19.190 11:48:49 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT' 00:18:19.190 11:48:49 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@218 -- # ipts -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT 00:18:19.190 11:48:49 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT' 00:18:19.190 11:48:49 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@219 -- # ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:18:19.190 11:48:49 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@790 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT' 00:18:19.190 11:48:49 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@222 -- # ping -c 1 10.0.0.3 00:18:19.190 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:18:19.190 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.075 ms 00:18:19.190 00:18:19.190 --- 10.0.0.3 ping statistics --- 00:18:19.190 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:18:19.190 rtt min/avg/max/mdev = 0.075/0.075/0.075/0.000 ms 00:18:19.190 11:48:49 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@223 -- # ping -c 1 10.0.0.4 00:18:19.190 PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data. 
00:18:19.190 64 bytes from 10.0.0.4: icmp_seq=1 ttl=64 time=0.049 ms 00:18:19.190 00:18:19.190 --- 10.0.0.4 ping statistics --- 00:18:19.190 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:18:19.190 rtt min/avg/max/mdev = 0.049/0.049/0.049/0.000 ms 00:18:19.190 11:48:49 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@224 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:18:19.190 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:18:19.190 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.024 ms 00:18:19.190 00:18:19.190 --- 10.0.0.1 ping statistics --- 00:18:19.190 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:18:19.190 rtt min/avg/max/mdev = 0.024/0.024/0.024/0.000 ms 00:18:19.190 11:48:49 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@225 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.2 00:18:19.190 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:18:19.190 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.108 ms 00:18:19.190 00:18:19.190 --- 10.0.0.2 ping statistics --- 00:18:19.190 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:18:19.190 rtt min/avg/max/mdev = 0.108/0.108/0.108/0.000 ms 00:18:19.190 11:48:49 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@227 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:18:19.190 11:48:49 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@461 -- # return 0 00:18:19.190 11:48:49 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:18:19.190 11:48:49 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:18:19.190 11:48:49 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:18:19.190 11:48:49 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:18:19.190 11:48:49 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:18:19.190 11:48:49 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:18:19.190 11:48:49 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:18:19.190 11:48:49 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@15 -- # nvmfappstart -m 0xF 00:18:19.190 11:48:49 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:18:19.190 11:48:49 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@726 -- # xtrace_disable 00:18:19.190 11:48:49 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:18:19.190 11:48:49 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@509 -- # nvmfpid=89280 00:18:19.190 11:48:49 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@508 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:18:19.190 11:48:49 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@510 -- # waitforlisten 89280 00:18:19.190 11:48:49 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@835 -- # '[' -z 89280 ']' 00:18:19.190 11:48:49 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:18:19.190 11:48:49 
nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@840 -- # local max_retries=100 00:18:19.190 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:18:19.190 11:48:49 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:18:19.190 11:48:49 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@844 -- # xtrace_disable 00:18:19.190 11:48:49 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:18:19.190 [2024-11-28 11:48:49.218026] Starting SPDK v25.01-pre git sha1 35cd3e84d / DPDK 24.11.0-rc4 initialization... 00:18:19.190 [2024-11-28 11:48:49.218119] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:18:19.450 [2024-11-28 11:48:49.345951] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.11.0-rc4 is used. There is no support for it in SPDK. Enabled only for validation. 00:18:19.450 [2024-11-28 11:48:49.367121] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:18:19.450 [2024-11-28 11:48:49.421122] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:18:19.450 [2024-11-28 11:48:49.421199] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:18:19.451 [2024-11-28 11:48:49.421210] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:18:19.451 [2024-11-28 11:48:49.421217] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:18:19.451 [2024-11-28 11:48:49.421224] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
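The nvmf_veth_init and nvmfappstart steps traced above reduce to the condensed sketch below (same device names, addresses and flags as in this run; the second initiator/target veth pair, the iptables comment tagging, the FORWARD rule and waitforlisten's polling loop are left out, and the backgrounding/pid capture is illustrative rather than the exact common.sh wording):

  # isolated namespace for the target, plus veth pairs whose host-side peers get bridged
  ip netns add nvmf_tgt_ns_spdk
  ip link add nvmf_init_if type veth peer name nvmf_init_br
  ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br
  ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk

  # initiator side stays on the host (10.0.0.1), target side lives in the namespace (10.0.0.3)
  ip addr add 10.0.0.1/24 dev nvmf_init_if
  ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if
  ip link set nvmf_init_if up
  ip link set nvmf_init_br up
  ip link set nvmf_tgt_br up
  ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
  ip netns exec nvmf_tgt_ns_spdk ip link set lo up

  # bridge the host-side peers together and open TCP/4420 for NVMe-oF traffic
  ip link add nvmf_br type bridge
  ip link set nvmf_br up
  ip link set nvmf_init_br master nvmf_br
  ip link set nvmf_tgt_br master nvmf_br
  iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT

  # start the SPDK target inside the namespace; waitforlisten then polls /var/tmp/spdk.sock
  ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF &
  nvmfpid=$!   # 89280 in this run

The four ping checks at nvmf/common.sh@222-225 (10.0.0.3 and 10.0.0.4 from the host, 10.0.0.1 and 10.0.0.2 from inside nvmf_tgt_ns_spdk) confirm that this topology carries traffic before the target application is launched.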
00:18:19.451 [2024-11-28 11:48:49.422606] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:18:19.451 [2024-11-28 11:48:49.422749] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:18:19.451 [2024-11-28 11:48:49.422934] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:18:19.451 [2024-11-28 11:48:49.422941] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:18:19.451 [2024-11-28 11:48:49.494554] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:18:20.389 11:48:50 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:18:20.389 11:48:50 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@868 -- # return 0 00:18:20.389 11:48:50 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:18:20.389 11:48:50 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@732 -- # xtrace_disable 00:18:20.389 11:48:50 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:18:20.389 11:48:50 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:18:20.389 11:48:50 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@17 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; killprocess $nvmfpid; nvmftestfini $1; exit 1' SIGINT SIGTERM EXIT 00:18:20.389 11:48:50 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@19 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:18:20.389 11:48:50 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:20.389 11:48:50 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:18:20.389 Malloc0 00:18:20.389 11:48:50 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:20.389 11:48:50 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@22 -- # rpc_cmd bdev_delay_create -b Malloc0 -d Delay0 -r 30 -t 30 -w 30 -n 30 00:18:20.389 11:48:50 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:20.389 11:48:50 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:18:20.389 Delay0 00:18:20.389 11:48:50 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:20.389 11:48:50 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@24 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:18:20.389 11:48:50 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:20.389 11:48:50 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:18:20.389 [2024-11-28 11:48:50.302596] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:18:20.389 11:48:50 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:20.389 11:48:50 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@25 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:18:20.389 11:48:50 
nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:20.389 11:48:50 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:18:20.389 11:48:50 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:20.389 11:48:50 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@26 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:18:20.389 11:48:50 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:20.389 11:48:50 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:18:20.389 11:48:50 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:20.389 11:48:50 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 00:18:20.389 11:48:50 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:20.389 11:48:50 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:18:20.389 [2024-11-28 11:48:50.331226] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:18:20.389 11:48:50 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:20.389 11:48:50 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@29 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:f820f793-c892-4aa4-a8a4-5ed3fda41d6c --hostid=f820f793-c892-4aa4-a8a4-5ed3fda41d6c -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.3 -s 4420 00:18:20.389 11:48:50 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@31 -- # waitforserial SPDKISFASTANDAWESOME 00:18:20.389 11:48:50 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@1202 -- # local i=0 00:18:20.389 11:48:50 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0 00:18:20.389 11:48:50 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@1204 -- # [[ -n '' ]] 00:18:20.389 11:48:50 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@1209 -- # sleep 2 00:18:22.924 11:48:52 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 00:18:22.924 11:48:52 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL 00:18:22.924 11:48:52 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@1211 -- # grep -c SPDKISFASTANDAWESOME 00:18:22.924 11:48:52 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@1211 -- # nvme_devices=1 00:18:22.924 11:48:52 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter )) 00:18:22.924 11:48:52 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@1212 -- # return 0 00:18:22.924 11:48:52 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@35 -- # fio_pid=89349 00:18:22.924 11:48:52 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- 
target/initiator_timeout.sh@37 -- # sleep 3 00:18:22.924 11:48:52 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t write -r 60 -v 00:18:22.924 [global] 00:18:22.924 thread=1 00:18:22.924 invalidate=1 00:18:22.924 rw=write 00:18:22.924 time_based=1 00:18:22.924 runtime=60 00:18:22.924 ioengine=libaio 00:18:22.924 direct=1 00:18:22.924 bs=4096 00:18:22.924 iodepth=1 00:18:22.924 norandommap=0 00:18:22.924 numjobs=1 00:18:22.924 00:18:22.924 verify_dump=1 00:18:22.924 verify_backlog=512 00:18:22.924 verify_state_save=0 00:18:22.924 do_verify=1 00:18:22.924 verify=crc32c-intel 00:18:22.924 [job0] 00:18:22.924 filename=/dev/nvme0n1 00:18:22.924 Could not set queue depth (nvme0n1) 00:18:22.924 job0: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:18:22.924 fio-3.35 00:18:22.924 Starting 1 thread 00:18:25.453 11:48:55 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@40 -- # rpc_cmd bdev_delay_update_latency Delay0 avg_read 31000000 00:18:25.453 11:48:55 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:25.453 11:48:55 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:18:25.453 true 00:18:25.453 11:48:55 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:25.453 11:48:55 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@41 -- # rpc_cmd bdev_delay_update_latency Delay0 avg_write 31000000 00:18:25.453 11:48:55 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:25.453 11:48:55 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:18:25.453 true 00:18:25.453 11:48:55 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:25.453 11:48:55 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@42 -- # rpc_cmd bdev_delay_update_latency Delay0 p99_read 31000000 00:18:25.453 11:48:55 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:25.453 11:48:55 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:18:25.453 true 00:18:25.453 11:48:55 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:25.453 11:48:55 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@43 -- # rpc_cmd bdev_delay_update_latency Delay0 p99_write 310000000 00:18:25.453 11:48:55 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:25.453 11:48:55 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:18:25.453 true 00:18:25.453 11:48:55 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:25.453 11:48:55 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@45 -- # sleep 3 00:18:28.736 11:48:58 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@48 -- # rpc_cmd bdev_delay_update_latency Delay0 avg_read 30 00:18:28.736 11:48:58 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:18:28.736 11:48:58 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:18:28.736 true 00:18:28.737 11:48:58 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:28.737 11:48:58 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@49 -- # rpc_cmd bdev_delay_update_latency Delay0 avg_write 30 00:18:28.737 11:48:58 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:28.737 11:48:58 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:18:28.737 true 00:18:28.737 11:48:58 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:28.737 11:48:58 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@50 -- # rpc_cmd bdev_delay_update_latency Delay0 p99_read 30 00:18:28.737 11:48:58 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:28.737 11:48:58 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:18:28.737 true 00:18:28.737 11:48:58 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:28.737 11:48:58 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@51 -- # rpc_cmd bdev_delay_update_latency Delay0 p99_write 30 00:18:28.737 11:48:58 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:28.737 11:48:58 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:18:28.737 true 00:18:28.737 11:48:58 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:28.737 11:48:58 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@53 -- # fio_status=0 00:18:28.737 11:48:58 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@54 -- # wait 89349 00:19:24.967 00:19:24.967 job0: (groupid=0, jobs=1): err= 0: pid=89371: Thu Nov 28 11:49:52 2024 00:19:24.967 read: IOPS=665, BW=2662KiB/s (2726kB/s)(156MiB/60001msec) 00:19:24.967 slat (usec): min=10, max=108, avg=15.63, stdev= 6.32 00:19:24.967 clat (usec): min=159, max=2059, avg=255.40, stdev=36.79 00:19:24.967 lat (usec): min=178, max=2077, avg=271.03, stdev=37.64 00:19:24.967 clat percentiles (usec): 00:19:24.967 | 1.00th=[ 196], 5.00th=[ 208], 10.00th=[ 217], 20.00th=[ 229], 00:19:24.967 | 30.00th=[ 237], 40.00th=[ 243], 50.00th=[ 251], 60.00th=[ 260], 00:19:24.967 | 70.00th=[ 269], 80.00th=[ 281], 90.00th=[ 302], 95.00th=[ 318], 00:19:24.967 | 99.00th=[ 367], 99.50th=[ 388], 99.90th=[ 441], 99.95th=[ 465], 00:19:24.967 | 99.99th=[ 955] 00:19:24.967 write: IOPS=667, BW=2672KiB/s (2736kB/s)(157MiB/60001msec); 0 zone resets 00:19:24.967 slat (usec): min=13, max=13649, avg=24.19, stdev=82.09 00:19:24.967 clat (usec): min=41, max=40428k, avg=1199.45, stdev=201950.20 00:19:24.967 lat (usec): min=139, max=40428k, avg=1223.63, stdev=201950.25 00:19:24.967 clat percentiles (usec): 00:19:24.967 | 1.00th=[ 139], 5.00th=[ 149], 10.00th=[ 155], 20.00th=[ 163], 00:19:24.967 | 30.00th=[ 172], 40.00th=[ 180], 50.00th=[ 186], 60.00th=[ 194], 00:19:24.967 | 70.00th=[ 202], 80.00th=[ 215], 90.00th=[ 235], 95.00th=[ 251], 00:19:24.967 | 99.00th=[ 289], 
99.50th=[ 310], 99.90th=[ 392], 99.95th=[ 433], 00:19:24.967 | 99.99th=[ 807] 00:19:24.967 bw ( KiB/s): min= 4736, max= 9328, per=100.00%, avg=8056.56, stdev=842.17, samples=39 00:19:24.967 iops : min= 1184, max= 2332, avg=2014.10, stdev=210.54, samples=39 00:19:24.967 lat (usec) : 50=0.01%, 100=0.01%, 250=72.09%, 500=27.86%, 750=0.02% 00:19:24.967 lat (usec) : 1000=0.01% 00:19:24.967 lat (msec) : 2=0.01%, 4=0.01%, >=2000=0.01% 00:19:24.967 cpu : usr=0.51%, sys=2.02%, ctx=80044, majf=0, minf=5 00:19:24.967 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:19:24.967 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:24.967 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:24.967 issued rwts: total=39936,40074,0,0 short=0,0,0,0 dropped=0,0,0,0 00:19:24.967 latency : target=0, window=0, percentile=100.00%, depth=1 00:19:24.967 00:19:24.967 Run status group 0 (all jobs): 00:19:24.968 READ: bw=2662KiB/s (2726kB/s), 2662KiB/s-2662KiB/s (2726kB/s-2726kB/s), io=156MiB (164MB), run=60001-60001msec 00:19:24.968 WRITE: bw=2672KiB/s (2736kB/s), 2672KiB/s-2672KiB/s (2736kB/s-2736kB/s), io=157MiB (164MB), run=60001-60001msec 00:19:24.968 00:19:24.968 Disk stats (read/write): 00:19:24.968 nvme0n1: ios=39887/39936, merge=0/0, ticks=10588/8164, in_queue=18752, util=99.86% 00:19:24.968 11:49:52 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@56 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:19:24.968 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:19:24.968 11:49:52 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@57 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:19:24.968 11:49:52 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@1223 -- # local i=0 00:19:24.968 11:49:52 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@1224 -- # lsblk -o NAME,SERIAL 00:19:24.968 11:49:52 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@1224 -- # grep -q -w SPDKISFASTANDAWESOME 00:19:24.968 11:49:52 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@1231 -- # lsblk -l -o NAME,SERIAL 00:19:24.968 11:49:52 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@1231 -- # grep -q -w SPDKISFASTANDAWESOME 00:19:24.968 nvmf hotplug test: fio successful as expected 00:19:24.968 11:49:52 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@1235 -- # return 0 00:19:24.968 11:49:52 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@59 -- # '[' 0 -eq 0 ']' 00:19:24.968 11:49:52 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@60 -- # echo 'nvmf hotplug test: fio successful as expected' 00:19:24.968 11:49:52 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@67 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:19:24.968 11:49:52 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:24.968 11:49:52 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:19:24.968 11:49:52 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:24.968 11:49:52 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@69 -- # rm -f 
./local-job0-0-verify.state 00:19:24.968 11:49:52 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@71 -- # trap - SIGINT SIGTERM EXIT 00:19:24.968 11:49:52 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@73 -- # nvmftestfini 00:19:24.968 11:49:52 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@516 -- # nvmfcleanup 00:19:24.968 11:49:52 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@121 -- # sync 00:19:24.968 11:49:52 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:19:24.968 11:49:52 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@124 -- # set +e 00:19:24.968 11:49:52 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@125 -- # for i in {1..20} 00:19:24.968 11:49:52 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:19:24.968 rmmod nvme_tcp 00:19:24.968 rmmod nvme_fabrics 00:19:24.968 rmmod nvme_keyring 00:19:24.968 11:49:53 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:19:24.968 11:49:53 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@128 -- # set -e 00:19:24.968 11:49:53 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@129 -- # return 0 00:19:24.968 11:49:53 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@517 -- # '[' -n 89280 ']' 00:19:24.968 11:49:53 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@518 -- # killprocess 89280 00:19:24.968 11:49:53 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@954 -- # '[' -z 89280 ']' 00:19:24.968 11:49:53 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@958 -- # kill -0 89280 00:19:24.968 11:49:53 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@959 -- # uname 00:19:24.968 11:49:53 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:19:24.968 11:49:53 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 89280 00:19:24.968 killing process with pid 89280 00:19:24.968 11:49:53 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:19:24.968 11:49:53 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:19:24.968 11:49:53 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@972 -- # echo 'killing process with pid 89280' 00:19:24.968 11:49:53 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@973 -- # kill 89280 00:19:24.968 11:49:53 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@978 -- # wait 89280 00:19:24.968 11:49:53 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:19:24.968 11:49:53 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:19:24.968 11:49:53 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:19:24.968 11:49:53 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@297 -- # iptr 00:19:24.968 11:49:53 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@791 -- # iptables-save 00:19:24.968 
11:49:53 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:19:24.968 11:49:53 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@791 -- # iptables-restore 00:19:24.968 11:49:53 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@298 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:19:24.968 11:49:53 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@299 -- # nvmf_veth_fini 00:19:24.968 11:49:53 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@233 -- # ip link set nvmf_init_br nomaster 00:19:24.968 11:49:53 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@234 -- # ip link set nvmf_init_br2 nomaster 00:19:24.968 11:49:53 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@235 -- # ip link set nvmf_tgt_br nomaster 00:19:24.968 11:49:53 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@236 -- # ip link set nvmf_tgt_br2 nomaster 00:19:24.968 11:49:53 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@237 -- # ip link set nvmf_init_br down 00:19:24.968 11:49:53 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@238 -- # ip link set nvmf_init_br2 down 00:19:24.968 11:49:53 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@239 -- # ip link set nvmf_tgt_br down 00:19:24.968 11:49:53 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@240 -- # ip link set nvmf_tgt_br2 down 00:19:24.968 11:49:53 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@241 -- # ip link delete nvmf_br type bridge 00:19:24.968 11:49:53 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@242 -- # ip link delete nvmf_init_if 00:19:24.968 11:49:53 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@243 -- # ip link delete nvmf_init_if2 00:19:24.968 11:49:53 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@244 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:19:24.968 11:49:53 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@245 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:19:24.968 11:49:53 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@246 -- # remove_spdk_ns 00:19:24.968 11:49:53 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:19:24.968 11:49:53 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:19:24.968 11:49:53 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:19:24.968 11:49:53 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@300 -- # return 0 00:19:24.968 00:19:24.968 real 1m5.029s 00:19:24.968 user 4m0.059s 00:19:24.968 sys 0m15.620s 00:19:24.968 11:49:53 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@1130 -- # xtrace_disable 00:19:24.968 11:49:53 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:19:24.968 ************************************ 00:19:24.968 END TEST nvmf_initiator_timeout 00:19:24.968 ************************************ 00:19:24.968 11:49:53 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@53 -- # [[ virt == phy ]] 00:19:24.968 11:49:53 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@67 -- # run_test 
nvmf_nsid /home/vagrant/spdk_repo/spdk/test/nvmf/target/nsid.sh --transport=tcp 00:19:24.968 11:49:53 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:19:24.968 11:49:53 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:19:24.968 11:49:53 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:19:24.968 ************************************ 00:19:24.968 START TEST nvmf_nsid 00:19:24.968 ************************************ 00:19:24.968 11:49:53 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/nsid.sh --transport=tcp 00:19:24.968 * Looking for test storage... 00:19:24.968 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:19:24.968 11:49:53 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:19:24.968 11:49:53 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1693 -- # lcov --version 00:19:24.968 11:49:53 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:19:24.968 11:49:53 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:19:24.968 11:49:53 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:19:24.968 11:49:53 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@333 -- # local ver1 ver1_l 00:19:24.968 11:49:53 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@334 -- # local ver2 ver2_l 00:19:24.968 11:49:53 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@336 -- # IFS=.-: 00:19:24.968 11:49:53 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@336 -- # read -ra ver1 00:19:24.968 11:49:53 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@337 -- # IFS=.-: 00:19:24.968 11:49:53 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@337 -- # read -ra ver2 00:19:24.968 11:49:53 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@338 -- # local 'op=<' 00:19:24.968 11:49:53 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@340 -- # ver1_l=2 00:19:24.968 11:49:53 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@341 -- # ver2_l=1 00:19:24.968 11:49:53 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:19:24.968 11:49:53 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@344 -- # case "$op" in 00:19:24.968 11:49:53 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@345 -- # : 1 00:19:24.968 11:49:53 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@364 -- # (( v = 0 )) 00:19:24.968 11:49:53 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:19:24.968 11:49:53 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@365 -- # decimal 1 00:19:24.968 11:49:53 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@353 -- # local d=1 00:19:24.968 11:49:53 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:19:24.969 11:49:53 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@355 -- # echo 1 00:19:24.969 11:49:53 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@365 -- # ver1[v]=1 00:19:24.969 11:49:53 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@366 -- # decimal 2 00:19:24.969 11:49:53 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@353 -- # local d=2 00:19:24.969 11:49:53 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:19:24.969 11:49:53 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@355 -- # echo 2 00:19:24.969 11:49:53 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@366 -- # ver2[v]=2 00:19:24.969 11:49:53 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:19:24.969 11:49:53 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:19:24.969 11:49:53 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@368 -- # return 0 00:19:24.969 11:49:53 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:19:24.969 11:49:53 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:19:24.969 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:19:24.969 --rc genhtml_branch_coverage=1 00:19:24.969 --rc genhtml_function_coverage=1 00:19:24.969 --rc genhtml_legend=1 00:19:24.969 --rc geninfo_all_blocks=1 00:19:24.969 --rc geninfo_unexecuted_blocks=1 00:19:24.969 00:19:24.969 ' 00:19:24.969 11:49:53 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:19:24.969 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:19:24.969 --rc genhtml_branch_coverage=1 00:19:24.969 --rc genhtml_function_coverage=1 00:19:24.969 --rc genhtml_legend=1 00:19:24.969 --rc geninfo_all_blocks=1 00:19:24.969 --rc geninfo_unexecuted_blocks=1 00:19:24.969 00:19:24.969 ' 00:19:24.969 11:49:53 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:19:24.969 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:19:24.969 --rc genhtml_branch_coverage=1 00:19:24.969 --rc genhtml_function_coverage=1 00:19:24.969 --rc genhtml_legend=1 00:19:24.969 --rc geninfo_all_blocks=1 00:19:24.969 --rc geninfo_unexecuted_blocks=1 00:19:24.969 00:19:24.969 ' 00:19:24.969 11:49:53 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:19:24.969 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:19:24.969 --rc genhtml_branch_coverage=1 00:19:24.969 --rc genhtml_function_coverage=1 00:19:24.969 --rc genhtml_legend=1 00:19:24.969 --rc geninfo_all_blocks=1 00:19:24.969 --rc geninfo_unexecuted_blocks=1 00:19:24.969 00:19:24.969 ' 00:19:24.969 11:49:53 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:19:24.969 11:49:53 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@7 -- # uname -s 00:19:24.969 11:49:53 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 
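The lt/cmp_versions/decimal trace just above is a component-wise version check: both version strings are split on '.', '-' and ':' (the IFS=.-: reads), each field is reduced to its leading decimal value, and the fields are compared left to right; in this run 'lt 1.15 2' returns 0 (true), after which lcov_rc_opt is set to '--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1'. A simplified stand-alone sketch of the same idea (not the exact scripts/common.sh source) is:

  # version_lt A B: succeed when version A sorts before version B
  version_lt() {
      local -a v1 v2
      local i n a b
      IFS=.-: read -ra v1 <<< "$1"
      IFS=.-: read -ra v2 <<< "$2"
      n=$(( ${#v1[@]} > ${#v2[@]} ? ${#v1[@]} : ${#v2[@]} ))
      for (( i = 0; i < n; i++ )); do
          # missing fields count as 0, non-numeric suffixes are dropped
          a=${v1[i]:-0}; [[ $a =~ ^[0-9]+ ]] && a=${BASH_REMATCH[0]} || a=0
          b=${v2[i]:-0}; [[ $b =~ ^[0-9]+ ]] && b=${BASH_REMATCH[0]} || b=0
          (( a < b )) && return 0
          (( a > b )) && return 1
      done
      return 1   # equal versions are not "less than"
  }

  version_lt 1.15 2 && echo "lcov reported as older than 2"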
00:19:24.969 11:49:53 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:19:24.969 11:49:53 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:19:24.969 11:49:53 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:19:24.969 11:49:53 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:19:24.969 11:49:53 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:19:24.969 11:49:53 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:19:24.969 11:49:53 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:19:24.969 11:49:53 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:19:24.969 11:49:53 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:19:24.969 11:49:53 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:f820f793-c892-4aa4-a8a4-5ed3fda41d6c 00:19:24.969 11:49:53 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@18 -- # NVME_HOSTID=f820f793-c892-4aa4-a8a4-5ed3fda41d6c 00:19:24.969 11:49:53 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:19:24.969 11:49:53 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:19:24.969 11:49:53 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:19:24.969 11:49:53 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:19:24.969 11:49:53 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:19:24.969 11:49:53 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@15 -- # shopt -s extglob 00:19:24.969 11:49:53 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:19:24.969 11:49:53 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:19:24.969 11:49:53 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:19:24.969 11:49:53 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:24.969 11:49:53 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:24.969 11:49:53 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:24.969 11:49:53 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- paths/export.sh@5 -- # export PATH 00:19:24.969 11:49:53 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:24.969 11:49:53 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@51 -- # : 0 00:19:24.969 11:49:53 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:19:24.969 11:49:53 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:19:24.969 11:49:53 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:19:24.969 11:49:53 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:19:24.969 11:49:53 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:19:24.969 11:49:53 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:19:24.969 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:19:24.969 11:49:53 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:19:24.969 11:49:53 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:19:24.969 11:49:53 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@55 -- # have_pci_nics=0 00:19:24.969 11:49:53 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@11 -- # subnqn1=nqn.2024-10.io.spdk:cnode0 00:19:24.969 11:49:53 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@12 -- # subnqn2=nqn.2024-10.io.spdk:cnode1 00:19:24.969 11:49:53 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@13 -- # 
subnqn3=nqn.2024-10.io.spdk:cnode2 00:19:24.969 11:49:53 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@14 -- # tgt2sock=/var/tmp/tgt2.sock 00:19:24.969 11:49:53 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@15 -- # tgt2pid= 00:19:24.969 11:49:53 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@46 -- # nvmftestinit 00:19:24.969 11:49:53 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:19:24.969 11:49:53 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:19:24.969 11:49:53 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@476 -- # prepare_net_devs 00:19:24.969 11:49:53 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@438 -- # local -g is_hw=no 00:19:24.969 11:49:53 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@440 -- # remove_spdk_ns 00:19:24.969 11:49:53 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:19:24.969 11:49:53 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:19:24.969 11:49:53 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:19:24.969 11:49:53 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@442 -- # [[ virt != virt ]] 00:19:24.969 11:49:53 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@444 -- # [[ no == yes ]] 00:19:24.969 11:49:53 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@451 -- # [[ virt == phy ]] 00:19:24.969 11:49:53 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@454 -- # [[ virt == phy-fallback ]] 00:19:24.969 11:49:53 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@459 -- # [[ tcp == tcp ]] 00:19:24.969 11:49:53 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@460 -- # nvmf_veth_init 00:19:24.969 11:49:53 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@145 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:19:24.969 11:49:53 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@146 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.2 00:19:24.969 11:49:53 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@147 -- # NVMF_FIRST_TARGET_IP=10.0.0.3 00:19:24.969 11:49:53 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@148 -- # NVMF_SECOND_TARGET_IP=10.0.0.4 00:19:24.970 11:49:53 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@149 -- # NVMF_INITIATOR_IP=10.0.0.1 00:19:24.970 11:49:53 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@150 -- # NVMF_BRIDGE=nvmf_br 00:19:24.970 11:49:53 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@151 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:19:24.970 11:49:53 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@152 -- # NVMF_INITIATOR_INTERFACE2=nvmf_init_if2 00:19:24.970 11:49:53 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@153 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:19:24.970 11:49:53 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@154 -- # NVMF_INITIATOR_BRIDGE2=nvmf_init_br2 00:19:24.970 11:49:53 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@155 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:19:24.970 11:49:53 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@156 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:19:24.970 11:49:53 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@157 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:19:24.970 11:49:53 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@158 -- # 
NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:19:24.970 11:49:53 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@159 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:19:24.970 11:49:53 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@160 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:19:24.970 11:49:53 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@162 -- # ip link set nvmf_init_br nomaster 00:19:24.970 Cannot find device "nvmf_init_br" 00:19:24.970 11:49:53 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@162 -- # true 00:19:24.970 11:49:53 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@163 -- # ip link set nvmf_init_br2 nomaster 00:19:24.970 Cannot find device "nvmf_init_br2" 00:19:24.970 11:49:53 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@163 -- # true 00:19:24.970 11:49:53 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@164 -- # ip link set nvmf_tgt_br nomaster 00:19:24.970 Cannot find device "nvmf_tgt_br" 00:19:24.970 11:49:53 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@164 -- # true 00:19:24.970 11:49:53 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@165 -- # ip link set nvmf_tgt_br2 nomaster 00:19:24.970 Cannot find device "nvmf_tgt_br2" 00:19:24.970 11:49:53 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@165 -- # true 00:19:24.970 11:49:53 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@166 -- # ip link set nvmf_init_br down 00:19:24.970 Cannot find device "nvmf_init_br" 00:19:24.970 11:49:53 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@166 -- # true 00:19:24.970 11:49:53 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@167 -- # ip link set nvmf_init_br2 down 00:19:24.970 Cannot find device "nvmf_init_br2" 00:19:24.970 11:49:53 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@167 -- # true 00:19:24.970 11:49:53 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@168 -- # ip link set nvmf_tgt_br down 00:19:24.970 Cannot find device "nvmf_tgt_br" 00:19:24.970 11:49:53 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@168 -- # true 00:19:24.970 11:49:53 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@169 -- # ip link set nvmf_tgt_br2 down 00:19:24.970 Cannot find device "nvmf_tgt_br2" 00:19:24.970 11:49:53 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@169 -- # true 00:19:24.970 11:49:53 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@170 -- # ip link delete nvmf_br type bridge 00:19:24.970 Cannot find device "nvmf_br" 00:19:24.970 11:49:53 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@170 -- # true 00:19:24.970 11:49:53 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@171 -- # ip link delete nvmf_init_if 00:19:24.970 Cannot find device "nvmf_init_if" 00:19:24.970 11:49:53 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@171 -- # true 00:19:24.970 11:49:53 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@172 -- # ip link delete nvmf_init_if2 00:19:24.970 Cannot find device "nvmf_init_if2" 00:19:24.970 11:49:53 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@172 -- # true 00:19:24.970 11:49:53 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@173 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:19:24.970 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:19:24.970 11:49:53 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@173 -- # true 00:19:24.970 11:49:53 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@174 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 
00:19:24.970 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:19:24.970 11:49:54 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@174 -- # true 00:19:24.970 11:49:54 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@177 -- # ip netns add nvmf_tgt_ns_spdk 00:19:24.970 11:49:54 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@180 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:19:24.970 11:49:54 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@181 -- # ip link add nvmf_init_if2 type veth peer name nvmf_init_br2 00:19:24.970 11:49:54 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@182 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:19:24.970 11:49:54 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@183 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:19:24.970 11:49:54 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:19:24.970 11:49:54 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@187 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:19:24.970 11:49:54 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@190 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:19:24.970 11:49:54 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@191 -- # ip addr add 10.0.0.2/24 dev nvmf_init_if2 00:19:24.970 11:49:54 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@192 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if 00:19:24.970 11:49:54 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@193 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2 00:19:24.970 11:49:54 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@196 -- # ip link set nvmf_init_if up 00:19:24.970 11:49:54 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@197 -- # ip link set nvmf_init_if2 up 00:19:24.970 11:49:54 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@198 -- # ip link set nvmf_init_br up 00:19:24.970 11:49:54 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@199 -- # ip link set nvmf_init_br2 up 00:19:24.970 11:49:54 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@200 -- # ip link set nvmf_tgt_br up 00:19:24.970 11:49:54 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@201 -- # ip link set nvmf_tgt_br2 up 00:19:24.970 11:49:54 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@202 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:19:24.970 11:49:54 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@203 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:19:24.970 11:49:54 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@204 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:19:24.970 11:49:54 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@207 -- # ip link add nvmf_br type bridge 00:19:24.970 11:49:54 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@208 -- # ip link set nvmf_br up 00:19:24.970 11:49:54 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@211 -- # ip link set nvmf_init_br master nvmf_br 00:19:24.970 11:49:54 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@212 -- # ip link set nvmf_init_br2 master nvmf_br 00:19:24.970 11:49:54 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@213 -- # ip link set nvmf_tgt_br master nvmf_br 00:19:24.970 11:49:54 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@214 -- # ip link set nvmf_tgt_br2 master nvmf_br 
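The nvmf_veth_init sequence above builds the test network: one pair of initiator veths left in the default namespace, one pair of target veths moved into the nvmf_tgt_ns_spdk namespace, and a bridge joining the peer ends. A condensed sketch of that topology, using only interface names, addresses, and ip(8) commands that appear in the trace above (a minimal sketch, not the full helper; error handling and the second veth pairs omitted):
    ip netns add nvmf_tgt_ns_spdk
    # initiator side stays in the default namespace
    ip link add nvmf_init_if type veth peer name nvmf_init_br
    ip addr add 10.0.0.1/24 dev nvmf_init_if
    # target side is moved into the namespace
    ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br
    ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk
    ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if
    # bridge the two peer ends so 10.0.0.1 can reach 10.0.0.3
    ip link add nvmf_br type bridge
    ip link set nvmf_init_br master nvmf_br
    ip link set nvmf_tgt_br master nvmf_br
    ip link set nvmf_init_if up; ip link set nvmf_init_br up
    ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
    ip link set nvmf_tgt_br up; ip link set nvmf_br up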
00:19:24.970 11:49:54 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@217 -- # ipts -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:19:24.970 11:49:54 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT' 00:19:24.970 11:49:54 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@218 -- # ipts -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT 00:19:24.970 11:49:54 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT' 00:19:24.970 11:49:54 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@219 -- # ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:19:24.970 11:49:54 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@790 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT' 00:19:24.970 11:49:54 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@222 -- # ping -c 1 10.0.0.3 00:19:24.970 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:19:24.970 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.061 ms 00:19:24.970 00:19:24.970 --- 10.0.0.3 ping statistics --- 00:19:24.970 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:19:24.970 rtt min/avg/max/mdev = 0.061/0.061/0.061/0.000 ms 00:19:24.970 11:49:54 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@223 -- # ping -c 1 10.0.0.4 00:19:24.970 PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data. 00:19:24.970 64 bytes from 10.0.0.4: icmp_seq=1 ttl=64 time=0.079 ms 00:19:24.970 00:19:24.970 --- 10.0.0.4 ping statistics --- 00:19:24.970 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:19:24.970 rtt min/avg/max/mdev = 0.079/0.079/0.079/0.000 ms 00:19:24.970 11:49:54 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@224 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:19:24.970 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:19:24.970 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.032 ms 00:19:24.970 00:19:24.970 --- 10.0.0.1 ping statistics --- 00:19:24.970 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:19:24.970 rtt min/avg/max/mdev = 0.032/0.032/0.032/0.000 ms 00:19:24.970 11:49:54 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@225 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.2 00:19:24.970 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:19:24.970 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.065 ms 00:19:24.970 00:19:24.970 --- 10.0.0.2 ping statistics --- 00:19:24.970 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:19:24.970 rtt min/avg/max/mdev = 0.065/0.065/0.065/0.000 ms 00:19:24.970 11:49:54 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@227 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:19:24.970 11:49:54 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@461 -- # return 0 00:19:24.970 11:49:54 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:19:24.970 11:49:54 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:19:24.970 11:49:54 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:19:24.970 11:49:54 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:19:24.970 11:49:54 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:19:24.970 11:49:54 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:19:24.970 11:49:54 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:19:24.971 11:49:54 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@47 -- # nvmfappstart -m 1 00:19:24.971 11:49:54 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:19:24.971 11:49:54 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@726 -- # xtrace_disable 00:19:24.971 11:49:54 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@10 -- # set +x 00:19:24.971 11:49:54 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@509 -- # nvmfpid=90228 00:19:24.971 11:49:54 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@508 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 1 00:19:24.971 11:49:54 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@510 -- # waitforlisten 90228 00:19:24.971 11:49:54 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@835 -- # '[' -z 90228 ']' 00:19:24.971 11:49:54 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:19:24.971 11:49:54 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@840 -- # local max_retries=100 00:19:24.971 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:19:24.971 11:49:54 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:19:24.971 11:49:54 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@844 -- # xtrace_disable 00:19:24.971 11:49:54 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@10 -- # set +x 00:19:24.971 [2024-11-28 11:49:54.353359] Starting SPDK v25.01-pre git sha1 35cd3e84d / DPDK 24.11.0-rc4 initialization... 00:19:24.971 [2024-11-28 11:49:54.353471] [ DPDK EAL parameters: nvmf -c 1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:19:24.971 [2024-11-28 11:49:54.481028] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.11.0-rc4 is used. There is no support for it in SPDK. Enabled only for validation. 
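nvmfappstart launches the target application inside the namespace and waitforlisten then blocks until its RPC socket is usable. A minimal sketch of that launch-and-wait pattern follows; only the nvmf_tgt command line and the /var/tmp/spdk.sock path are taken from the trace, while the polling loop is an assumed stand-in for the real waitforlisten helper in autotest_common.sh:
    ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 1 &
    nvmfpid=$!
    # hypothetical wait loop: poll until the UNIX-domain RPC socket exists
    for _ in $(seq 1 100); do
        [ -S /var/tmp/spdk.sock ] && break
        sleep 0.1
    done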
00:19:24.971 [2024-11-28 11:49:54.506531] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:24.971 [2024-11-28 11:49:54.552089] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:19:24.971 [2024-11-28 11:49:54.552163] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:19:24.971 [2024-11-28 11:49:54.552173] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:19:24.971 [2024-11-28 11:49:54.552180] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:19:24.971 [2024-11-28 11:49:54.552187] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:19:24.971 [2024-11-28 11:49:54.552595] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:19:24.971 [2024-11-28 11:49:54.626700] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:19:24.971 11:49:54 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:19:24.971 11:49:54 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@868 -- # return 0 00:19:24.971 11:49:54 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:19:24.971 11:49:54 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@732 -- # xtrace_disable 00:19:24.971 11:49:54 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@10 -- # set +x 00:19:24.971 11:49:54 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:19:24.971 11:49:54 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@49 -- # trap cleanup SIGINT SIGTERM EXIT 00:19:24.971 11:49:54 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@52 -- # tgt2pid=90252 00:19:24.971 11:49:54 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@54 -- # tgt1addr=10.0.0.3 00:19:24.971 11:49:54 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@51 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 2 -r /var/tmp/tgt2.sock 00:19:24.971 11:49:54 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@55 -- # get_main_ns_ip 00:19:24.971 11:49:54 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@769 -- # local ip 00:19:24.971 11:49:54 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@770 -- # ip_candidates=() 00:19:24.971 11:49:54 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@770 -- # local -A ip_candidates 00:19:24.971 11:49:54 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:19:24.971 11:49:54 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:19:24.971 11:49:54 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:19:24.971 11:49:54 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:19:24.971 11:49:54 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:19:24.971 11:49:54 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:19:24.971 11:49:54 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:19:24.971 11:49:54 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@55 -- # tgt2addr=10.0.0.1 00:19:24.971 11:49:54 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- 
target/nsid.sh@56 -- # uuidgen 00:19:24.971 11:49:54 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@56 -- # ns1uuid=0c7298a0-d91e-4d68-aae4-e162402c9198 00:19:24.971 11:49:54 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@57 -- # uuidgen 00:19:24.971 11:49:54 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@57 -- # ns2uuid=8cb9fe41-f9a7-48b4-9f1b-a9330f28f839 00:19:24.971 11:49:54 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@58 -- # uuidgen 00:19:24.971 11:49:54 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@58 -- # ns3uuid=e7a13ca7-a8b7-4676-94ed-c5ce6b7e9ea2 00:19:24.971 11:49:54 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@63 -- # rpc_cmd 00:19:24.971 11:49:54 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:24.971 11:49:54 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@10 -- # set +x 00:19:24.971 null0 00:19:24.971 null1 00:19:24.971 null2 00:19:24.971 [2024-11-28 11:49:54.800896] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:19:24.971 [2024-11-28 11:49:54.819003] Starting SPDK v25.01-pre git sha1 35cd3e84d / DPDK 24.11.0-rc4 initialization... 00:19:24.971 [2024-11-28 11:49:54.819089] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid90252 ] 00:19:24.971 [2024-11-28 11:49:54.825043] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:19:24.971 11:49:54 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:24.971 11:49:54 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@79 -- # waitforlisten 90252 /var/tmp/tgt2.sock 00:19:24.971 11:49:54 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@835 -- # '[' -z 90252 ']' 00:19:24.971 11:49:54 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/tgt2.sock 00:19:24.971 11:49:54 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@840 -- # local max_retries=100 00:19:24.971 Waiting for process to start up and listen on UNIX domain socket /var/tmp/tgt2.sock... 00:19:24.971 11:49:54 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/tgt2.sock...' 00:19:24.971 11:49:54 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@844 -- # xtrace_disable 00:19:24.971 11:49:54 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@10 -- # set +x 00:19:24.971 [2024-11-28 11:49:54.946700] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.11.0-rc4 is used. There is no support for it in SPDK. Enabled only for validation. 
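The three uuidgen values above become the namespace UUIDs; later in the trace the test derives each expected NGUID from its UUID and compares it with what nvme id-ns reports for the attached block device. A minimal sketch of that check for the first namespace, assuming (as the tr -d - and uppercase echo in the trace suggest) that the expected NGUID is simply the UUID with dashes removed and hex digits uppercased:
    uuid=0c7298a0-d91e-4d68-aae4-e162402c9198   # ns1uuid from the trace
    expected=$(echo "$uuid" | tr -d '-' | tr '[:lower:]' '[:upper:]')
    reported=$(nvme id-ns /dev/nvme0n1 -o json | jq -r .nguid | tr '[:lower:]' '[:upper:]')
    [ "$expected" = "$reported" ] && echo "NSID 1 NGUID matches: $expected"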
00:19:24.971 [2024-11-28 11:49:54.979208] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:24.971 [2024-11-28 11:49:55.027423] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:19:25.229 [2024-11-28 11:49:55.118270] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:19:25.487 11:49:55 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:19:25.487 11:49:55 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@868 -- # return 0 00:19:25.487 11:49:55 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@80 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/tgt2.sock 00:19:25.746 [2024-11-28 11:49:55.798707] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:19:25.746 [2024-11-28 11:49:55.814779] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.1 port 4421 *** 00:19:25.746 nvme0n1 nvme0n2 00:19:25.746 nvme1n1 00:19:25.746 11:49:55 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@94 -- # nvme_connect 00:19:25.746 11:49:55 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@23 -- # local ctrlr 00:19:25.746 11:49:55 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@25 -- # nvme connect -t tcp -a 10.0.0.1 -s 4421 -n nqn.2024-10.io.spdk:cnode2 --hostnqn=nqn.2014-08.org.nvmexpress:uuid:f820f793-c892-4aa4-a8a4-5ed3fda41d6c --hostid=f820f793-c892-4aa4-a8a4-5ed3fda41d6c 00:19:26.004 11:49:55 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@28 -- # for ctrlr in /sys/class/nvme/nvme* 00:19:26.004 11:49:55 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@30 -- # [[ -e /sys/class/nvme/nvme0/subsysnqn ]] 00:19:26.004 11:49:55 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@30 -- # [[ nqn.2024-10.io.spdk:cnode2 == \n\q\n\.\2\0\2\4\-\1\0\.\i\o\.\s\p\d\k\:\c\n\o\d\e\2 ]] 00:19:26.004 11:49:55 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@31 -- # echo nvme0 00:19:26.005 11:49:55 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@32 -- # return 0 00:19:26.005 11:49:55 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@94 -- # ctrlr=nvme0 00:19:26.005 11:49:55 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@95 -- # waitforblk nvme0n1 00:19:26.005 11:49:56 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1239 -- # local i=0 00:19:26.005 11:49:56 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1240 -- # lsblk -l -o NAME 00:19:26.005 11:49:56 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1240 -- # grep -q -w nvme0n1 00:19:26.005 11:49:56 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1241 -- # '[' 0 -lt 15 ']' 00:19:26.005 11:49:56 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1242 -- # i=1 00:19:26.005 11:49:56 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1243 -- # sleep 1 00:19:26.942 11:49:57 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1240 -- # lsblk -l -o NAME 00:19:26.942 11:49:57 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1240 -- # grep -q -w nvme0n1 00:19:26.942 11:49:57 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1246 -- # lsblk -l -o NAME 00:19:26.942 11:49:57 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1246 -- # grep -q -w nvme0n1 00:19:26.942 11:49:57 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1250 -- # return 0 00:19:26.942 11:49:57 
nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@96 -- # uuid2nguid 0c7298a0-d91e-4d68-aae4-e162402c9198 00:19:26.942 11:49:57 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@787 -- # tr -d - 00:19:26.942 11:49:57 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@96 -- # nvme_get_nguid nvme0 1 00:19:26.942 11:49:57 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@40 -- # local ctrlr=nvme0 nsid=1 nguid 00:19:26.942 11:49:57 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@42 -- # nvme id-ns /dev/nvme0n1 -o json 00:19:26.942 11:49:57 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@42 -- # jq -r .nguid 00:19:27.201 11:49:57 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@42 -- # nguid=0c7298a0d91e4d68aae4e162402c9198 00:19:27.201 11:49:57 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@43 -- # echo 0C7298A0D91E4D68AAE4E162402C9198 00:19:27.201 11:49:57 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@96 -- # [[ 0C7298A0D91E4D68AAE4E162402C9198 == \0\C\7\2\9\8\A\0\D\9\1\E\4\D\6\8\A\A\E\4\E\1\6\2\4\0\2\C\9\1\9\8 ]] 00:19:27.201 11:49:57 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@97 -- # waitforblk nvme0n2 00:19:27.201 11:49:57 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1239 -- # local i=0 00:19:27.201 11:49:57 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1240 -- # grep -q -w nvme0n2 00:19:27.201 11:49:57 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1240 -- # lsblk -l -o NAME 00:19:27.201 11:49:57 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1246 -- # lsblk -l -o NAME 00:19:27.201 11:49:57 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1246 -- # grep -q -w nvme0n2 00:19:27.201 11:49:57 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1250 -- # return 0 00:19:27.201 11:49:57 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@98 -- # uuid2nguid 8cb9fe41-f9a7-48b4-9f1b-a9330f28f839 00:19:27.201 11:49:57 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@787 -- # tr -d - 00:19:27.201 11:49:57 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@98 -- # nvme_get_nguid nvme0 2 00:19:27.201 11:49:57 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@40 -- # local ctrlr=nvme0 nsid=2 nguid 00:19:27.201 11:49:57 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@42 -- # nvme id-ns /dev/nvme0n2 -o json 00:19:27.201 11:49:57 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@42 -- # jq -r .nguid 00:19:27.201 11:49:57 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@42 -- # nguid=8cb9fe41f9a748b49f1ba9330f28f839 00:19:27.201 11:49:57 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@43 -- # echo 8CB9FE41F9A748B49F1BA9330F28F839 00:19:27.201 11:49:57 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@98 -- # [[ 8CB9FE41F9A748B49F1BA9330F28F839 == \8\C\B\9\F\E\4\1\F\9\A\7\4\8\B\4\9\F\1\B\A\9\3\3\0\F\2\8\F\8\3\9 ]] 00:19:27.201 11:49:57 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@99 -- # waitforblk nvme0n3 00:19:27.201 11:49:57 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1239 -- # local i=0 00:19:27.201 11:49:57 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1240 -- # lsblk -l -o NAME 00:19:27.201 11:49:57 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1240 -- # grep -q -w nvme0n3 00:19:27.201 11:49:57 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1246 -- # lsblk -l -o NAME 00:19:27.201 11:49:57 
nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1246 -- # grep -q -w nvme0n3 00:19:27.201 11:49:57 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1250 -- # return 0 00:19:27.201 11:49:57 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@100 -- # uuid2nguid e7a13ca7-a8b7-4676-94ed-c5ce6b7e9ea2 00:19:27.201 11:49:57 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@787 -- # tr -d - 00:19:27.201 11:49:57 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@100 -- # nvme_get_nguid nvme0 3 00:19:27.201 11:49:57 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@40 -- # local ctrlr=nvme0 nsid=3 nguid 00:19:27.201 11:49:57 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@42 -- # jq -r .nguid 00:19:27.201 11:49:57 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@42 -- # nvme id-ns /dev/nvme0n3 -o json 00:19:27.201 11:49:57 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@42 -- # nguid=e7a13ca7a8b7467694edc5ce6b7e9ea2 00:19:27.201 11:49:57 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@43 -- # echo E7A13CA7A8B7467694EDC5CE6B7E9EA2 00:19:27.201 11:49:57 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@100 -- # [[ E7A13CA7A8B7467694EDC5CE6B7E9EA2 == \E\7\A\1\3\C\A\7\A\8\B\7\4\6\7\6\9\4\E\D\C\5\C\E\6\B\7\E\9\E\A\2 ]] 00:19:27.201 11:49:57 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@101 -- # nvme disconnect -d /dev/nvme0 00:19:27.460 11:49:57 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@103 -- # trap - SIGINT SIGTERM EXIT 00:19:27.460 11:49:57 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@104 -- # cleanup 00:19:27.460 11:49:57 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@18 -- # killprocess 90252 00:19:27.460 11:49:57 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@954 -- # '[' -z 90252 ']' 00:19:27.460 11:49:57 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@958 -- # kill -0 90252 00:19:27.461 11:49:57 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@959 -- # uname 00:19:27.461 11:49:57 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:19:27.461 11:49:57 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 90252 00:19:27.461 11:49:57 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:19:27.461 11:49:57 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:19:27.461 11:49:57 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@972 -- # echo 'killing process with pid 90252' 00:19:27.461 killing process with pid 90252 00:19:27.461 11:49:57 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@973 -- # kill 90252 00:19:27.461 11:49:57 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@978 -- # wait 90252 00:19:28.029 11:49:57 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@19 -- # nvmftestfini 00:19:28.029 11:49:57 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@516 -- # nvmfcleanup 00:19:28.029 11:49:57 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@121 -- # sync 00:19:28.029 11:49:58 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:19:28.029 11:49:58 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@124 -- # set +e 00:19:28.029 11:49:58 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@125 -- # for i in {1..20} 00:19:28.029 
11:49:58 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:19:28.029 rmmod nvme_tcp 00:19:28.029 rmmod nvme_fabrics 00:19:28.029 rmmod nvme_keyring 00:19:28.029 11:49:58 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:19:28.029 11:49:58 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@128 -- # set -e 00:19:28.029 11:49:58 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@129 -- # return 0 00:19:28.029 11:49:58 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@517 -- # '[' -n 90228 ']' 00:19:28.029 11:49:58 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@518 -- # killprocess 90228 00:19:28.029 11:49:58 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@954 -- # '[' -z 90228 ']' 00:19:28.029 11:49:58 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@958 -- # kill -0 90228 00:19:28.029 11:49:58 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@959 -- # uname 00:19:28.029 11:49:58 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:19:28.029 11:49:58 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 90228 00:19:28.029 11:49:58 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:19:28.029 11:49:58 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:19:28.029 killing process with pid 90228 00:19:28.029 11:49:58 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@972 -- # echo 'killing process with pid 90228' 00:19:28.029 11:49:58 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@973 -- # kill 90228 00:19:28.029 11:49:58 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@978 -- # wait 90228 00:19:28.288 11:49:58 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:19:28.288 11:49:58 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:19:28.289 11:49:58 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:19:28.289 11:49:58 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@297 -- # iptr 00:19:28.289 11:49:58 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@791 -- # iptables-save 00:19:28.289 11:49:58 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:19:28.289 11:49:58 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@791 -- # iptables-restore 00:19:28.289 11:49:58 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@298 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:19:28.289 11:49:58 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@299 -- # nvmf_veth_fini 00:19:28.289 11:49:58 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@233 -- # ip link set nvmf_init_br nomaster 00:19:28.289 11:49:58 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@234 -- # ip link set nvmf_init_br2 nomaster 00:19:28.289 11:49:58 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@235 -- # ip link set nvmf_tgt_br nomaster 00:19:28.289 11:49:58 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@236 -- # ip link set nvmf_tgt_br2 nomaster 00:19:28.547 11:49:58 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@237 -- # ip link set nvmf_init_br down 00:19:28.547 11:49:58 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@238 -- # ip link 
set nvmf_init_br2 down 00:19:28.547 11:49:58 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@239 -- # ip link set nvmf_tgt_br down 00:19:28.547 11:49:58 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@240 -- # ip link set nvmf_tgt_br2 down 00:19:28.547 11:49:58 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@241 -- # ip link delete nvmf_br type bridge 00:19:28.547 11:49:58 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@242 -- # ip link delete nvmf_init_if 00:19:28.547 11:49:58 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@243 -- # ip link delete nvmf_init_if2 00:19:28.548 11:49:58 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@244 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:19:28.548 11:49:58 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@245 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:19:28.548 11:49:58 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@246 -- # remove_spdk_ns 00:19:28.548 11:49:58 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:19:28.548 11:49:58 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:19:28.548 11:49:58 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:19:28.548 11:49:58 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@300 -- # return 0 00:19:28.548 00:19:28.548 real 0m4.900s 00:19:28.548 user 0m7.243s 00:19:28.548 sys 0m1.829s 00:19:28.548 11:49:58 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1130 -- # xtrace_disable 00:19:28.548 11:49:58 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@10 -- # set +x 00:19:28.548 ************************************ 00:19:28.548 END TEST nvmf_nsid 00:19:28.548 ************************************ 00:19:28.548 11:49:58 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@69 -- # trap - SIGINT SIGTERM EXIT 00:19:28.548 00:19:28.548 real 7m10.447s 00:19:28.548 user 17m40.541s 00:19:28.548 sys 1m56.071s 00:19:28.548 11:49:58 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1130 -- # xtrace_disable 00:19:28.548 11:49:58 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:19:28.548 ************************************ 00:19:28.548 END TEST nvmf_target_extra 00:19:28.548 ************************************ 00:19:28.548 11:49:58 nvmf_tcp -- nvmf/nvmf.sh@16 -- # run_test nvmf_host /home/vagrant/spdk_repo/spdk/test/nvmf/nvmf_host.sh --transport=tcp 00:19:28.548 11:49:58 nvmf_tcp -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:19:28.548 11:49:58 nvmf_tcp -- common/autotest_common.sh@1111 -- # xtrace_disable 00:19:28.548 11:49:58 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:19:28.548 ************************************ 00:19:28.548 START TEST nvmf_host 00:19:28.548 ************************************ 00:19:28.548 11:49:58 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/nvmf_host.sh --transport=tcp 00:19:28.807 * Looking for test storage... 
00:19:28.807 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf 00:19:28.807 11:49:58 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:19:28.807 11:49:58 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1693 -- # lcov --version 00:19:28.807 11:49:58 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:19:28.807 11:49:58 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:19:28.807 11:49:58 nvmf_tcp.nvmf_host -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:19:28.807 11:49:58 nvmf_tcp.nvmf_host -- scripts/common.sh@333 -- # local ver1 ver1_l 00:19:28.807 11:49:58 nvmf_tcp.nvmf_host -- scripts/common.sh@334 -- # local ver2 ver2_l 00:19:28.807 11:49:58 nvmf_tcp.nvmf_host -- scripts/common.sh@336 -- # IFS=.-: 00:19:28.807 11:49:58 nvmf_tcp.nvmf_host -- scripts/common.sh@336 -- # read -ra ver1 00:19:28.807 11:49:58 nvmf_tcp.nvmf_host -- scripts/common.sh@337 -- # IFS=.-: 00:19:28.807 11:49:58 nvmf_tcp.nvmf_host -- scripts/common.sh@337 -- # read -ra ver2 00:19:28.807 11:49:58 nvmf_tcp.nvmf_host -- scripts/common.sh@338 -- # local 'op=<' 00:19:28.807 11:49:58 nvmf_tcp.nvmf_host -- scripts/common.sh@340 -- # ver1_l=2 00:19:28.807 11:49:58 nvmf_tcp.nvmf_host -- scripts/common.sh@341 -- # ver2_l=1 00:19:28.807 11:49:58 nvmf_tcp.nvmf_host -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:19:28.807 11:49:58 nvmf_tcp.nvmf_host -- scripts/common.sh@344 -- # case "$op" in 00:19:28.807 11:49:58 nvmf_tcp.nvmf_host -- scripts/common.sh@345 -- # : 1 00:19:28.807 11:49:58 nvmf_tcp.nvmf_host -- scripts/common.sh@364 -- # (( v = 0 )) 00:19:28.807 11:49:58 nvmf_tcp.nvmf_host -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:19:28.807 11:49:58 nvmf_tcp.nvmf_host -- scripts/common.sh@365 -- # decimal 1 00:19:28.807 11:49:58 nvmf_tcp.nvmf_host -- scripts/common.sh@353 -- # local d=1 00:19:28.807 11:49:58 nvmf_tcp.nvmf_host -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:19:28.807 11:49:58 nvmf_tcp.nvmf_host -- scripts/common.sh@355 -- # echo 1 00:19:28.807 11:49:58 nvmf_tcp.nvmf_host -- scripts/common.sh@365 -- # ver1[v]=1 00:19:28.807 11:49:58 nvmf_tcp.nvmf_host -- scripts/common.sh@366 -- # decimal 2 00:19:28.807 11:49:58 nvmf_tcp.nvmf_host -- scripts/common.sh@353 -- # local d=2 00:19:28.807 11:49:58 nvmf_tcp.nvmf_host -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:19:28.807 11:49:58 nvmf_tcp.nvmf_host -- scripts/common.sh@355 -- # echo 2 00:19:28.807 11:49:58 nvmf_tcp.nvmf_host -- scripts/common.sh@366 -- # ver2[v]=2 00:19:28.807 11:49:58 nvmf_tcp.nvmf_host -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:19:28.807 11:49:58 nvmf_tcp.nvmf_host -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:19:28.807 11:49:58 nvmf_tcp.nvmf_host -- scripts/common.sh@368 -- # return 0 00:19:28.807 11:49:58 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:19:28.807 11:49:58 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:19:28.807 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:19:28.807 --rc genhtml_branch_coverage=1 00:19:28.807 --rc genhtml_function_coverage=1 00:19:28.807 --rc genhtml_legend=1 00:19:28.807 --rc geninfo_all_blocks=1 00:19:28.807 --rc geninfo_unexecuted_blocks=1 00:19:28.807 00:19:28.807 ' 00:19:28.807 11:49:58 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:19:28.807 --rc lcov_branch_coverage=1 --rc 
lcov_function_coverage=1 00:19:28.807 --rc genhtml_branch_coverage=1 00:19:28.807 --rc genhtml_function_coverage=1 00:19:28.807 --rc genhtml_legend=1 00:19:28.807 --rc geninfo_all_blocks=1 00:19:28.807 --rc geninfo_unexecuted_blocks=1 00:19:28.807 00:19:28.807 ' 00:19:28.807 11:49:58 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:19:28.807 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:19:28.807 --rc genhtml_branch_coverage=1 00:19:28.807 --rc genhtml_function_coverage=1 00:19:28.807 --rc genhtml_legend=1 00:19:28.807 --rc geninfo_all_blocks=1 00:19:28.807 --rc geninfo_unexecuted_blocks=1 00:19:28.807 00:19:28.807 ' 00:19:28.807 11:49:58 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:19:28.807 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:19:28.807 --rc genhtml_branch_coverage=1 00:19:28.807 --rc genhtml_function_coverage=1 00:19:28.807 --rc genhtml_legend=1 00:19:28.807 --rc geninfo_all_blocks=1 00:19:28.807 --rc geninfo_unexecuted_blocks=1 00:19:28.807 00:19:28.807 ' 00:19:28.807 11:49:58 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:19:28.807 11:49:58 nvmf_tcp.nvmf_host -- nvmf/common.sh@7 -- # uname -s 00:19:28.807 11:49:58 nvmf_tcp.nvmf_host -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:19:28.807 11:49:58 nvmf_tcp.nvmf_host -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:19:28.807 11:49:58 nvmf_tcp.nvmf_host -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:19:28.807 11:49:58 nvmf_tcp.nvmf_host -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:19:28.807 11:49:58 nvmf_tcp.nvmf_host -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:19:28.807 11:49:58 nvmf_tcp.nvmf_host -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:19:28.807 11:49:58 nvmf_tcp.nvmf_host -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:19:28.807 11:49:58 nvmf_tcp.nvmf_host -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:19:28.807 11:49:58 nvmf_tcp.nvmf_host -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:19:28.807 11:49:58 nvmf_tcp.nvmf_host -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:19:28.807 11:49:58 nvmf_tcp.nvmf_host -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:f820f793-c892-4aa4-a8a4-5ed3fda41d6c 00:19:28.807 11:49:58 nvmf_tcp.nvmf_host -- nvmf/common.sh@18 -- # NVME_HOSTID=f820f793-c892-4aa4-a8a4-5ed3fda41d6c 00:19:28.807 11:49:58 nvmf_tcp.nvmf_host -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:19:28.807 11:49:58 nvmf_tcp.nvmf_host -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:19:28.807 11:49:58 nvmf_tcp.nvmf_host -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:19:28.808 11:49:58 nvmf_tcp.nvmf_host -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:19:28.808 11:49:58 nvmf_tcp.nvmf_host -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:19:28.808 11:49:58 nvmf_tcp.nvmf_host -- scripts/common.sh@15 -- # shopt -s extglob 00:19:28.808 11:49:58 nvmf_tcp.nvmf_host -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:19:28.808 11:49:58 nvmf_tcp.nvmf_host -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:19:28.808 11:49:58 nvmf_tcp.nvmf_host -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:19:28.808 11:49:58 nvmf_tcp.nvmf_host -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:28.808 11:49:58 nvmf_tcp.nvmf_host -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:28.808 11:49:58 nvmf_tcp.nvmf_host -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:28.808 11:49:58 nvmf_tcp.nvmf_host -- paths/export.sh@5 -- # export PATH 00:19:28.808 11:49:58 nvmf_tcp.nvmf_host -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:28.808 11:49:58 nvmf_tcp.nvmf_host -- nvmf/common.sh@51 -- # : 0 00:19:28.808 11:49:58 nvmf_tcp.nvmf_host -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:19:28.808 11:49:58 nvmf_tcp.nvmf_host -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:19:28.808 11:49:58 nvmf_tcp.nvmf_host -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:19:28.808 11:49:58 nvmf_tcp.nvmf_host -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:19:28.808 11:49:58 nvmf_tcp.nvmf_host -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:19:28.808 11:49:58 nvmf_tcp.nvmf_host -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:19:28.808 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:19:28.808 11:49:58 nvmf_tcp.nvmf_host -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:19:28.808 11:49:58 nvmf_tcp.nvmf_host -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:19:28.808 11:49:58 nvmf_tcp.nvmf_host -- nvmf/common.sh@55 -- # have_pci_nics=0 00:19:28.808 11:49:58 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@11 -- # trap 'exit 1' SIGINT SIGTERM EXIT 00:19:28.808 11:49:58 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@13 -- # TEST_ARGS=("$@") 00:19:28.808 11:49:58 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@15 -- # [[ 1 -eq 0 ]] 00:19:28.808 11:49:58 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@22 -- # run_test nvmf_identify /home/vagrant/spdk_repo/spdk/test/nvmf/host/identify.sh --transport=tcp 00:19:28.808 
11:49:58 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:19:28.808 11:49:58 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1111 -- # xtrace_disable 00:19:28.808 11:49:58 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:19:28.808 ************************************ 00:19:28.808 START TEST nvmf_identify 00:19:28.808 ************************************ 00:19:28.808 11:49:58 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/host/identify.sh --transport=tcp 00:19:29.067 * Looking for test storage... 00:19:29.067 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/host 00:19:29.067 11:49:58 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:19:29.067 11:49:58 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@1693 -- # lcov --version 00:19:29.067 11:49:58 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:19:29.067 11:49:59 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:19:29.067 11:49:59 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:19:29.067 11:49:59 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@333 -- # local ver1 ver1_l 00:19:29.067 11:49:59 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@334 -- # local ver2 ver2_l 00:19:29.067 11:49:59 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@336 -- # IFS=.-: 00:19:29.067 11:49:59 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@336 -- # read -ra ver1 00:19:29.067 11:49:59 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@337 -- # IFS=.-: 00:19:29.067 11:49:59 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@337 -- # read -ra ver2 00:19:29.067 11:49:59 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@338 -- # local 'op=<' 00:19:29.067 11:49:59 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@340 -- # ver1_l=2 00:19:29.067 11:49:59 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@341 -- # ver2_l=1 00:19:29.067 11:49:59 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:19:29.067 11:49:59 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@344 -- # case "$op" in 00:19:29.067 11:49:59 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@345 -- # : 1 00:19:29.067 11:49:59 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@364 -- # (( v = 0 )) 00:19:29.067 11:49:59 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:19:29.067 11:49:59 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@365 -- # decimal 1 00:19:29.067 11:49:59 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@353 -- # local d=1 00:19:29.067 11:49:59 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:19:29.068 11:49:59 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@355 -- # echo 1 00:19:29.068 11:49:59 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@365 -- # ver1[v]=1 00:19:29.068 11:49:59 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@366 -- # decimal 2 00:19:29.068 11:49:59 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@353 -- # local d=2 00:19:29.068 11:49:59 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:19:29.068 11:49:59 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@355 -- # echo 2 00:19:29.068 11:49:59 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@366 -- # ver2[v]=2 00:19:29.068 11:49:59 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:19:29.068 11:49:59 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:19:29.068 11:49:59 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@368 -- # return 0 00:19:29.068 11:49:59 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:19:29.068 11:49:59 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:19:29.068 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:19:29.068 --rc genhtml_branch_coverage=1 00:19:29.068 --rc genhtml_function_coverage=1 00:19:29.068 --rc genhtml_legend=1 00:19:29.068 --rc geninfo_all_blocks=1 00:19:29.068 --rc geninfo_unexecuted_blocks=1 00:19:29.068 00:19:29.068 ' 00:19:29.068 11:49:59 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:19:29.068 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:19:29.068 --rc genhtml_branch_coverage=1 00:19:29.068 --rc genhtml_function_coverage=1 00:19:29.068 --rc genhtml_legend=1 00:19:29.068 --rc geninfo_all_blocks=1 00:19:29.068 --rc geninfo_unexecuted_blocks=1 00:19:29.068 00:19:29.068 ' 00:19:29.068 11:49:59 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:19:29.068 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:19:29.068 --rc genhtml_branch_coverage=1 00:19:29.068 --rc genhtml_function_coverage=1 00:19:29.068 --rc genhtml_legend=1 00:19:29.068 --rc geninfo_all_blocks=1 00:19:29.068 --rc geninfo_unexecuted_blocks=1 00:19:29.068 00:19:29.068 ' 00:19:29.068 11:49:59 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:19:29.068 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:19:29.068 --rc genhtml_branch_coverage=1 00:19:29.068 --rc genhtml_function_coverage=1 00:19:29.068 --rc genhtml_legend=1 00:19:29.068 --rc geninfo_all_blocks=1 00:19:29.068 --rc geninfo_unexecuted_blocks=1 00:19:29.068 00:19:29.068 ' 00:19:29.068 11:49:59 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:19:29.068 11:49:59 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@7 -- # uname -s 00:19:29.068 11:49:59 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:19:29.068 11:49:59 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@9 -- # 
NVMF_PORT=4420 00:19:29.068 11:49:59 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:19:29.068 11:49:59 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:19:29.068 11:49:59 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:19:29.068 11:49:59 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:19:29.068 11:49:59 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:19:29.068 11:49:59 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:19:29.068 11:49:59 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:19:29.068 11:49:59 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:19:29.068 11:49:59 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:f820f793-c892-4aa4-a8a4-5ed3fda41d6c 00:19:29.068 11:49:59 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@18 -- # NVME_HOSTID=f820f793-c892-4aa4-a8a4-5ed3fda41d6c 00:19:29.068 11:49:59 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:19:29.068 11:49:59 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:19:29.068 11:49:59 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:19:29.068 11:49:59 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:19:29.068 11:49:59 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:19:29.068 11:49:59 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@15 -- # shopt -s extglob 00:19:29.068 11:49:59 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:19:29.068 11:49:59 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:19:29.068 11:49:59 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:19:29.068 11:49:59 nvmf_tcp.nvmf_host.nvmf_identify -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:29.068 11:49:59 nvmf_tcp.nvmf_host.nvmf_identify -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:29.068 
11:49:59 nvmf_tcp.nvmf_host.nvmf_identify -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:29.068 11:49:59 nvmf_tcp.nvmf_host.nvmf_identify -- paths/export.sh@5 -- # export PATH 00:19:29.068 11:49:59 nvmf_tcp.nvmf_host.nvmf_identify -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:29.068 11:49:59 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@51 -- # : 0 00:19:29.068 11:49:59 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:19:29.068 11:49:59 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:19:29.068 11:49:59 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:19:29.068 11:49:59 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:19:29.068 11:49:59 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:19:29.068 11:49:59 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:19:29.068 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:19:29.068 11:49:59 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:19:29.068 11:49:59 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:19:29.068 11:49:59 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@55 -- # have_pci_nics=0 00:19:29.068 11:49:59 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@11 -- # MALLOC_BDEV_SIZE=64 00:19:29.068 11:49:59 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:19:29.068 11:49:59 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@14 -- # nvmftestinit 00:19:29.068 11:49:59 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:19:29.068 11:49:59 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:19:29.068 11:49:59 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@476 -- # prepare_net_devs 00:19:29.068 11:49:59 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@438 -- # local -g is_hw=no 00:19:29.068 11:49:59 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@440 -- # remove_spdk_ns 00:19:29.068 11:49:59 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:19:29.068 11:49:59 
nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:19:29.068 11:49:59 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:19:29.068 11:49:59 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@442 -- # [[ virt != virt ]] 00:19:29.068 11:49:59 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@444 -- # [[ no == yes ]] 00:19:29.068 11:49:59 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@451 -- # [[ virt == phy ]] 00:19:29.068 11:49:59 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@454 -- # [[ virt == phy-fallback ]] 00:19:29.068 11:49:59 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@459 -- # [[ tcp == tcp ]] 00:19:29.068 11:49:59 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@460 -- # nvmf_veth_init 00:19:29.068 11:49:59 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@145 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:19:29.068 11:49:59 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@146 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.2 00:19:29.068 11:49:59 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@147 -- # NVMF_FIRST_TARGET_IP=10.0.0.3 00:19:29.068 11:49:59 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@148 -- # NVMF_SECOND_TARGET_IP=10.0.0.4 00:19:29.068 11:49:59 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@149 -- # NVMF_INITIATOR_IP=10.0.0.1 00:19:29.068 11:49:59 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@150 -- # NVMF_BRIDGE=nvmf_br 00:19:29.068 11:49:59 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@151 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:19:29.068 11:49:59 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@152 -- # NVMF_INITIATOR_INTERFACE2=nvmf_init_if2 00:19:29.068 11:49:59 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@153 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:19:29.068 11:49:59 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@154 -- # NVMF_INITIATOR_BRIDGE2=nvmf_init_br2 00:19:29.068 11:49:59 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@155 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:19:29.068 11:49:59 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@156 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:19:29.068 11:49:59 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@157 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:19:29.069 11:49:59 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@158 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:19:29.069 11:49:59 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@159 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:19:29.069 11:49:59 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@160 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:19:29.069 11:49:59 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@162 -- # ip link set nvmf_init_br nomaster 00:19:29.069 Cannot find device "nvmf_init_br" 00:19:29.069 11:49:59 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@162 -- # true 00:19:29.069 11:49:59 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@163 -- # ip link set nvmf_init_br2 nomaster 00:19:29.069 Cannot find device "nvmf_init_br2" 00:19:29.069 11:49:59 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@163 -- # true 00:19:29.069 11:49:59 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@164 -- # ip link set nvmf_tgt_br nomaster 00:19:29.069 Cannot find device "nvmf_tgt_br" 00:19:29.069 11:49:59 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@164 -- # true 00:19:29.069 11:49:59 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@165 -- # ip link set nvmf_tgt_br2 nomaster 
00:19:29.069 Cannot find device "nvmf_tgt_br2" 00:19:29.069 11:49:59 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@165 -- # true 00:19:29.069 11:49:59 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@166 -- # ip link set nvmf_init_br down 00:19:29.069 Cannot find device "nvmf_init_br" 00:19:29.069 11:49:59 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@166 -- # true 00:19:29.069 11:49:59 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@167 -- # ip link set nvmf_init_br2 down 00:19:29.069 Cannot find device "nvmf_init_br2" 00:19:29.069 11:49:59 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@167 -- # true 00:19:29.069 11:49:59 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@168 -- # ip link set nvmf_tgt_br down 00:19:29.069 Cannot find device "nvmf_tgt_br" 00:19:29.069 11:49:59 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@168 -- # true 00:19:29.069 11:49:59 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@169 -- # ip link set nvmf_tgt_br2 down 00:19:29.069 Cannot find device "nvmf_tgt_br2" 00:19:29.069 11:49:59 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@169 -- # true 00:19:29.069 11:49:59 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@170 -- # ip link delete nvmf_br type bridge 00:19:29.327 Cannot find device "nvmf_br" 00:19:29.327 11:49:59 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@170 -- # true 00:19:29.327 11:49:59 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@171 -- # ip link delete nvmf_init_if 00:19:29.327 Cannot find device "nvmf_init_if" 00:19:29.327 11:49:59 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@171 -- # true 00:19:29.327 11:49:59 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@172 -- # ip link delete nvmf_init_if2 00:19:29.327 Cannot find device "nvmf_init_if2" 00:19:29.327 11:49:59 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@172 -- # true 00:19:29.327 11:49:59 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@173 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:19:29.327 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:19:29.327 11:49:59 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@173 -- # true 00:19:29.327 11:49:59 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@174 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:19:29.327 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:19:29.327 11:49:59 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@174 -- # true 00:19:29.327 11:49:59 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@177 -- # ip netns add nvmf_tgt_ns_spdk 00:19:29.327 11:49:59 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@180 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:19:29.327 11:49:59 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@181 -- # ip link add nvmf_init_if2 type veth peer name nvmf_init_br2 00:19:29.327 11:49:59 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@182 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:19:29.327 11:49:59 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@183 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:19:29.327 11:49:59 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:19:29.327 11:49:59 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@187 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:19:29.327 11:49:59 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@190 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:19:29.327 
11:49:59 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@191 -- # ip addr add 10.0.0.2/24 dev nvmf_init_if2 00:19:29.327 11:49:59 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@192 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if 00:19:29.327 11:49:59 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@193 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2 00:19:29.327 11:49:59 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@196 -- # ip link set nvmf_init_if up 00:19:29.327 11:49:59 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@197 -- # ip link set nvmf_init_if2 up 00:19:29.327 11:49:59 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@198 -- # ip link set nvmf_init_br up 00:19:29.327 11:49:59 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@199 -- # ip link set nvmf_init_br2 up 00:19:29.327 11:49:59 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@200 -- # ip link set nvmf_tgt_br up 00:19:29.327 11:49:59 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@201 -- # ip link set nvmf_tgt_br2 up 00:19:29.327 11:49:59 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@202 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:19:29.327 11:49:59 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@203 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:19:29.327 11:49:59 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@204 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:19:29.327 11:49:59 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@207 -- # ip link add nvmf_br type bridge 00:19:29.327 11:49:59 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@208 -- # ip link set nvmf_br up 00:19:29.327 11:49:59 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@211 -- # ip link set nvmf_init_br master nvmf_br 00:19:29.327 11:49:59 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@212 -- # ip link set nvmf_init_br2 master nvmf_br 00:19:29.327 11:49:59 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@213 -- # ip link set nvmf_tgt_br master nvmf_br 00:19:29.327 11:49:59 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@214 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:19:29.327 11:49:59 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@217 -- # ipts -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:19:29.327 11:49:59 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT' 00:19:29.327 11:49:59 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@218 -- # ipts -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT 00:19:29.327 11:49:59 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT' 00:19:29.327 11:49:59 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@219 -- # ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:19:29.327 11:49:59 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@790 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT' 00:19:29.327 11:49:59 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@222 -- # ping -c 1 10.0.0.3 00:19:29.327 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 
00:19:29.327 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.069 ms 00:19:29.327 00:19:29.327 --- 10.0.0.3 ping statistics --- 00:19:29.327 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:19:29.327 rtt min/avg/max/mdev = 0.069/0.069/0.069/0.000 ms 00:19:29.327 11:49:59 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@223 -- # ping -c 1 10.0.0.4 00:19:29.327 PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data. 00:19:29.327 64 bytes from 10.0.0.4: icmp_seq=1 ttl=64 time=0.063 ms 00:19:29.327 00:19:29.327 --- 10.0.0.4 ping statistics --- 00:19:29.327 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:19:29.327 rtt min/avg/max/mdev = 0.063/0.063/0.063/0.000 ms 00:19:29.327 11:49:59 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@224 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:19:29.327 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:19:29.327 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.029 ms 00:19:29.327 00:19:29.327 --- 10.0.0.1 ping statistics --- 00:19:29.327 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:19:29.327 rtt min/avg/max/mdev = 0.029/0.029/0.029/0.000 ms 00:19:29.327 11:49:59 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@225 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.2 00:19:29.591 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:19:29.591 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.068 ms 00:19:29.591 00:19:29.591 --- 10.0.0.2 ping statistics --- 00:19:29.591 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:19:29.591 rtt min/avg/max/mdev = 0.068/0.068/0.068/0.000 ms 00:19:29.591 11:49:59 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@227 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:19:29.591 11:49:59 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@461 -- # return 0 00:19:29.591 11:49:59 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:19:29.591 11:49:59 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:19:29.591 11:49:59 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:19:29.591 11:49:59 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:19:29.591 11:49:59 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:19:29.591 11:49:59 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:19:29.591 11:49:59 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:19:29.591 11:49:59 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@16 -- # timing_enter start_nvmf_tgt 00:19:29.591 11:49:59 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@726 -- # xtrace_disable 00:19:29.591 11:49:59 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:19:29.591 11:49:59 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@19 -- # nvmfpid=90614 00:19:29.591 11:49:59 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@18 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:19:29.591 11:49:59 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@21 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:19:29.591 11:49:59 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@23 -- # waitforlisten 90614 00:19:29.591 11:49:59 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@835 -- # '[' -z 90614 ']' 00:19:29.591 
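The nvmf_veth_init sequence above first tries to tear down any interfaces left over from a previous run (hence the harmless "Cannot find device" lines) and then rebuilds the virtual topology the test runs on: two initiator veths in the root namespace, two target veths moved into the nvmf_tgt_ns_spdk namespace, all joined through the nvmf_br bridge, with iptables rules admitting NVMe/TCP traffic on port 4420 and a one-packet ping in each direction as a sanity check. A condensed, hand-runnable sketch of that same setup follows; names and addresses are copied from the log, and it assumes a clean machine with iproute2 and iptables, run as root.

    # Minimal sketch of the topology nvmf_veth_init builds in the log above.
    ip netns add nvmf_tgt_ns_spdk

    # Four veth pairs: the *_br ends stay in the root namespace and get bridged,
    # the nvmf_tgt_if* ends move into the target namespace.
    ip link add nvmf_init_if  type veth peer name nvmf_init_br
    ip link add nvmf_init_if2 type veth peer name nvmf_init_br2
    ip link add nvmf_tgt_if   type veth peer name nvmf_tgt_br
    ip link add nvmf_tgt_if2  type veth peer name nvmf_tgt_br2
    ip link set nvmf_tgt_if  netns nvmf_tgt_ns_spdk
    ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk

    # Initiator addresses 10.0.0.1/.2, target addresses 10.0.0.3/.4, all in one /24.
    ip addr add 10.0.0.1/24 dev nvmf_init_if
    ip addr add 10.0.0.2/24 dev nvmf_init_if2
    ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if
    ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2

    for dev in nvmf_init_if nvmf_init_if2 nvmf_init_br nvmf_init_br2 nvmf_tgt_br nvmf_tgt_br2; do
        ip link set "$dev" up
    done
    ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
    ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up
    ip netns exec nvmf_tgt_ns_spdk ip link set lo up

    # One bridge ties the root-namespace peers together so initiators can reach the target.
    ip link add nvmf_br type bridge
    ip link set nvmf_br up
    for dev in nvmf_init_br nvmf_init_br2 nvmf_tgt_br nvmf_tgt_br2; do
        ip link set "$dev" master nvmf_br
    done

    # Open NVMe/TCP port 4420; the comment embeds the rule text so later cleanup can find it.
    iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT \
        -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT'
    iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT \
        -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT'
    iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT \
        -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT'

    # Same connectivity check the log performs: each side pings the other once.
    ping -c 1 10.0.0.3 && ping -c 1 10.0.0.4
    ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1
    ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.2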
11:49:59 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:19:29.591 11:49:59 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@840 -- # local max_retries=100 00:19:29.591 11:49:59 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:19:29.591 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:19:29.591 11:49:59 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@844 -- # xtrace_disable 00:19:29.591 11:49:59 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:19:29.591 [2024-11-28 11:49:59.550067] Starting SPDK v25.01-pre git sha1 35cd3e84d / DPDK 24.11.0-rc4 initialization... 00:19:29.591 [2024-11-28 11:49:59.550162] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:19:29.591 [2024-11-28 11:49:59.678477] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.11.0-rc4 is used. There is no support for it in SPDK. Enabled only for validation. 00:19:29.591 [2024-11-28 11:49:59.711186] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:19:29.874 [2024-11-28 11:49:59.765265] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:19:29.874 [2024-11-28 11:49:59.765620] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:19:29.874 [2024-11-28 11:49:59.765719] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:19:29.874 [2024-11-28 11:49:59.765806] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:19:29.874 [2024-11-28 11:49:59.765886] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
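Just above, the harness launches nvmf_tgt inside that namespace (ip netns exec nvmf_tgt_ns_spdk .../nvmf_tgt -i 0 -e 0xFFFF -m 0xF, pid 90614 in this run) and waitforlisten blocks until the RPC socket at /var/tmp/spdk.sock is ready before any rpc_cmd is issued. The sketch below is a simplified stand-in for that launch-and-wait step: the polling loop is an assumption, not the real waitforlisten, which lives in autotest_common.sh and (as the max_retries=100 in the log suggests) also retries up to 100 times while checking the process is still alive.

    # Sketch: start the target inside the namespace and wait for its JSON-RPC socket.
    SPDK_BIN=/home/vagrant/spdk_repo/spdk/build/bin   # path as used in the log
    ip netns exec nvmf_tgt_ns_spdk "$SPDK_BIN/nvmf_tgt" -i 0 -e 0xFFFF -m 0xF &
    nvmfpid=$!

    # Simplified wait loop (stand-in for waitforlisten): poll for the UNIX-domain RPC socket.
    rpc_sock=/var/tmp/spdk.sock
    for _ in $(seq 1 100); do
        [ -S "$rpc_sock" ] && break
        kill -0 "$nvmfpid" 2>/dev/null || { echo "nvmf_tgt exited before listening" >&2; exit 1; }
        sleep 0.1
    done
    echo "nvmf_tgt running as pid $nvmfpid, RPC socket at $rpc_sock"

Note that network namespaces do not isolate the filesystem, so the path-based UNIX socket /var/tmp/spdk.sock created by the target inside nvmf_tgt_ns_spdk remains connectable from the root namespace, which is why the rpc_cmd calls that follow work without entering the namespace.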
00:19:29.874 [2024-11-28 11:49:59.767474] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:19:29.874 [2024-11-28 11:49:59.767542] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:19:29.874 [2024-11-28 11:49:59.767683] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:19:29.874 [2024-11-28 11:49:59.767690] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:19:29.874 [2024-11-28 11:49:59.843579] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:19:29.874 11:49:59 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:19:29.874 11:49:59 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@868 -- # return 0 00:19:29.874 11:49:59 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@24 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:19:29.874 11:49:59 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:29.874 11:49:59 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:19:29.874 [2024-11-28 11:49:59.932214] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:19:29.874 11:49:59 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:29.874 11:49:59 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@25 -- # timing_exit start_nvmf_tgt 00:19:29.874 11:49:59 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@732 -- # xtrace_disable 00:19:29.874 11:49:59 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:19:29.874 11:49:59 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@27 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:19:29.874 11:49:59 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:29.874 11:49:59 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:19:30.164 Malloc0 00:19:30.164 11:50:00 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:30.164 11:50:00 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@28 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:19:30.164 11:50:00 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:30.164 11:50:00 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:19:30.164 11:50:00 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:30.164 11:50:00 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@31 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 --nguid ABCDEF0123456789ABCDEF0123456789 --eui64 ABCDEF0123456789 00:19:30.164 11:50:00 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:30.164 11:50:00 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:19:30.164 11:50:00 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:30.164 11:50:00 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@34 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 00:19:30.164 11:50:00 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:30.164 11:50:00 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:19:30.164 [2024-11-28 11:50:00.058216] tcp.c:1081:nvmf_tcp_listen: 
*NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:19:30.164 11:50:00 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:30.164 11:50:00 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@35 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.3 -s 4420 00:19:30.164 11:50:00 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:30.164 11:50:00 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:19:30.164 11:50:00 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:30.164 11:50:00 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@37 -- # rpc_cmd nvmf_get_subsystems 00:19:30.164 11:50:00 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:30.164 11:50:00 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:19:30.164 [ 00:19:30.164 { 00:19:30.164 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:19:30.164 "subtype": "Discovery", 00:19:30.164 "listen_addresses": [ 00:19:30.164 { 00:19:30.164 "trtype": "TCP", 00:19:30.164 "adrfam": "IPv4", 00:19:30.164 "traddr": "10.0.0.3", 00:19:30.164 "trsvcid": "4420" 00:19:30.164 } 00:19:30.164 ], 00:19:30.164 "allow_any_host": true, 00:19:30.164 "hosts": [] 00:19:30.164 }, 00:19:30.164 { 00:19:30.164 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:19:30.164 "subtype": "NVMe", 00:19:30.164 "listen_addresses": [ 00:19:30.164 { 00:19:30.164 "trtype": "TCP", 00:19:30.164 "adrfam": "IPv4", 00:19:30.164 "traddr": "10.0.0.3", 00:19:30.164 "trsvcid": "4420" 00:19:30.164 } 00:19:30.164 ], 00:19:30.164 "allow_any_host": true, 00:19:30.164 "hosts": [], 00:19:30.164 "serial_number": "SPDK00000000000001", 00:19:30.164 "model_number": "SPDK bdev Controller", 00:19:30.164 "max_namespaces": 32, 00:19:30.164 "min_cntlid": 1, 00:19:30.164 "max_cntlid": 65519, 00:19:30.164 "namespaces": [ 00:19:30.164 { 00:19:30.164 "nsid": 1, 00:19:30.164 "bdev_name": "Malloc0", 00:19:30.164 "name": "Malloc0", 00:19:30.164 "nguid": "ABCDEF0123456789ABCDEF0123456789", 00:19:30.164 "eui64": "ABCDEF0123456789", 00:19:30.164 "uuid": "90ce9792-f5da-48f7-8e71-e513a8dfb0d8" 00:19:30.164 } 00:19:30.164 ] 00:19:30.164 } 00:19:30.164 ] 00:19:30.164 11:50:00 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:30.164 11:50:00 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@39 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.3 trsvcid:4420 subnqn:nqn.2014-08.org.nvmexpress.discovery' -L all 00:19:30.164 [2024-11-28 11:50:00.117243] Starting SPDK v25.01-pre git sha1 35cd3e84d / DPDK 24.11.0-rc4 initialization... 00:19:30.164 [2024-11-28 11:50:00.117362] [ DPDK EAL parameters: identify --no-shconf -c 0x1 -n 1 -m 0 --no-pci --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid90643 ] 00:19:30.164 [2024-11-28 11:50:00.242268] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.11.0-rc4 is used. There is no support for it in SPDK. Enabled only for validation. 
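The block above provisions the running target entirely over JSON-RPC (rpc_cmd in the harness is a thin wrapper around scripts/rpc.py): it creates the TCP transport, a 64 MiB malloc bdev with 512-byte blocks, subsystem nqn.2016-06.io.spdk:cnode1 with that bdev as namespace 1, and TCP listeners on 10.0.0.3:4420 for both the subsystem and discovery, then dumps the subsystem JSON and runs spdk_nvme_identify against the discovery NQN, which produces the controller report that follows. Roughly the same sequence issued by hand against the default /var/tmp/spdk.sock would look like the sketch below; flags and identifiers are copied from the log, and the meaning of the individual transport flags is not restated here.

    # Sketch: provision the target over JSON-RPC, then query it from the initiator side.
    RPC=/home/vagrant/spdk_repo/spdk/scripts/rpc.py   # rpc_cmd in the harness forwards to this script

    $RPC nvmf_create_transport -t tcp -o -u 8192       # transport options exactly as the test issues them
    $RPC bdev_malloc_create 64 512 -b Malloc0          # MALLOC_BDEV_SIZE=64, MALLOC_BLOCK_SIZE=512
    $RPC nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
    $RPC nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 \
         --nguid ABCDEF0123456789ABCDEF0123456789 --eui64 ABCDEF0123456789
    $RPC nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420
    $RPC nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.3 -s 4420
    $RPC nvmf_get_subsystems                            # dumps the JSON shown above

    # Identify the discovery controller over NVMe/TCP (its report appears later in the log).
    /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify \
        -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.3 trsvcid:4420 subnqn:nqn.2014-08.org.nvmexpress.discovery' \
        -L all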
00:19:30.439 [2024-11-28 11:50:00.279030] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 0] setting state to connect adminq (no timeout) 00:19:30.439 [2024-11-28 11:50:00.279121] nvme_tcp.c:2238:nvme_tcp_qpair_connect_sock: *DEBUG*: adrfam 1 ai_family 2 00:19:30.439 [2024-11-28 11:50:00.279127] nvme_tcp.c:2242:nvme_tcp_qpair_connect_sock: *DEBUG*: trsvcid is 4420 00:19:30.439 [2024-11-28 11:50:00.279155] nvme_tcp.c:2263:nvme_tcp_qpair_connect_sock: *DEBUG*: sock_impl_name is (null) 00:19:30.439 [2024-11-28 11:50:00.279167] sock.c: 373:spdk_sock_connect_ext: *DEBUG*: Creating a client socket using impl posix 00:19:30.439 [2024-11-28 11:50:00.279621] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 0] setting state to wait for connect adminq (no timeout) 00:19:30.439 [2024-11-28 11:50:00.279700] nvme_tcp.c:1455:nvme_tcp_send_icreq_complete: *DEBUG*: Complete the icreq send for tqpair=0x141da10 0 00:19:30.439 [2024-11-28 11:50:00.292333] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 1 00:19:30.439 [2024-11-28 11:50:00.292374] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =1 00:19:30.439 [2024-11-28 11:50:00.292389] nvme_tcp.c:1501:nvme_tcp_icresp_handle: *DEBUG*: host_hdgst_enable: 0 00:19:30.439 [2024-11-28 11:50:00.292393] nvme_tcp.c:1502:nvme_tcp_icresp_handle: *DEBUG*: host_ddgst_enable: 0 00:19:30.439 [2024-11-28 11:50:00.292434] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:19:30.439 [2024-11-28 11:50:00.292441] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:19:30.439 [2024-11-28 11:50:00.292445] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x141da10) 00:19:30.439 [2024-11-28 11:50:00.292461] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:0 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x400 00:19:30.439 [2024-11-28 11:50:00.292490] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1475180, cid 0, qid 0 00:19:30.439 [2024-11-28 11:50:00.299360] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:19:30.439 [2024-11-28 11:50:00.299382] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:19:30.439 [2024-11-28 11:50:00.299402] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:19:30.439 [2024-11-28 11:50:00.299407] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1475180) on tqpair=0x141da10 00:19:30.439 [2024-11-28 11:50:00.299422] nvme_fabric.c: 621:nvme_fabric_qpair_connect_poll: *DEBUG*: CNTLID 0x0001 00:19:30.439 [2024-11-28 11:50:00.299429] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to read vs (no timeout) 00:19:30.439 [2024-11-28 11:50:00.299435] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to read vs wait for vs (no timeout) 00:19:30.439 [2024-11-28 11:50:00.299456] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:19:30.439 [2024-11-28 11:50:00.299462] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:19:30.439 [2024-11-28 11:50:00.299465] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x141da10) 00:19:30.439 [2024-11-28 11:50:00.299474] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:30.439 [2024-11-28 
11:50:00.299501] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1475180, cid 0, qid 0 00:19:30.439 [2024-11-28 11:50:00.299566] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:19:30.439 [2024-11-28 11:50:00.299572] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:19:30.439 [2024-11-28 11:50:00.299576] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:19:30.439 [2024-11-28 11:50:00.299579] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1475180) on tqpair=0x141da10 00:19:30.439 [2024-11-28 11:50:00.299585] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to read cap (no timeout) 00:19:30.439 [2024-11-28 11:50:00.299592] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to read cap wait for cap (no timeout) 00:19:30.439 [2024-11-28 11:50:00.299598] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:19:30.439 [2024-11-28 11:50:00.299602] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:19:30.439 [2024-11-28 11:50:00.299606] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x141da10) 00:19:30.439 [2024-11-28 11:50:00.299613] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:30.439 [2024-11-28 11:50:00.299659] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1475180, cid 0, qid 0 00:19:30.439 [2024-11-28 11:50:00.299734] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:19:30.439 [2024-11-28 11:50:00.299740] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:19:30.439 [2024-11-28 11:50:00.299744] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:19:30.439 [2024-11-28 11:50:00.299748] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1475180) on tqpair=0x141da10 00:19:30.439 [2024-11-28 11:50:00.299754] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to check en (no timeout) 00:19:30.439 [2024-11-28 11:50:00.299762] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to check en wait for cc (timeout 15000 ms) 00:19:30.439 [2024-11-28 11:50:00.299769] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:19:30.439 [2024-11-28 11:50:00.299773] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:19:30.439 [2024-11-28 11:50:00.299777] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x141da10) 00:19:30.439 [2024-11-28 11:50:00.299784] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:30.439 [2024-11-28 11:50:00.299800] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1475180, cid 0, qid 0 00:19:30.440 [2024-11-28 11:50:00.299864] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:19:30.440 [2024-11-28 11:50:00.299871] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:19:30.440 [2024-11-28 11:50:00.299875] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:19:30.440 [2024-11-28 11:50:00.299879] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1475180) on tqpair=0x141da10 00:19:30.440 [2024-11-28 
11:50:00.299885] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to disable and wait for CSTS.RDY = 0 (timeout 15000 ms) 00:19:30.440 [2024-11-28 11:50:00.299895] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:19:30.440 [2024-11-28 11:50:00.299899] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:19:30.440 [2024-11-28 11:50:00.299903] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x141da10) 00:19:30.440 [2024-11-28 11:50:00.299910] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:30.440 [2024-11-28 11:50:00.299925] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1475180, cid 0, qid 0 00:19:30.440 [2024-11-28 11:50:00.299984] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:19:30.440 [2024-11-28 11:50:00.299991] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:19:30.440 [2024-11-28 11:50:00.299994] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:19:30.440 [2024-11-28 11:50:00.299998] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1475180) on tqpair=0x141da10 00:19:30.440 [2024-11-28 11:50:00.300004] nvme_ctrlr.c:3906:nvme_ctrlr_process_init_wait_for_ready_0: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] CC.EN = 0 && CSTS.RDY = 0 00:19:30.440 [2024-11-28 11:50:00.300009] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to controller is disabled (timeout 15000 ms) 00:19:30.440 [2024-11-28 11:50:00.300017] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to enable controller by writing CC.EN = 1 (timeout 15000 ms) 00:19:30.440 [2024-11-28 11:50:00.300123] nvme_ctrlr.c:4104:nvme_ctrlr_process_init: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] Setting CC.EN = 1 00:19:30.440 [2024-11-28 11:50:00.300128] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to enable controller by writing CC.EN = 1 reg (timeout 15000 ms) 00:19:30.440 [2024-11-28 11:50:00.300137] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:19:30.440 [2024-11-28 11:50:00.300141] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:19:30.440 [2024-11-28 11:50:00.300145] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x141da10) 00:19:30.440 [2024-11-28 11:50:00.300152] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY SET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:30.440 [2024-11-28 11:50:00.300169] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1475180, cid 0, qid 0 00:19:30.440 [2024-11-28 11:50:00.300237] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:19:30.440 [2024-11-28 11:50:00.300243] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:19:30.440 [2024-11-28 11:50:00.300247] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:19:30.440 [2024-11-28 11:50:00.300251] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1475180) on tqpair=0x141da10 00:19:30.440 [2024-11-28 11:50:00.300256] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to wait for CSTS.RDY = 1 (timeout 15000 ms) 
00:19:30.440 [2024-11-28 11:50:00.300265] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:19:30.440 [2024-11-28 11:50:00.300269] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:19:30.440 [2024-11-28 11:50:00.300274] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x141da10) 00:19:30.440 [2024-11-28 11:50:00.300281] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:30.440 [2024-11-28 11:50:00.300296] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1475180, cid 0, qid 0 00:19:30.440 [2024-11-28 11:50:00.300398] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:19:30.440 [2024-11-28 11:50:00.300406] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:19:30.440 [2024-11-28 11:50:00.300410] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:19:30.440 [2024-11-28 11:50:00.300414] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1475180) on tqpair=0x141da10 00:19:30.440 [2024-11-28 11:50:00.300419] nvme_ctrlr.c:3941:nvme_ctrlr_process_init_enable_wait_for_ready_1: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] CC.EN = 1 && CSTS.RDY = 1 - controller is ready 00:19:30.440 [2024-11-28 11:50:00.300424] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to reset admin queue (timeout 30000 ms) 00:19:30.440 [2024-11-28 11:50:00.300432] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to identify controller (no timeout) 00:19:30.440 [2024-11-28 11:50:00.300444] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to wait for identify controller (timeout 30000 ms) 00:19:30.440 [2024-11-28 11:50:00.300455] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:19:30.440 [2024-11-28 11:50:00.300459] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x141da10) 00:19:30.440 [2024-11-28 11:50:00.300467] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:0 nsid:0 cdw10:00000001 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:30.440 [2024-11-28 11:50:00.300486] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1475180, cid 0, qid 0 00:19:30.440 [2024-11-28 11:50:00.300589] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:19:30.440 [2024-11-28 11:50:00.300596] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:19:30.440 [2024-11-28 11:50:00.300600] nvme_tcp.c:1619:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:19:30.440 [2024-11-28 11:50:00.300604] nvme_tcp.c:1620:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x141da10): datao=0, datal=4096, cccid=0 00:19:30.440 [2024-11-28 11:50:00.300609] nvme_tcp.c:1631:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x1475180) on tqpair(0x141da10): expected_datao=0, payload_size=4096 00:19:30.440 [2024-11-28 11:50:00.300614] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:19:30.440 [2024-11-28 11:50:00.300622] nvme_tcp.c:1421:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:19:30.440 [2024-11-28 11:50:00.300627] nvme_tcp.c:1255:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:19:30.440 [2024-11-28 11:50:00.300636] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 
00:19:30.440 [2024-11-28 11:50:00.300642] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:19:30.440 [2024-11-28 11:50:00.300645] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:19:30.440 [2024-11-28 11:50:00.300649] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1475180) on tqpair=0x141da10 00:19:30.440 [2024-11-28 11:50:00.300658] nvme_ctrlr.c:2081:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] transport max_xfer_size 4294967295 00:19:30.440 [2024-11-28 11:50:00.300664] nvme_ctrlr.c:2085:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] MDTS max_xfer_size 131072 00:19:30.440 [2024-11-28 11:50:00.300669] nvme_ctrlr.c:2088:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] CNTLID 0x0001 00:19:30.440 [2024-11-28 11:50:00.300679] nvme_ctrlr.c:2112:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] transport max_sges 16 00:19:30.440 [2024-11-28 11:50:00.300685] nvme_ctrlr.c:2127:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] fuses compare and write: 1 00:19:30.440 [2024-11-28 11:50:00.300690] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to configure AER (timeout 30000 ms) 00:19:30.440 [2024-11-28 11:50:00.300713] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to wait for configure aer (timeout 30000 ms) 00:19:30.440 [2024-11-28 11:50:00.300727] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:19:30.440 [2024-11-28 11:50:00.300732] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:19:30.440 [2024-11-28 11:50:00.300735] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x141da10) 00:19:30.440 [2024-11-28 11:50:00.300743] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES ASYNC EVENT CONFIGURATION cid:0 cdw10:0000000b SGL DATA BLOCK OFFSET 0x0 len:0x0 00:19:30.440 [2024-11-28 11:50:00.300762] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1475180, cid 0, qid 0 00:19:30.440 [2024-11-28 11:50:00.300821] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:19:30.440 [2024-11-28 11:50:00.300827] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:19:30.440 [2024-11-28 11:50:00.300831] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:19:30.440 [2024-11-28 11:50:00.300835] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1475180) on tqpair=0x141da10 00:19:30.440 [2024-11-28 11:50:00.300844] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:19:30.440 [2024-11-28 11:50:00.300848] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:19:30.440 [2024-11-28 11:50:00.300852] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x141da10) 00:19:30.440 [2024-11-28 11:50:00.300858] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:19:30.440 [2024-11-28 11:50:00.300864] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:19:30.440 [2024-11-28 11:50:00.300868] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:19:30.440 [2024-11-28 11:50:00.300872] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=1 on tqpair(0x141da10) 
00:19:30.440 [2024-11-28 11:50:00.300877] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:19:30.440 [2024-11-28 11:50:00.300883] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:19:30.440 [2024-11-28 11:50:00.300887] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:19:30.440 [2024-11-28 11:50:00.300890] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=2 on tqpair(0x141da10) 00:19:30.440 [2024-11-28 11:50:00.300896] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:19:30.440 [2024-11-28 11:50:00.300901] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:19:30.440 [2024-11-28 11:50:00.300905] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:19:30.440 [2024-11-28 11:50:00.300908] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x141da10) 00:19:30.440 [2024-11-28 11:50:00.300914] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:19:30.440 [2024-11-28 11:50:00.300919] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to set keep alive timeout (timeout 30000 ms) 00:19:30.440 [2024-11-28 11:50:00.300927] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to wait for set keep alive timeout (timeout 30000 ms) 00:19:30.440 [2024-11-28 11:50:00.300934] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:19:30.440 [2024-11-28 11:50:00.300937] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x141da10) 00:19:30.441 [2024-11-28 11:50:00.300944] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES KEEP ALIVE TIMER cid:4 cdw10:0000000f SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:30.441 [2024-11-28 11:50:00.300967] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1475180, cid 0, qid 0 00:19:30.441 [2024-11-28 11:50:00.300975] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1475300, cid 1, qid 0 00:19:30.441 [2024-11-28 11:50:00.300979] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1475480, cid 2, qid 0 00:19:30.441 [2024-11-28 11:50:00.300984] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1475600, cid 3, qid 0 00:19:30.441 [2024-11-28 11:50:00.300988] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1475780, cid 4, qid 0 00:19:30.441 [2024-11-28 11:50:00.301096] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:19:30.441 [2024-11-28 11:50:00.301103] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:19:30.441 [2024-11-28 11:50:00.301107] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:19:30.441 [2024-11-28 11:50:00.301110] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1475780) on tqpair=0x141da10 00:19:30.441 [2024-11-28 11:50:00.301116] nvme_ctrlr.c:3059:nvme_ctrlr_set_keep_alive_timeout_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] Sending keep alive every 5000000 us 00:19:30.441 [2024-11-28 11:50:00.301122] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to ready (no timeout) 00:19:30.441 
[2024-11-28 11:50:00.301132] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:19:30.441 [2024-11-28 11:50:00.301137] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x141da10) 00:19:30.441 [2024-11-28 11:50:00.301144] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:0 cdw10:00000001 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:30.441 [2024-11-28 11:50:00.301160] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1475780, cid 4, qid 0 00:19:30.441 [2024-11-28 11:50:00.301228] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:19:30.441 [2024-11-28 11:50:00.301234] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:19:30.441 [2024-11-28 11:50:00.301237] nvme_tcp.c:1619:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:19:30.441 [2024-11-28 11:50:00.301241] nvme_tcp.c:1620:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x141da10): datao=0, datal=4096, cccid=4 00:19:30.441 [2024-11-28 11:50:00.301245] nvme_tcp.c:1631:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x1475780) on tqpair(0x141da10): expected_datao=0, payload_size=4096 00:19:30.441 [2024-11-28 11:50:00.301250] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:19:30.441 [2024-11-28 11:50:00.301257] nvme_tcp.c:1421:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:19:30.441 [2024-11-28 11:50:00.301260] nvme_tcp.c:1255:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:19:30.441 [2024-11-28 11:50:00.301268] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:19:30.441 [2024-11-28 11:50:00.301274] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:19:30.441 [2024-11-28 11:50:00.301277] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:19:30.441 [2024-11-28 11:50:00.301281] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1475780) on tqpair=0x141da10 00:19:30.441 [2024-11-28 11:50:00.301295] nvme_ctrlr.c:4202:nvme_ctrlr_process_init: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] Ctrlr already in ready state 00:19:30.441 [2024-11-28 11:50:00.301337] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:19:30.441 [2024-11-28 11:50:00.301360] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x141da10) 00:19:30.441 [2024-11-28 11:50:00.301367] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:00ff0070 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:30.441 [2024-11-28 11:50:00.301375] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:19:30.441 [2024-11-28 11:50:00.301379] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:19:30.441 [2024-11-28 11:50:00.301383] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0x141da10) 00:19:30.441 [2024-11-28 11:50:00.301389] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: KEEP ALIVE (18) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 00:19:30.441 [2024-11-28 11:50:00.301415] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1475780, cid 4, qid 0 00:19:30.441 [2024-11-28 11:50:00.301423] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1475900, cid 5, qid 0 00:19:30.441 [2024-11-28 11:50:00.301543] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:19:30.441 [2024-11-28 11:50:00.301550] 
nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:19:30.441 [2024-11-28 11:50:00.301553] nvme_tcp.c:1619:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:19:30.441 [2024-11-28 11:50:00.301557] nvme_tcp.c:1620:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x141da10): datao=0, datal=1024, cccid=4 00:19:30.441 [2024-11-28 11:50:00.301561] nvme_tcp.c:1631:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x1475780) on tqpair(0x141da10): expected_datao=0, payload_size=1024 00:19:30.441 [2024-11-28 11:50:00.301566] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:19:30.441 [2024-11-28 11:50:00.301572] nvme_tcp.c:1421:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:19:30.441 [2024-11-28 11:50:00.301576] nvme_tcp.c:1255:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:19:30.441 [2024-11-28 11:50:00.301582] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:19:30.441 [2024-11-28 11:50:00.301587] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:19:30.441 [2024-11-28 11:50:00.301591] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:19:30.441 [2024-11-28 11:50:00.301595] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1475900) on tqpair=0x141da10 00:19:30.441 [2024-11-28 11:50:00.301612] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:19:30.441 [2024-11-28 11:50:00.301619] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:19:30.441 [2024-11-28 11:50:00.301623] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:19:30.441 [2024-11-28 11:50:00.301627] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1475780) on tqpair=0x141da10 00:19:30.441 [2024-11-28 11:50:00.301638] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:19:30.441 [2024-11-28 11:50:00.301642] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x141da10) 00:19:30.441 [2024-11-28 11:50:00.301649] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:02ff0070 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:30.441 [2024-11-28 11:50:00.301671] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1475780, cid 4, qid 0 00:19:30.441 [2024-11-28 11:50:00.301754] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:19:30.441 [2024-11-28 11:50:00.301761] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:19:30.441 [2024-11-28 11:50:00.301764] nvme_tcp.c:1619:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:19:30.441 [2024-11-28 11:50:00.301768] nvme_tcp.c:1620:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x141da10): datao=0, datal=3072, cccid=4 00:19:30.441 [2024-11-28 11:50:00.301773] nvme_tcp.c:1631:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x1475780) on tqpair(0x141da10): expected_datao=0, payload_size=3072 00:19:30.441 [2024-11-28 11:50:00.301777] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:19:30.441 [2024-11-28 11:50:00.301784] nvme_tcp.c:1421:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:19:30.441 [2024-11-28 11:50:00.301788] nvme_tcp.c:1255:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:19:30.441 [2024-11-28 11:50:00.301796] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:19:30.441 [2024-11-28 11:50:00.301801] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:19:30.441 [2024-11-28 11:50:00.301805] 
nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:19:30.441 [2024-11-28 11:50:00.301809] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1475780) on tqpair=0x141da10 00:19:30.441 [2024-11-28 11:50:00.301818] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:19:30.441 [2024-11-28 11:50:00.301822] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x141da10) 00:19:30.441 [2024-11-28 11:50:00.301829] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:00010070 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:30.441 [2024-11-28 11:50:00.301850] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1475780, cid 4, qid 0 00:19:30.441 [2024-11-28 11:50:00.301930] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:19:30.441 [2024-11-28 11:50:00.301936] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:19:30.441 [2024-11-28 11:50:00.301940] nvme_tcp.c:1619:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:19:30.441 [2024-11-28 11:50:00.301944] nvme_tcp.c:1620:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x141da10): datao=0, datal=8, cccid=4 00:19:30.441 [2024-11-28 11:50:00.301948] nvme_tcp.c:1631:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x1475780) on tqpair(0x141da10): expected_datao=0, payload_size=8 00:19:30.441 [2024-11-28 11:50:00.301952] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:19:30.441 [2024-11-28 11:50:00.301959] nvme_tcp.c:1421:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:19:30.441 [2024-11-28 11:50:00.301963] nvme_tcp.c:1255:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:19:30.441 [2024-11-28 11:50:00.301977] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:19:30.441 [2024-11-28 11:50:00.301984] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:19:30.441 [2024-11-28 11:50:00.301988] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:19:30.441 [2024-11-28 11:50:00.301992] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1475780) on tqpair=0x141da10 00:19:30.441 ===================================================== 00:19:30.441 NVMe over Fabrics controller at 10.0.0.3:4420: nqn.2014-08.org.nvmexpress.discovery 00:19:30.441 ===================================================== 00:19:30.441 Controller Capabilities/Features 00:19:30.441 ================================ 00:19:30.441 Vendor ID: 0000 00:19:30.441 Subsystem Vendor ID: 0000 00:19:30.441 Serial Number: .................... 00:19:30.441 Model Number: ........................................ 
00:19:30.442 Firmware Version: 25.01 00:19:30.442 Recommended Arb Burst: 0 00:19:30.442 IEEE OUI Identifier: 00 00 00 00:19:30.442 Multi-path I/O 00:19:30.442 May have multiple subsystem ports: No 00:19:30.442 May have multiple controllers: No 00:19:30.442 Associated with SR-IOV VF: No 00:19:30.442 Max Data Transfer Size: 131072 00:19:30.442 Max Number of Namespaces: 0 00:19:30.442 Max Number of I/O Queues: 1024 00:19:30.442 NVMe Specification Version (VS): 1.3 00:19:30.442 NVMe Specification Version (Identify): 1.3 00:19:30.442 Maximum Queue Entries: 128 00:19:30.442 Contiguous Queues Required: Yes 00:19:30.442 Arbitration Mechanisms Supported 00:19:30.442 Weighted Round Robin: Not Supported 00:19:30.442 Vendor Specific: Not Supported 00:19:30.442 Reset Timeout: 15000 ms 00:19:30.442 Doorbell Stride: 4 bytes 00:19:30.442 NVM Subsystem Reset: Not Supported 00:19:30.442 Command Sets Supported 00:19:30.442 NVM Command Set: Supported 00:19:30.442 Boot Partition: Not Supported 00:19:30.442 Memory Page Size Minimum: 4096 bytes 00:19:30.442 Memory Page Size Maximum: 4096 bytes 00:19:30.442 Persistent Memory Region: Not Supported 00:19:30.442 Optional Asynchronous Events Supported 00:19:30.442 Namespace Attribute Notices: Not Supported 00:19:30.442 Firmware Activation Notices: Not Supported 00:19:30.442 ANA Change Notices: Not Supported 00:19:30.442 PLE Aggregate Log Change Notices: Not Supported 00:19:30.442 LBA Status Info Alert Notices: Not Supported 00:19:30.442 EGE Aggregate Log Change Notices: Not Supported 00:19:30.442 Normal NVM Subsystem Shutdown event: Not Supported 00:19:30.442 Zone Descriptor Change Notices: Not Supported 00:19:30.442 Discovery Log Change Notices: Supported 00:19:30.442 Controller Attributes 00:19:30.442 128-bit Host Identifier: Not Supported 00:19:30.442 Non-Operational Permissive Mode: Not Supported 00:19:30.442 NVM Sets: Not Supported 00:19:30.442 Read Recovery Levels: Not Supported 00:19:30.442 Endurance Groups: Not Supported 00:19:30.442 Predictable Latency Mode: Not Supported 00:19:30.442 Traffic Based Keep ALive: Not Supported 00:19:30.442 Namespace Granularity: Not Supported 00:19:30.442 SQ Associations: Not Supported 00:19:30.442 UUID List: Not Supported 00:19:30.442 Multi-Domain Subsystem: Not Supported 00:19:30.442 Fixed Capacity Management: Not Supported 00:19:30.442 Variable Capacity Management: Not Supported 00:19:30.442 Delete Endurance Group: Not Supported 00:19:30.442 Delete NVM Set: Not Supported 00:19:30.442 Extended LBA Formats Supported: Not Supported 00:19:30.442 Flexible Data Placement Supported: Not Supported 00:19:30.442 00:19:30.442 Controller Memory Buffer Support 00:19:30.442 ================================ 00:19:30.442 Supported: No 00:19:30.442 00:19:30.442 Persistent Memory Region Support 00:19:30.442 ================================ 00:19:30.442 Supported: No 00:19:30.442 00:19:30.442 Admin Command Set Attributes 00:19:30.442 ============================ 00:19:30.442 Security Send/Receive: Not Supported 00:19:30.442 Format NVM: Not Supported 00:19:30.442 Firmware Activate/Download: Not Supported 00:19:30.442 Namespace Management: Not Supported 00:19:30.442 Device Self-Test: Not Supported 00:19:30.442 Directives: Not Supported 00:19:30.442 NVMe-MI: Not Supported 00:19:30.442 Virtualization Management: Not Supported 00:19:30.442 Doorbell Buffer Config: Not Supported 00:19:30.442 Get LBA Status Capability: Not Supported 00:19:30.442 Command & Feature Lockdown Capability: Not Supported 00:19:30.442 Abort Command Limit: 1 00:19:30.442 Async 
Event Request Limit: 4 00:19:30.442 Number of Firmware Slots: N/A 00:19:30.442 Firmware Slot 1 Read-Only: N/A 00:19:30.442 Firmware Activation Without Reset: N/A 00:19:30.442 Multiple Update Detection Support: N/A 00:19:30.442 Firmware Update Granularity: No Information Provided 00:19:30.442 Per-Namespace SMART Log: No 00:19:30.442 Asymmetric Namespace Access Log Page: Not Supported 00:19:30.442 Subsystem NQN: nqn.2014-08.org.nvmexpress.discovery 00:19:30.442 Command Effects Log Page: Not Supported 00:19:30.442 Get Log Page Extended Data: Supported 00:19:30.442 Telemetry Log Pages: Not Supported 00:19:30.442 Persistent Event Log Pages: Not Supported 00:19:30.442 Supported Log Pages Log Page: May Support 00:19:30.442 Commands Supported & Effects Log Page: Not Supported 00:19:30.442 Feature Identifiers & Effects Log Page:May Support 00:19:30.442 NVMe-MI Commands & Effects Log Page: May Support 00:19:30.442 Data Area 4 for Telemetry Log: Not Supported 00:19:30.442 Error Log Page Entries Supported: 128 00:19:30.442 Keep Alive: Not Supported 00:19:30.442 00:19:30.442 NVM Command Set Attributes 00:19:30.442 ========================== 00:19:30.442 Submission Queue Entry Size 00:19:30.442 Max: 1 00:19:30.442 Min: 1 00:19:30.442 Completion Queue Entry Size 00:19:30.442 Max: 1 00:19:30.442 Min: 1 00:19:30.442 Number of Namespaces: 0 00:19:30.442 Compare Command: Not Supported 00:19:30.442 Write Uncorrectable Command: Not Supported 00:19:30.442 Dataset Management Command: Not Supported 00:19:30.442 Write Zeroes Command: Not Supported 00:19:30.442 Set Features Save Field: Not Supported 00:19:30.442 Reservations: Not Supported 00:19:30.442 Timestamp: Not Supported 00:19:30.442 Copy: Not Supported 00:19:30.442 Volatile Write Cache: Not Present 00:19:30.442 Atomic Write Unit (Normal): 1 00:19:30.442 Atomic Write Unit (PFail): 1 00:19:30.442 Atomic Compare & Write Unit: 1 00:19:30.442 Fused Compare & Write: Supported 00:19:30.442 Scatter-Gather List 00:19:30.442 SGL Command Set: Supported 00:19:30.442 SGL Keyed: Supported 00:19:30.442 SGL Bit Bucket Descriptor: Not Supported 00:19:30.442 SGL Metadata Pointer: Not Supported 00:19:30.442 Oversized SGL: Not Supported 00:19:30.442 SGL Metadata Address: Not Supported 00:19:30.442 SGL Offset: Supported 00:19:30.442 Transport SGL Data Block: Not Supported 00:19:30.442 Replay Protected Memory Block: Not Supported 00:19:30.442 00:19:30.442 Firmware Slot Information 00:19:30.442 ========================= 00:19:30.442 Active slot: 0 00:19:30.442 00:19:30.442 00:19:30.442 Error Log 00:19:30.442 ========= 00:19:30.442 00:19:30.442 Active Namespaces 00:19:30.442 ================= 00:19:30.442 Discovery Log Page 00:19:30.442 ================== 00:19:30.442 Generation Counter: 2 00:19:30.442 Number of Records: 2 00:19:30.442 Record Format: 0 00:19:30.442 00:19:30.442 Discovery Log Entry 0 00:19:30.442 ---------------------- 00:19:30.442 Transport Type: 3 (TCP) 00:19:30.442 Address Family: 1 (IPv4) 00:19:30.442 Subsystem Type: 3 (Current Discovery Subsystem) 00:19:30.442 Entry Flags: 00:19:30.442 Duplicate Returned Information: 1 00:19:30.442 Explicit Persistent Connection Support for Discovery: 1 00:19:30.442 Transport Requirements: 00:19:30.442 Secure Channel: Not Required 00:19:30.442 Port ID: 0 (0x0000) 00:19:30.442 Controller ID: 65535 (0xffff) 00:19:30.442 Admin Max SQ Size: 128 00:19:30.442 Transport Service Identifier: 4420 00:19:30.442 NVM Subsystem Qualified Name: nqn.2014-08.org.nvmexpress.discovery 00:19:30.442 Transport Address: 10.0.0.3 00:19:30.442 
Discovery Log Entry 1 00:19:30.442 ---------------------- 00:19:30.442 Transport Type: 3 (TCP) 00:19:30.442 Address Family: 1 (IPv4) 00:19:30.442 Subsystem Type: 2 (NVM Subsystem) 00:19:30.442 Entry Flags: 00:19:30.442 Duplicate Returned Information: 0 00:19:30.442 Explicit Persistent Connection Support for Discovery: 0 00:19:30.442 Transport Requirements: 00:19:30.442 Secure Channel: Not Required 00:19:30.442 Port ID: 0 (0x0000) 00:19:30.442 Controller ID: 65535 (0xffff) 00:19:30.442 Admin Max SQ Size: 128 00:19:30.442 Transport Service Identifier: 4420 00:19:30.442 NVM Subsystem Qualified Name: nqn.2016-06.io.spdk:cnode1 00:19:30.442 Transport Address: 10.0.0.3 [2024-11-28 11:50:00.302113] nvme_ctrlr.c:4399:nvme_ctrlr_destruct_async: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] Prepare to destruct SSD 00:19:30.442 [2024-11-28 11:50:00.302128] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1475180) on tqpair=0x141da10 00:19:30.442 [2024-11-28 11:50:00.302135] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:30.442 [2024-11-28 11:50:00.302141] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1475300) on tqpair=0x141da10 00:19:30.442 [2024-11-28 11:50:00.302146] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:30.442 [2024-11-28 11:50:00.302151] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1475480) on tqpair=0x141da10 00:19:30.442 [2024-11-28 11:50:00.302155] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:30.442 [2024-11-28 11:50:00.302160] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1475600) on tqpair=0x141da10 00:19:30.443 [2024-11-28 11:50:00.302164] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:30.443 [2024-11-28 11:50:00.302182] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:19:30.443 [2024-11-28 11:50:00.302187] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:19:30.443 [2024-11-28 11:50:00.302191] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x141da10) 00:19:30.443 [2024-11-28 11:50:00.302198] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:30.443 [2024-11-28 11:50:00.302220] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1475600, cid 3, qid 0 00:19:30.443 [2024-11-28 11:50:00.302281] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:19:30.443 [2024-11-28 11:50:00.302288] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:19:30.443 [2024-11-28 11:50:00.302291] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:19:30.443 [2024-11-28 11:50:00.302309] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1475600) on tqpair=0x141da10 00:19:30.443 [2024-11-28 11:50:00.302319] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:19:30.443 [2024-11-28 11:50:00.302323] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:19:30.443 [2024-11-28 11:50:00.302327] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x141da10) 00:19:30.443 [2024-11-28 
11:50:00.302349] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY SET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:30.443 [2024-11-28 11:50:00.302373] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1475600, cid 3, qid 0 00:19:30.443 [2024-11-28 11:50:00.302468] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:19:30.443 [2024-11-28 11:50:00.302476] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:19:30.443 [2024-11-28 11:50:00.302480] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:19:30.443 [2024-11-28 11:50:00.302484] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1475600) on tqpair=0x141da10 00:19:30.443 [2024-11-28 11:50:00.302490] nvme_ctrlr.c:1151:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] RTD3E = 0 us 00:19:30.443 [2024-11-28 11:50:00.302495] nvme_ctrlr.c:1154:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] shutdown timeout = 10000 ms 00:19:30.443 [2024-11-28 11:50:00.302506] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:19:30.443 [2024-11-28 11:50:00.302510] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:19:30.443 [2024-11-28 11:50:00.302514] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x141da10) 00:19:30.443 [2024-11-28 11:50:00.302521] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:30.443 [2024-11-28 11:50:00.302539] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1475600, cid 3, qid 0 00:19:30.443 [2024-11-28 11:50:00.302593] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:19:30.443 [2024-11-28 11:50:00.302600] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:19:30.443 [2024-11-28 11:50:00.302604] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:19:30.443 [2024-11-28 11:50:00.302608] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1475600) on tqpair=0x141da10 00:19:30.443 [2024-11-28 11:50:00.302619] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:19:30.443 [2024-11-28 11:50:00.302623] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:19:30.443 [2024-11-28 11:50:00.302627] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x141da10) 00:19:30.443 [2024-11-28 11:50:00.302634] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:30.443 [2024-11-28 11:50:00.302650] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1475600, cid 3, qid 0 00:19:30.443 [2024-11-28 11:50:00.302711] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:19:30.443 [2024-11-28 11:50:00.302717] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:19:30.443 [2024-11-28 11:50:00.302721] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:19:30.443 [2024-11-28 11:50:00.302725] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1475600) on tqpair=0x141da10 00:19:30.443 [2024-11-28 11:50:00.302760] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:19:30.443 [2024-11-28 11:50:00.302779] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:19:30.443 [2024-11-28 11:50:00.302782] 
nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x141da10) 00:19:30.443 [2024-11-28 11:50:00.302804] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:30.443 [2024-11-28 11:50:00.302820] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1475600, cid 3, qid 0 00:19:30.443 [2024-11-28 11:50:00.302880] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:19:30.443 [2024-11-28 11:50:00.302887] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:19:30.443 [2024-11-28 11:50:00.302890] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:19:30.443 [2024-11-28 11:50:00.302894] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1475600) on tqpair=0x141da10 00:19:30.443 [2024-11-28 11:50:00.302904] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:19:30.443 [2024-11-28 11:50:00.302908] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:19:30.443 [2024-11-28 11:50:00.302912] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x141da10) 00:19:30.443 [2024-11-28 11:50:00.302919] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:30.443 [2024-11-28 11:50:00.302934] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1475600, cid 3, qid 0 00:19:30.443 [2024-11-28 11:50:00.302989] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:19:30.443 [2024-11-28 11:50:00.302995] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:19:30.443 [2024-11-28 11:50:00.302999] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:19:30.443 [2024-11-28 11:50:00.303003] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1475600) on tqpair=0x141da10 00:19:30.443 [2024-11-28 11:50:00.303013] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:19:30.443 [2024-11-28 11:50:00.303017] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:19:30.443 [2024-11-28 11:50:00.303021] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x141da10) 00:19:30.443 [2024-11-28 11:50:00.303028] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:30.443 [2024-11-28 11:50:00.303043] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1475600, cid 3, qid 0 00:19:30.443 [2024-11-28 11:50:00.303092] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:19:30.443 [2024-11-28 11:50:00.303098] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:19:30.443 [2024-11-28 11:50:00.303102] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:19:30.443 [2024-11-28 11:50:00.303106] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1475600) on tqpair=0x141da10 00:19:30.443 [2024-11-28 11:50:00.303116] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:19:30.443 [2024-11-28 11:50:00.303120] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:19:30.443 [2024-11-28 11:50:00.303124] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x141da10) 00:19:30.443 [2024-11-28 11:50:00.303131] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC 
PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:30.443 [2024-11-28 11:50:00.303146] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1475600, cid 3, qid 0 00:19:30.443 [2024-11-28 11:50:00.303218] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:19:30.443 [2024-11-28 11:50:00.303224] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:19:30.443 [2024-11-28 11:50:00.303227] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:19:30.443 [2024-11-28 11:50:00.303231] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1475600) on tqpair=0x141da10 00:19:30.443 [2024-11-28 11:50:00.303241] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:19:30.443 [2024-11-28 11:50:00.303246] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:19:30.443 [2024-11-28 11:50:00.303249] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x141da10) 00:19:30.443 [2024-11-28 11:50:00.303256] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:30.443 [2024-11-28 11:50:00.303271] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1475600, cid 3, qid 0 00:19:30.443 [2024-11-28 11:50:00.303323] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:19:30.443 [2024-11-28 11:50:00.303330] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:19:30.443 [2024-11-28 11:50:00.303333] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:19:30.443 [2024-11-28 11:50:00.303337] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1475600) on tqpair=0x141da10 00:19:30.443 [2024-11-28 11:50:00.303347] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:19:30.443 [2024-11-28 11:50:00.307404] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:19:30.443 [2024-11-28 11:50:00.307426] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x141da10) 00:19:30.443 [2024-11-28 11:50:00.307435] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:30.444 [2024-11-28 11:50:00.307460] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1475600, cid 3, qid 0 00:19:30.444 [2024-11-28 11:50:00.307522] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:19:30.444 [2024-11-28 11:50:00.307529] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:19:30.444 [2024-11-28 11:50:00.307533] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:19:30.444 [2024-11-28 11:50:00.307537] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1475600) on tqpair=0x141da10 00:19:30.444 [2024-11-28 11:50:00.307545] nvme_ctrlr.c:1273:nvme_ctrlr_shutdown_poll_async: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] shutdown complete in 5 milliseconds 00:19:30.444 00:19:30.444 11:50:00 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@45 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.3 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' -L all 00:19:30.444 [2024-11-28 11:50:00.355828] Starting SPDK v25.01-pre git sha1 35cd3e84d / DPDK 24.11.0-rc4 initialization... 
00:19:30.444 [2024-11-28 11:50:00.355884] [ DPDK EAL parameters: identify --no-shconf -c 0x1 -n 1 -m 0 --no-pci --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid90645 ] 00:19:30.444 [2024-11-28 11:50:00.479896] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.11.0-rc4 is used. There is no support for it in SPDK. Enabled only for validation. 00:19:30.444 [2024-11-28 11:50:00.518083] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 0] setting state to connect adminq (no timeout) 00:19:30.444 [2024-11-28 11:50:00.518138] nvme_tcp.c:2238:nvme_tcp_qpair_connect_sock: *DEBUG*: adrfam 1 ai_family 2 00:19:30.444 [2024-11-28 11:50:00.518144] nvme_tcp.c:2242:nvme_tcp_qpair_connect_sock: *DEBUG*: trsvcid is 4420 00:19:30.444 [2024-11-28 11:50:00.518157] nvme_tcp.c:2263:nvme_tcp_qpair_connect_sock: *DEBUG*: sock_impl_name is (null) 00:19:30.444 [2024-11-28 11:50:00.518167] sock.c: 373:spdk_sock_connect_ext: *DEBUG*: Creating a client socket using impl posix 00:19:30.444 [2024-11-28 11:50:00.518533] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 0] setting state to wait for connect adminq (no timeout) 00:19:30.444 [2024-11-28 11:50:00.518588] nvme_tcp.c:1455:nvme_tcp_send_icreq_complete: *DEBUG*: Complete the icreq send for tqpair=0x1a51a10 0 00:19:30.444 [2024-11-28 11:50:00.532349] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 1 00:19:30.444 [2024-11-28 11:50:00.532370] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =1 00:19:30.444 [2024-11-28 11:50:00.532376] nvme_tcp.c:1501:nvme_tcp_icresp_handle: *DEBUG*: host_hdgst_enable: 0 00:19:30.444 [2024-11-28 11:50:00.532380] nvme_tcp.c:1502:nvme_tcp_icresp_handle: *DEBUG*: host_ddgst_enable: 0 00:19:30.444 [2024-11-28 11:50:00.532414] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:19:30.444 [2024-11-28 11:50:00.532420] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:19:30.444 [2024-11-28 11:50:00.532426] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1a51a10) 00:19:30.444 [2024-11-28 11:50:00.532438] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:0 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x400 00:19:30.444 [2024-11-28 11:50:00.532469] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1aa9180, cid 0, qid 0 00:19:30.444 [2024-11-28 11:50:00.539379] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:19:30.444 [2024-11-28 11:50:00.539400] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:19:30.444 [2024-11-28 11:50:00.539405] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:19:30.444 [2024-11-28 11:50:00.539410] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1aa9180) on tqpair=0x1a51a10 00:19:30.444 [2024-11-28 11:50:00.539419] nvme_fabric.c: 621:nvme_fabric_qpair_connect_poll: *DEBUG*: CNTLID 0x0001 00:19:30.444 [2024-11-28 11:50:00.539427] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to read vs (no timeout) 00:19:30.444 [2024-11-28 11:50:00.539433] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to read vs wait for vs (no timeout) 00:19:30.444 [2024-11-28 11:50:00.539450] nvme_tcp.c: 
732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:19:30.444 [2024-11-28 11:50:00.539455] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:19:30.444 [2024-11-28 11:50:00.539459] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1a51a10) 00:19:30.444 [2024-11-28 11:50:00.539468] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:30.444 [2024-11-28 11:50:00.539494] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1aa9180, cid 0, qid 0 00:19:30.444 [2024-11-28 11:50:00.539554] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:19:30.444 [2024-11-28 11:50:00.539561] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:19:30.444 [2024-11-28 11:50:00.539565] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:19:30.444 [2024-11-28 11:50:00.539569] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1aa9180) on tqpair=0x1a51a10 00:19:30.444 [2024-11-28 11:50:00.539574] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to read cap (no timeout) 00:19:30.444 [2024-11-28 11:50:00.539581] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to read cap wait for cap (no timeout) 00:19:30.444 [2024-11-28 11:50:00.539596] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:19:30.444 [2024-11-28 11:50:00.539600] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:19:30.444 [2024-11-28 11:50:00.539604] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1a51a10) 00:19:30.444 [2024-11-28 11:50:00.539611] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:30.444 [2024-11-28 11:50:00.539645] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1aa9180, cid 0, qid 0 00:19:30.444 [2024-11-28 11:50:00.539706] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:19:30.444 [2024-11-28 11:50:00.539713] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:19:30.444 [2024-11-28 11:50:00.539717] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:19:30.444 [2024-11-28 11:50:00.539721] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1aa9180) on tqpair=0x1a51a10 00:19:30.444 [2024-11-28 11:50:00.539741] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to check en (no timeout) 00:19:30.444 [2024-11-28 11:50:00.539749] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to check en wait for cc (timeout 15000 ms) 00:19:30.444 [2024-11-28 11:50:00.539756] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:19:30.444 [2024-11-28 11:50:00.539760] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:19:30.444 [2024-11-28 11:50:00.539763] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1a51a10) 00:19:30.444 [2024-11-28 11:50:00.539770] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:30.444 [2024-11-28 11:50:00.539788] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1aa9180, cid 0, qid 0 00:19:30.444 [2024-11-28 
11:50:00.539861] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:19:30.444 [2024-11-28 11:50:00.539867] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:19:30.444 [2024-11-28 11:50:00.539871] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:19:30.444 [2024-11-28 11:50:00.539875] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1aa9180) on tqpair=0x1a51a10 00:19:30.444 [2024-11-28 11:50:00.539880] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to disable and wait for CSTS.RDY = 0 (timeout 15000 ms) 00:19:30.444 [2024-11-28 11:50:00.539890] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:19:30.444 [2024-11-28 11:50:00.539895] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:19:30.444 [2024-11-28 11:50:00.539898] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1a51a10) 00:19:30.445 [2024-11-28 11:50:00.539905] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:30.445 [2024-11-28 11:50:00.539922] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1aa9180, cid 0, qid 0 00:19:30.445 [2024-11-28 11:50:00.539990] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:19:30.445 [2024-11-28 11:50:00.539996] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:19:30.445 [2024-11-28 11:50:00.540000] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:19:30.445 [2024-11-28 11:50:00.540003] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1aa9180) on tqpair=0x1a51a10 00:19:30.445 [2024-11-28 11:50:00.540009] nvme_ctrlr.c:3906:nvme_ctrlr_process_init_wait_for_ready_0: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] CC.EN = 0 && CSTS.RDY = 0 00:19:30.445 [2024-11-28 11:50:00.540028] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to controller is disabled (timeout 15000 ms) 00:19:30.445 [2024-11-28 11:50:00.540035] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to enable controller by writing CC.EN = 1 (timeout 15000 ms) 00:19:30.445 [2024-11-28 11:50:00.540141] nvme_ctrlr.c:4104:nvme_ctrlr_process_init: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] Setting CC.EN = 1 00:19:30.445 [2024-11-28 11:50:00.540146] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to enable controller by writing CC.EN = 1 reg (timeout 15000 ms) 00:19:30.445 [2024-11-28 11:50:00.540155] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:19:30.445 [2024-11-28 11:50:00.540159] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:19:30.445 [2024-11-28 11:50:00.540162] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1a51a10) 00:19:30.445 [2024-11-28 11:50:00.540169] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY SET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:30.445 [2024-11-28 11:50:00.540187] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1aa9180, cid 0, qid 0 00:19:30.445 [2024-11-28 11:50:00.540244] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:19:30.445 [2024-11-28 11:50:00.540250] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:19:30.445 [2024-11-28 11:50:00.540254] 
nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:19:30.445 [2024-11-28 11:50:00.540257] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1aa9180) on tqpair=0x1a51a10 00:19:30.445 [2024-11-28 11:50:00.540262] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to wait for CSTS.RDY = 1 (timeout 15000 ms) 00:19:30.445 [2024-11-28 11:50:00.540272] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:19:30.445 [2024-11-28 11:50:00.540276] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:19:30.445 [2024-11-28 11:50:00.540279] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1a51a10) 00:19:30.445 [2024-11-28 11:50:00.540286] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:30.445 [2024-11-28 11:50:00.540303] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1aa9180, cid 0, qid 0 00:19:30.445 [2024-11-28 11:50:00.540371] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:19:30.445 [2024-11-28 11:50:00.540379] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:19:30.445 [2024-11-28 11:50:00.540383] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:19:30.445 [2024-11-28 11:50:00.540387] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1aa9180) on tqpair=0x1a51a10 00:19:30.445 [2024-11-28 11:50:00.540391] nvme_ctrlr.c:3941:nvme_ctrlr_process_init_enable_wait_for_ready_1: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] CC.EN = 1 && CSTS.RDY = 1 - controller is ready 00:19:30.445 [2024-11-28 11:50:00.540396] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to reset admin queue (timeout 30000 ms) 00:19:30.445 [2024-11-28 11:50:00.540404] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to identify controller (no timeout) 00:19:30.445 [2024-11-28 11:50:00.540414] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to wait for identify controller (timeout 30000 ms) 00:19:30.445 [2024-11-28 11:50:00.540424] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:19:30.445 [2024-11-28 11:50:00.540428] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1a51a10) 00:19:30.445 [2024-11-28 11:50:00.540435] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:0 nsid:0 cdw10:00000001 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:30.445 [2024-11-28 11:50:00.540455] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1aa9180, cid 0, qid 0 00:19:30.445 [2024-11-28 11:50:00.540560] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:19:30.445 [2024-11-28 11:50:00.540567] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:19:30.445 [2024-11-28 11:50:00.540570] nvme_tcp.c:1619:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:19:30.445 [2024-11-28 11:50:00.540574] nvme_tcp.c:1620:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x1a51a10): datao=0, datal=4096, cccid=0 00:19:30.445 [2024-11-28 11:50:00.540578] nvme_tcp.c:1631:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x1aa9180) on tqpair(0x1a51a10): expected_datao=0, payload_size=4096 00:19:30.445 [2024-11-28 11:50:00.540583] nvme_tcp.c: 
732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:19:30.445 [2024-11-28 11:50:00.540590] nvme_tcp.c:1421:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:19:30.445 [2024-11-28 11:50:00.540594] nvme_tcp.c:1255:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:19:30.445 [2024-11-28 11:50:00.540602] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:19:30.445 [2024-11-28 11:50:00.540608] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:19:30.445 [2024-11-28 11:50:00.540611] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:19:30.445 [2024-11-28 11:50:00.540615] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1aa9180) on tqpair=0x1a51a10 00:19:30.445 [2024-11-28 11:50:00.540623] nvme_ctrlr.c:2081:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] transport max_xfer_size 4294967295 00:19:30.445 [2024-11-28 11:50:00.540628] nvme_ctrlr.c:2085:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] MDTS max_xfer_size 131072 00:19:30.445 [2024-11-28 11:50:00.540648] nvme_ctrlr.c:2088:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] CNTLID 0x0001 00:19:30.445 [2024-11-28 11:50:00.540659] nvme_ctrlr.c:2112:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] transport max_sges 16 00:19:30.445 [2024-11-28 11:50:00.540664] nvme_ctrlr.c:2127:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] fuses compare and write: 1 00:19:30.445 [2024-11-28 11:50:00.540669] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to configure AER (timeout 30000 ms) 00:19:30.445 [2024-11-28 11:50:00.540678] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to wait for configure aer (timeout 30000 ms) 00:19:30.445 [2024-11-28 11:50:00.540702] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:19:30.445 [2024-11-28 11:50:00.540706] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:19:30.445 [2024-11-28 11:50:00.540710] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1a51a10) 00:19:30.445 [2024-11-28 11:50:00.540717] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES ASYNC EVENT CONFIGURATION cid:0 cdw10:0000000b SGL DATA BLOCK OFFSET 0x0 len:0x0 00:19:30.445 [2024-11-28 11:50:00.540738] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1aa9180, cid 0, qid 0 00:19:30.445 [2024-11-28 11:50:00.540803] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:19:30.445 [2024-11-28 11:50:00.540810] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:19:30.445 [2024-11-28 11:50:00.540813] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:19:30.445 [2024-11-28 11:50:00.540817] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1aa9180) on tqpair=0x1a51a10 00:19:30.445 [2024-11-28 11:50:00.540825] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:19:30.445 [2024-11-28 11:50:00.540829] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:19:30.445 [2024-11-28 11:50:00.540833] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1a51a10) 00:19:30.445 [2024-11-28 11:50:00.540840] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:19:30.445 [2024-11-28 11:50:00.540846] nvme_tcp.c: 
732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:19:30.445 [2024-11-28 11:50:00.540850] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:19:30.445 [2024-11-28 11:50:00.540853] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=1 on tqpair(0x1a51a10) 00:19:30.445 [2024-11-28 11:50:00.540859] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:19:30.445 [2024-11-28 11:50:00.540865] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:19:30.445 [2024-11-28 11:50:00.540869] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:19:30.445 [2024-11-28 11:50:00.540872] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=2 on tqpair(0x1a51a10) 00:19:30.445 [2024-11-28 11:50:00.540878] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:19:30.445 [2024-11-28 11:50:00.540883] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:19:30.445 [2024-11-28 11:50:00.540887] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:19:30.445 [2024-11-28 11:50:00.540890] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1a51a10) 00:19:30.445 [2024-11-28 11:50:00.540896] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:19:30.445 [2024-11-28 11:50:00.540900] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to set keep alive timeout (timeout 30000 ms) 00:19:30.445 [2024-11-28 11:50:00.540909] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to wait for set keep alive timeout (timeout 30000 ms) 00:19:30.445 [2024-11-28 11:50:00.540915] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:19:30.445 [2024-11-28 11:50:00.540919] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x1a51a10) 00:19:30.445 [2024-11-28 11:50:00.540926] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES KEEP ALIVE TIMER cid:4 cdw10:0000000f SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:30.445 [2024-11-28 11:50:00.540951] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1aa9180, cid 0, qid 0 00:19:30.445 [2024-11-28 11:50:00.540959] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1aa9300, cid 1, qid 0 00:19:30.445 [2024-11-28 11:50:00.540963] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1aa9480, cid 2, qid 0 00:19:30.445 [2024-11-28 11:50:00.540968] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1aa9600, cid 3, qid 0 00:19:30.445 [2024-11-28 11:50:00.540973] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1aa9780, cid 4, qid 0 00:19:30.445 [2024-11-28 11:50:00.541088] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:19:30.445 [2024-11-28 11:50:00.541095] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:19:30.445 [2024-11-28 11:50:00.541098] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:19:30.445 [2024-11-28 11:50:00.541102] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1aa9780) on tqpair=0x1a51a10 00:19:30.446 [2024-11-28 11:50:00.541108] 
nvme_ctrlr.c:3059:nvme_ctrlr_set_keep_alive_timeout_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] Sending keep alive every 5000000 us 00:19:30.446 [2024-11-28 11:50:00.541113] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to identify controller iocs specific (timeout 30000 ms) 00:19:30.446 [2024-11-28 11:50:00.541133] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to set number of queues (timeout 30000 ms) 00:19:30.446 [2024-11-28 11:50:00.541141] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to wait for set number of queues (timeout 30000 ms) 00:19:30.446 [2024-11-28 11:50:00.541147] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:19:30.446 [2024-11-28 11:50:00.541152] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:19:30.446 [2024-11-28 11:50:00.541155] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x1a51a10) 00:19:30.446 [2024-11-28 11:50:00.541162] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES NUMBER OF QUEUES cid:4 cdw10:00000007 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:19:30.446 [2024-11-28 11:50:00.541179] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1aa9780, cid 4, qid 0 00:19:30.446 [2024-11-28 11:50:00.541260] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:19:30.446 [2024-11-28 11:50:00.541266] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:19:30.446 [2024-11-28 11:50:00.541270] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:19:30.446 [2024-11-28 11:50:00.541273] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1aa9780) on tqpair=0x1a51a10 00:19:30.446 [2024-11-28 11:50:00.541346] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to identify active ns (timeout 30000 ms) 00:19:30.446 [2024-11-28 11:50:00.541358] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to wait for identify active ns (timeout 30000 ms) 00:19:30.446 [2024-11-28 11:50:00.541366] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:19:30.446 [2024-11-28 11:50:00.541370] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x1a51a10) 00:19:30.446 [2024-11-28 11:50:00.541377] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:0 cdw10:00000002 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:30.446 [2024-11-28 11:50:00.541411] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1aa9780, cid 4, qid 0 00:19:30.446 [2024-11-28 11:50:00.541494] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:19:30.446 [2024-11-28 11:50:00.541501] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:19:30.446 [2024-11-28 11:50:00.541505] nvme_tcp.c:1619:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:19:30.446 [2024-11-28 11:50:00.541508] nvme_tcp.c:1620:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x1a51a10): datao=0, datal=4096, cccid=4 00:19:30.446 [2024-11-28 11:50:00.541513] nvme_tcp.c:1631:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x1aa9780) on tqpair(0x1a51a10): expected_datao=0, payload_size=4096 00:19:30.446 [2024-11-28 11:50:00.541517] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:19:30.446 
[2024-11-28 11:50:00.541524] nvme_tcp.c:1421:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:19:30.446 [2024-11-28 11:50:00.541528] nvme_tcp.c:1255:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:19:30.446 [2024-11-28 11:50:00.541536] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:19:30.446 [2024-11-28 11:50:00.541541] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:19:30.446 [2024-11-28 11:50:00.541545] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:19:30.446 [2024-11-28 11:50:00.541549] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1aa9780) on tqpair=0x1a51a10 00:19:30.446 [2024-11-28 11:50:00.541559] nvme_ctrlr.c:4735:spdk_nvme_ctrlr_get_ns: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] Namespace 1 was added 00:19:30.446 [2024-11-28 11:50:00.541572] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to identify ns (timeout 30000 ms) 00:19:30.446 [2024-11-28 11:50:00.541583] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to wait for identify ns (timeout 30000 ms) 00:19:30.446 [2024-11-28 11:50:00.541591] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:19:30.446 [2024-11-28 11:50:00.541595] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x1a51a10) 00:19:30.446 [2024-11-28 11:50:00.541602] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:1 cdw10:00000000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:30.446 [2024-11-28 11:50:00.541651] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1aa9780, cid 4, qid 0 00:19:30.446 [2024-11-28 11:50:00.541743] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:19:30.446 [2024-11-28 11:50:00.541750] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:19:30.446 [2024-11-28 11:50:00.541769] nvme_tcp.c:1619:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:19:30.446 [2024-11-28 11:50:00.541772] nvme_tcp.c:1620:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x1a51a10): datao=0, datal=4096, cccid=4 00:19:30.446 [2024-11-28 11:50:00.541777] nvme_tcp.c:1631:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x1aa9780) on tqpair(0x1a51a10): expected_datao=0, payload_size=4096 00:19:30.446 [2024-11-28 11:50:00.541781] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:19:30.446 [2024-11-28 11:50:00.541788] nvme_tcp.c:1421:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:19:30.446 [2024-11-28 11:50:00.541793] nvme_tcp.c:1255:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:19:30.446 [2024-11-28 11:50:00.541801] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:19:30.446 [2024-11-28 11:50:00.541806] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:19:30.446 [2024-11-28 11:50:00.541809] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:19:30.446 [2024-11-28 11:50:00.541813] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1aa9780) on tqpair=0x1a51a10 00:19:30.446 [2024-11-28 11:50:00.541830] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to identify namespace id descriptors (timeout 30000 ms) 00:19:30.446 [2024-11-28 11:50:00.541841] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to wait for identify namespace id descriptors (timeout 
30000 ms) 00:19:30.446 [2024-11-28 11:50:00.541849] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:19:30.446 [2024-11-28 11:50:00.541853] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x1a51a10) 00:19:30.446 [2024-11-28 11:50:00.541860] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:1 cdw10:00000003 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:30.446 [2024-11-28 11:50:00.541879] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1aa9780, cid 4, qid 0 00:19:30.446 [2024-11-28 11:50:00.541946] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:19:30.446 [2024-11-28 11:50:00.541952] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:19:30.446 [2024-11-28 11:50:00.541956] nvme_tcp.c:1619:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:19:30.446 [2024-11-28 11:50:00.541959] nvme_tcp.c:1620:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x1a51a10): datao=0, datal=4096, cccid=4 00:19:30.446 [2024-11-28 11:50:00.541965] nvme_tcp.c:1631:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x1aa9780) on tqpair(0x1a51a10): expected_datao=0, payload_size=4096 00:19:30.446 [2024-11-28 11:50:00.541969] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:19:30.446 [2024-11-28 11:50:00.541976] nvme_tcp.c:1421:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:19:30.446 [2024-11-28 11:50:00.541979] nvme_tcp.c:1255:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:19:30.446 [2024-11-28 11:50:00.541987] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:19:30.446 [2024-11-28 11:50:00.541993] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:19:30.446 [2024-11-28 11:50:00.541996] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:19:30.446 [2024-11-28 11:50:00.542000] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1aa9780) on tqpair=0x1a51a10 00:19:30.446 [2024-11-28 11:50:00.542023] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to identify ns iocs specific (timeout 30000 ms) 00:19:30.446 [2024-11-28 11:50:00.542031] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to set supported log pages (timeout 30000 ms) 00:19:30.446 [2024-11-28 11:50:00.542041] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to set supported features (timeout 30000 ms) 00:19:30.446 [2024-11-28 11:50:00.542048] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to set host behavior support feature (timeout 30000 ms) 00:19:30.446 [2024-11-28 11:50:00.542053] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to set doorbell buffer config (timeout 30000 ms) 00:19:30.446 [2024-11-28 11:50:00.542058] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to set host ID (timeout 30000 ms) 00:19:30.446 [2024-11-28 11:50:00.542064] nvme_ctrlr.c:3147:nvme_ctrlr_set_host_id: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] NVMe-oF transport - not sending Set Features - Host ID 00:19:30.446 [2024-11-28 11:50:00.542068] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to transport ready (timeout 30000 ms) 00:19:30.446 [2024-11-28 11:50:00.542073] 
nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to ready (no timeout) 00:19:30.446 [2024-11-28 11:50:00.542090] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:19:30.446 [2024-11-28 11:50:00.542094] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x1a51a10) 00:19:30.446 [2024-11-28 11:50:00.542101] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES ARBITRATION cid:4 cdw10:00000001 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:30.446 [2024-11-28 11:50:00.542123] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:19:30.446 [2024-11-28 11:50:00.542127] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:19:30.446 [2024-11-28 11:50:00.542130] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0x1a51a10) 00:19:30.446 [2024-11-28 11:50:00.542136] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: KEEP ALIVE (18) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 00:19:30.446 [2024-11-28 11:50:00.542158] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1aa9780, cid 4, qid 0 00:19:30.446 [2024-11-28 11:50:00.542165] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1aa9900, cid 5, qid 0 00:19:30.446 [2024-11-28 11:50:00.542239] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:19:30.446 [2024-11-28 11:50:00.542246] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:19:30.447 [2024-11-28 11:50:00.542249] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:19:30.447 [2024-11-28 11:50:00.542253] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1aa9780) on tqpair=0x1a51a10 00:19:30.447 [2024-11-28 11:50:00.542259] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:19:30.447 [2024-11-28 11:50:00.542265] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:19:30.447 [2024-11-28 11:50:00.542268] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:19:30.447 [2024-11-28 11:50:00.542271] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1aa9900) on tqpair=0x1a51a10 00:19:30.447 [2024-11-28 11:50:00.542280] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:19:30.447 [2024-11-28 11:50:00.542284] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0x1a51a10) 00:19:30.447 [2024-11-28 11:50:00.542291] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES POWER MANAGEMENT cid:5 cdw10:00000002 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:30.447 [2024-11-28 11:50:00.542324] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1aa9900, cid 5, qid 0 00:19:30.447 [2024-11-28 11:50:00.542397] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:19:30.447 [2024-11-28 11:50:00.542403] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:19:30.447 [2024-11-28 11:50:00.542407] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:19:30.447 [2024-11-28 11:50:00.542427] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1aa9900) on tqpair=0x1a51a10 00:19:30.447 [2024-11-28 11:50:00.542476] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:19:30.447 [2024-11-28 11:50:00.542481] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0x1a51a10) 00:19:30.447 [2024-11-28 
11:50:00.542489] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES TEMPERATURE THRESHOLD cid:5 cdw10:00000004 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:30.447 [2024-11-28 11:50:00.542509] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1aa9900, cid 5, qid 0 00:19:30.447 [2024-11-28 11:50:00.542578] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:19:30.447 [2024-11-28 11:50:00.542584] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:19:30.447 [2024-11-28 11:50:00.542588] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:19:30.447 [2024-11-28 11:50:00.542592] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1aa9900) on tqpair=0x1a51a10 00:19:30.447 [2024-11-28 11:50:00.542602] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:19:30.447 [2024-11-28 11:50:00.542606] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0x1a51a10) 00:19:30.447 [2024-11-28 11:50:00.542613] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES NUMBER OF QUEUES cid:5 cdw10:00000007 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:30.447 [2024-11-28 11:50:00.542630] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1aa9900, cid 5, qid 0 00:19:30.447 [2024-11-28 11:50:00.542714] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:19:30.447 [2024-11-28 11:50:00.542720] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:19:30.447 [2024-11-28 11:50:00.542724] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:19:30.447 [2024-11-28 11:50:00.542727] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1aa9900) on tqpair=0x1a51a10 00:19:30.447 [2024-11-28 11:50:00.542745] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:19:30.447 [2024-11-28 11:50:00.542750] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0x1a51a10) 00:19:30.447 [2024-11-28 11:50:00.542757] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:5 nsid:ffffffff cdw10:07ff0001 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:30.447 [2024-11-28 11:50:00.542764] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:19:30.447 [2024-11-28 11:50:00.542768] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x1a51a10) 00:19:30.447 [2024-11-28 11:50:00.542788] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:ffffffff cdw10:007f0002 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:30.447 [2024-11-28 11:50:00.542795] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:19:30.447 [2024-11-28 11:50:00.542799] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=6 on tqpair(0x1a51a10) 00:19:30.447 [2024-11-28 11:50:00.542804] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:6 nsid:ffffffff cdw10:007f0003 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:30.447 [2024-11-28 11:50:00.542811] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:19:30.447 [2024-11-28 11:50:00.542815] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=7 on tqpair(0x1a51a10) 00:19:30.447 [2024-11-28 11:50:00.542821] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:7 
nsid:ffffffff cdw10:03ff0005 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:30.447 [2024-11-28 11:50:00.542840] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1aa9900, cid 5, qid 0 00:19:30.447 [2024-11-28 11:50:00.542847] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1aa9780, cid 4, qid 0 00:19:30.447 [2024-11-28 11:50:00.542851] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1aa9a80, cid 6, qid 0 00:19:30.447 [2024-11-28 11:50:00.542855] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1aa9c00, cid 7, qid 0 00:19:30.447 [2024-11-28 11:50:00.543045] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:19:30.447 [2024-11-28 11:50:00.543059] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:19:30.447 [2024-11-28 11:50:00.543063] nvme_tcp.c:1619:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:19:30.447 [2024-11-28 11:50:00.543067] nvme_tcp.c:1620:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x1a51a10): datao=0, datal=8192, cccid=5 00:19:30.447 [2024-11-28 11:50:00.543071] nvme_tcp.c:1631:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x1aa9900) on tqpair(0x1a51a10): expected_datao=0, payload_size=8192 00:19:30.447 [2024-11-28 11:50:00.543075] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:19:30.447 [2024-11-28 11:50:00.543091] nvme_tcp.c:1421:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:19:30.447 [2024-11-28 11:50:00.543096] nvme_tcp.c:1255:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:19:30.447 [2024-11-28 11:50:00.543101] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:19:30.447 [2024-11-28 11:50:00.543106] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:19:30.447 [2024-11-28 11:50:00.543109] nvme_tcp.c:1619:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:19:30.447 [2024-11-28 11:50:00.543113] nvme_tcp.c:1620:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x1a51a10): datao=0, datal=512, cccid=4 00:19:30.447 [2024-11-28 11:50:00.543117] nvme_tcp.c:1631:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x1aa9780) on tqpair(0x1a51a10): expected_datao=0, payload_size=512 00:19:30.447 [2024-11-28 11:50:00.543121] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:19:30.447 [2024-11-28 11:50:00.543126] nvme_tcp.c:1421:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:19:30.447 [2024-11-28 11:50:00.543129] nvme_tcp.c:1255:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:19:30.447 [2024-11-28 11:50:00.543134] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:19:30.447 [2024-11-28 11:50:00.543139] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:19:30.447 [2024-11-28 11:50:00.543142] nvme_tcp.c:1619:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:19:30.447 [2024-11-28 11:50:00.543145] nvme_tcp.c:1620:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x1a51a10): datao=0, datal=512, cccid=6 00:19:30.447 [2024-11-28 11:50:00.543149] nvme_tcp.c:1631:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x1aa9a80) on tqpair(0x1a51a10): expected_datao=0, payload_size=512 00:19:30.447 [2024-11-28 11:50:00.543153] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:19:30.447 [2024-11-28 11:50:00.543158] nvme_tcp.c:1421:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:19:30.447 [2024-11-28 11:50:00.543161] nvme_tcp.c:1255:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:19:30.447 [2024-11-28 11:50:00.543166] 
nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:19:30.447 [2024-11-28 11:50:00.543171] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:19:30.447 [2024-11-28 11:50:00.543174] nvme_tcp.c:1619:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:19:30.447 [2024-11-28 11:50:00.543177] nvme_tcp.c:1620:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x1a51a10): datao=0, datal=4096, cccid=7 00:19:30.447 [2024-11-28 11:50:00.543181] nvme_tcp.c:1631:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x1aa9c00) on tqpair(0x1a51a10): expected_datao=0, payload_size=4096 00:19:30.447 [2024-11-28 11:50:00.543185] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:19:30.447 [2024-11-28 11:50:00.543190] nvme_tcp.c:1421:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:19:30.447 [2024-11-28 11:50:00.543194] nvme_tcp.c:1255:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:19:30.447 [2024-11-28 11:50:00.543201] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:19:30.447 [2024-11-28 11:50:00.543206] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:19:30.447 [2024-11-28 11:50:00.543209] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:19:30.447 [2024-11-28 11:50:00.543213] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1aa9900) on tqpair=0x1a51a10 00:19:30.447 [2024-11-28 11:50:00.543242] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:19:30.447 [2024-11-28 11:50:00.543248] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:19:30.447 [2024-11-28 11:50:00.543251] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:19:30.447 [2024-11-28 11:50:00.543256] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1aa9780) on tqpair=0x1a51a10 00:19:30.447 [2024-11-28 11:50:00.543267] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:19:30.447 [2024-11-28 11:50:00.543273] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:19:30.447 [2024-11-28 11:50:00.543276] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:19:30.447 [2024-11-28 11:50:00.543280] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1aa9a80) on tqpair=0x1a51a10 00:19:30.447 [2024-11-28 11:50:00.543287] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:19:30.447 [2024-11-28 11:50:00.543292] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:19:30.447 [2024-11-28 11:50:00.543295] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:19:30.447 [2024-11-28 11:50:00.543299] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1aa9c00) on tqpair=0x1a51a10 00:19:30.447 ===================================================== 00:19:30.447 NVMe over Fabrics controller at 10.0.0.3:4420: nqn.2016-06.io.spdk:cnode1 00:19:30.447 ===================================================== 00:19:30.447 Controller Capabilities/Features 00:19:30.447 ================================ 00:19:30.447 Vendor ID: 8086 00:19:30.447 Subsystem Vendor ID: 8086 00:19:30.447 Serial Number: SPDK00000000000001 00:19:30.447 Model Number: SPDK bdev Controller 00:19:30.447 Firmware Version: 25.01 00:19:30.447 Recommended Arb Burst: 6 00:19:30.448 IEEE OUI Identifier: e4 d2 5c 00:19:30.448 Multi-path I/O 00:19:30.448 May have multiple subsystem ports: Yes 00:19:30.448 May have multiple controllers: Yes 00:19:30.448 Associated with SR-IOV VF: No 00:19:30.448 Max Data 
Transfer Size: 131072 00:19:30.448 Max Number of Namespaces: 32 00:19:30.448 Max Number of I/O Queues: 127 00:19:30.448 NVMe Specification Version (VS): 1.3 00:19:30.448 NVMe Specification Version (Identify): 1.3 00:19:30.448 Maximum Queue Entries: 128 00:19:30.448 Contiguous Queues Required: Yes 00:19:30.448 Arbitration Mechanisms Supported 00:19:30.448 Weighted Round Robin: Not Supported 00:19:30.448 Vendor Specific: Not Supported 00:19:30.448 Reset Timeout: 15000 ms 00:19:30.448 Doorbell Stride: 4 bytes 00:19:30.448 NVM Subsystem Reset: Not Supported 00:19:30.448 Command Sets Supported 00:19:30.448 NVM Command Set: Supported 00:19:30.448 Boot Partition: Not Supported 00:19:30.448 Memory Page Size Minimum: 4096 bytes 00:19:30.448 Memory Page Size Maximum: 4096 bytes 00:19:30.448 Persistent Memory Region: Not Supported 00:19:30.448 Optional Asynchronous Events Supported 00:19:30.448 Namespace Attribute Notices: Supported 00:19:30.448 Firmware Activation Notices: Not Supported 00:19:30.448 ANA Change Notices: Not Supported 00:19:30.448 PLE Aggregate Log Change Notices: Not Supported 00:19:30.448 LBA Status Info Alert Notices: Not Supported 00:19:30.448 EGE Aggregate Log Change Notices: Not Supported 00:19:30.448 Normal NVM Subsystem Shutdown event: Not Supported 00:19:30.448 Zone Descriptor Change Notices: Not Supported 00:19:30.448 Discovery Log Change Notices: Not Supported 00:19:30.448 Controller Attributes 00:19:30.448 128-bit Host Identifier: Supported 00:19:30.448 Non-Operational Permissive Mode: Not Supported 00:19:30.448 NVM Sets: Not Supported 00:19:30.448 Read Recovery Levels: Not Supported 00:19:30.448 Endurance Groups: Not Supported 00:19:30.448 Predictable Latency Mode: Not Supported 00:19:30.448 Traffic Based Keep ALive: Not Supported 00:19:30.448 Namespace Granularity: Not Supported 00:19:30.448 SQ Associations: Not Supported 00:19:30.448 UUID List: Not Supported 00:19:30.448 Multi-Domain Subsystem: Not Supported 00:19:30.448 Fixed Capacity Management: Not Supported 00:19:30.448 Variable Capacity Management: Not Supported 00:19:30.448 Delete Endurance Group: Not Supported 00:19:30.448 Delete NVM Set: Not Supported 00:19:30.448 Extended LBA Formats Supported: Not Supported 00:19:30.448 Flexible Data Placement Supported: Not Supported 00:19:30.448 00:19:30.448 Controller Memory Buffer Support 00:19:30.448 ================================ 00:19:30.448 Supported: No 00:19:30.448 00:19:30.448 Persistent Memory Region Support 00:19:30.448 ================================ 00:19:30.448 Supported: No 00:19:30.448 00:19:30.448 Admin Command Set Attributes 00:19:30.448 ============================ 00:19:30.448 Security Send/Receive: Not Supported 00:19:30.448 Format NVM: Not Supported 00:19:30.448 Firmware Activate/Download: Not Supported 00:19:30.448 Namespace Management: Not Supported 00:19:30.448 Device Self-Test: Not Supported 00:19:30.448 Directives: Not Supported 00:19:30.448 NVMe-MI: Not Supported 00:19:30.448 Virtualization Management: Not Supported 00:19:30.448 Doorbell Buffer Config: Not Supported 00:19:30.448 Get LBA Status Capability: Not Supported 00:19:30.448 Command & Feature Lockdown Capability: Not Supported 00:19:30.448 Abort Command Limit: 4 00:19:30.448 Async Event Request Limit: 4 00:19:30.448 Number of Firmware Slots: N/A 00:19:30.448 Firmware Slot 1 Read-Only: N/A 00:19:30.448 Firmware Activation Without Reset: N/A 00:19:30.448 Multiple Update Detection Support: N/A 00:19:30.448 Firmware Update Granularity: No Information Provided 00:19:30.448 Per-Namespace SMART 
Log: No 00:19:30.448 Asymmetric Namespace Access Log Page: Not Supported 00:19:30.448 Subsystem NQN: nqn.2016-06.io.spdk:cnode1 00:19:30.448 Command Effects Log Page: Supported 00:19:30.448 Get Log Page Extended Data: Supported 00:19:30.448 Telemetry Log Pages: Not Supported 00:19:30.448 Persistent Event Log Pages: Not Supported 00:19:30.448 Supported Log Pages Log Page: May Support 00:19:30.448 Commands Supported & Effects Log Page: Not Supported 00:19:30.448 Feature Identifiers & Effects Log Page:May Support 00:19:30.448 NVMe-MI Commands & Effects Log Page: May Support 00:19:30.448 Data Area 4 for Telemetry Log: Not Supported 00:19:30.448 Error Log Page Entries Supported: 128 00:19:30.448 Keep Alive: Supported 00:19:30.448 Keep Alive Granularity: 10000 ms 00:19:30.448 00:19:30.448 NVM Command Set Attributes 00:19:30.448 ========================== 00:19:30.448 Submission Queue Entry Size 00:19:30.448 Max: 64 00:19:30.448 Min: 64 00:19:30.448 Completion Queue Entry Size 00:19:30.448 Max: 16 00:19:30.448 Min: 16 00:19:30.448 Number of Namespaces: 32 00:19:30.448 Compare Command: Supported 00:19:30.448 Write Uncorrectable Command: Not Supported 00:19:30.448 Dataset Management Command: Supported 00:19:30.448 Write Zeroes Command: Supported 00:19:30.448 Set Features Save Field: Not Supported 00:19:30.448 Reservations: Supported 00:19:30.448 Timestamp: Not Supported 00:19:30.448 Copy: Supported 00:19:30.448 Volatile Write Cache: Present 00:19:30.448 Atomic Write Unit (Normal): 1 00:19:30.448 Atomic Write Unit (PFail): 1 00:19:30.448 Atomic Compare & Write Unit: 1 00:19:30.448 Fused Compare & Write: Supported 00:19:30.448 Scatter-Gather List 00:19:30.448 SGL Command Set: Supported 00:19:30.448 SGL Keyed: Supported 00:19:30.448 SGL Bit Bucket Descriptor: Not Supported 00:19:30.448 SGL Metadata Pointer: Not Supported 00:19:30.448 Oversized SGL: Not Supported 00:19:30.448 SGL Metadata Address: Not Supported 00:19:30.448 SGL Offset: Supported 00:19:30.448 Transport SGL Data Block: Not Supported 00:19:30.448 Replay Protected Memory Block: Not Supported 00:19:30.448 00:19:30.448 Firmware Slot Information 00:19:30.448 ========================= 00:19:30.448 Active slot: 1 00:19:30.448 Slot 1 Firmware Revision: 25.01 00:19:30.448 00:19:30.448 00:19:30.448 Commands Supported and Effects 00:19:30.448 ============================== 00:19:30.448 Admin Commands 00:19:30.448 -------------- 00:19:30.448 Get Log Page (02h): Supported 00:19:30.448 Identify (06h): Supported 00:19:30.448 Abort (08h): Supported 00:19:30.448 Set Features (09h): Supported 00:19:30.448 Get Features (0Ah): Supported 00:19:30.448 Asynchronous Event Request (0Ch): Supported 00:19:30.448 Keep Alive (18h): Supported 00:19:30.448 I/O Commands 00:19:30.448 ------------ 00:19:30.448 Flush (00h): Supported LBA-Change 00:19:30.448 Write (01h): Supported LBA-Change 00:19:30.448 Read (02h): Supported 00:19:30.448 Compare (05h): Supported 00:19:30.448 Write Zeroes (08h): Supported LBA-Change 00:19:30.448 Dataset Management (09h): Supported LBA-Change 00:19:30.448 Copy (19h): Supported LBA-Change 00:19:30.448 00:19:30.448 Error Log 00:19:30.448 ========= 00:19:30.448 00:19:30.448 Arbitration 00:19:30.448 =========== 00:19:30.448 Arbitration Burst: 1 00:19:30.448 00:19:30.448 Power Management 00:19:30.448 ================ 00:19:30.448 Number of Power States: 1 00:19:30.448 Current Power State: Power State #0 00:19:30.448 Power State #0: 00:19:30.448 Max Power: 0.00 W 00:19:30.448 Non-Operational State: Operational 00:19:30.448 Entry Latency: Not 
Reported 00:19:30.448 Exit Latency: Not Reported 00:19:30.448 Relative Read Throughput: 0 00:19:30.448 Relative Read Latency: 0 00:19:30.448 Relative Write Throughput: 0 00:19:30.448 Relative Write Latency: 0 00:19:30.448 Idle Power: Not Reported 00:19:30.448 Active Power: Not Reported 00:19:30.448 Non-Operational Permissive Mode: Not Supported 00:19:30.449 00:19:30.449 Health Information 00:19:30.449 ================== 00:19:30.449 Critical Warnings: 00:19:30.449 Available Spare Space: OK 00:19:30.449 Temperature: OK 00:19:30.449 Device Reliability: OK 00:19:30.449 Read Only: No 00:19:30.449 Volatile Memory Backup: OK 00:19:30.449 Current Temperature: 0 Kelvin (-273 Celsius) 00:19:30.449 Temperature Threshold: 0 Kelvin (-273 Celsius) 00:19:30.449 Available Spare: 0% 00:19:30.449 Available Spare Threshold: 0% 00:19:30.449 Life Percentage Used:[2024-11-28 11:50:00.546529] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:19:30.449 [2024-11-28 11:50:00.546540] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=7 on tqpair(0x1a51a10) 00:19:30.449 [2024-11-28 11:50:00.546549] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES ERROR_RECOVERY cid:7 cdw10:00000005 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:30.449 [2024-11-28 11:50:00.546576] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1aa9c00, cid 7, qid 0 00:19:30.449 [2024-11-28 11:50:00.546652] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:19:30.449 [2024-11-28 11:50:00.546659] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:19:30.449 [2024-11-28 11:50:00.546663] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:19:30.449 [2024-11-28 11:50:00.546683] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1aa9c00) on tqpair=0x1a51a10 00:19:30.449 [2024-11-28 11:50:00.546738] nvme_ctrlr.c:4399:nvme_ctrlr_destruct_async: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] Prepare to destruct SSD 00:19:30.449 [2024-11-28 11:50:00.546749] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1aa9180) on tqpair=0x1a51a10 00:19:30.449 [2024-11-28 11:50:00.546771] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:30.449 [2024-11-28 11:50:00.546777] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1aa9300) on tqpair=0x1a51a10 00:19:30.449 [2024-11-28 11:50:00.546781] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:30.449 [2024-11-28 11:50:00.546786] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1aa9480) on tqpair=0x1a51a10 00:19:30.449 [2024-11-28 11:50:00.546790] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:30.449 [2024-11-28 11:50:00.546794] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1aa9600) on tqpair=0x1a51a10 00:19:30.449 [2024-11-28 11:50:00.546799] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:30.449 [2024-11-28 11:50:00.546807] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:19:30.449 [2024-11-28 11:50:00.546811] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:19:30.449 [2024-11-28 11:50:00.546815] nvme_tcp.c: 
918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1a51a10) 00:19:30.449 [2024-11-28 11:50:00.546822] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:30.449 [2024-11-28 11:50:00.546843] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1aa9600, cid 3, qid 0 00:19:30.449 [2024-11-28 11:50:00.546918] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:19:30.449 [2024-11-28 11:50:00.546925] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:19:30.449 [2024-11-28 11:50:00.546928] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:19:30.449 [2024-11-28 11:50:00.546932] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1aa9600) on tqpair=0x1a51a10 00:19:30.449 [2024-11-28 11:50:00.546939] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:19:30.449 [2024-11-28 11:50:00.546943] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:19:30.449 [2024-11-28 11:50:00.546947] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1a51a10) 00:19:30.449 [2024-11-28 11:50:00.546954] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY SET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:30.449 [2024-11-28 11:50:00.546974] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1aa9600, cid 3, qid 0 00:19:30.449 [2024-11-28 11:50:00.547053] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:19:30.449 [2024-11-28 11:50:00.547059] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:19:30.449 [2024-11-28 11:50:00.547063] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:19:30.449 [2024-11-28 11:50:00.547066] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1aa9600) on tqpair=0x1a51a10 00:19:30.449 [2024-11-28 11:50:00.547073] nvme_ctrlr.c:1151:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] RTD3E = 0 us 00:19:30.449 [2024-11-28 11:50:00.547077] nvme_ctrlr.c:1154:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] shutdown timeout = 10000 ms 00:19:30.449 [2024-11-28 11:50:00.547087] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:19:30.449 [2024-11-28 11:50:00.547091] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:19:30.449 [2024-11-28 11:50:00.547094] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1a51a10) 00:19:30.449 [2024-11-28 11:50:00.547101] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:30.449 [2024-11-28 11:50:00.547118] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1aa9600, cid 3, qid 0 00:19:30.449 [2024-11-28 11:50:00.547172] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:19:30.449 [2024-11-28 11:50:00.547179] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:19:30.449 [2024-11-28 11:50:00.547182] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:19:30.449 [2024-11-28 11:50:00.547186] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1aa9600) on tqpair=0x1a51a10 00:19:30.449 [2024-11-28 11:50:00.547196] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:19:30.449 [2024-11-28 11:50:00.547200] nvme_tcp.c: 
909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:19:30.449 [2024-11-28 11:50:00.547203] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1a51a10) 00:19:30.449 [2024-11-28 11:50:00.547210] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:30.449 [2024-11-28 11:50:00.547227] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1aa9600, cid 3, qid 0 00:19:30.449 [2024-11-28 11:50:00.547282] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:19:30.449 [2024-11-28 11:50:00.547288] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:19:30.449 [2024-11-28 11:50:00.547292] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:19:30.449 [2024-11-28 11:50:00.547296] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1aa9600) on tqpair=0x1a51a10 00:19:30.449 [2024-11-28 11:50:00.547321] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:19:30.449 [2024-11-28 11:50:00.547326] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:19:30.449 [2024-11-28 11:50:00.547329] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1a51a10) 00:19:30.449 [2024-11-28 11:50:00.547336] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:30.449 [2024-11-28 11:50:00.547354] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1aa9600, cid 3, qid 0 00:19:30.449 [2024-11-28 11:50:00.547403] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:19:30.449 [2024-11-28 11:50:00.547414] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:19:30.449 [2024-11-28 11:50:00.547418] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:19:30.449 [2024-11-28 11:50:00.547422] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1aa9600) on tqpair=0x1a51a10 00:19:30.449 [2024-11-28 11:50:00.547432] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:19:30.449 [2024-11-28 11:50:00.547437] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:19:30.449 [2024-11-28 11:50:00.547440] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1a51a10) 00:19:30.449 [2024-11-28 11:50:00.547448] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:30.449 [2024-11-28 11:50:00.547467] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1aa9600, cid 3, qid 0 00:19:30.449 [2024-11-28 11:50:00.547536] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:19:30.449 [2024-11-28 11:50:00.547542] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:19:30.449 [2024-11-28 11:50:00.547546] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:19:30.449 [2024-11-28 11:50:00.547550] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1aa9600) on tqpair=0x1a51a10 00:19:30.449 [2024-11-28 11:50:00.547560] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:19:30.449 [2024-11-28 11:50:00.547564] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:19:30.449 [2024-11-28 11:50:00.547567] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1a51a10) 00:19:30.449 
[2024-11-28 11:50:00.547574] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:30.449 [2024-11-28 11:50:00.547592] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1aa9600, cid 3, qid 0 00:19:30.449 [2024-11-28 11:50:00.547675] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:19:30.449 [2024-11-28 11:50:00.547681] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:19:30.449 [2024-11-28 11:50:00.547685] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:19:30.449 [2024-11-28 11:50:00.547689] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1aa9600) on tqpair=0x1a51a10 00:19:30.449 [2024-11-28 11:50:00.547699] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:19:30.450 [2024-11-28 11:50:00.547703] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:19:30.450 [2024-11-28 11:50:00.547706] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1a51a10) 00:19:30.450 [2024-11-28 11:50:00.547713] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:30.450 [2024-11-28 11:50:00.547730] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1aa9600, cid 3, qid 0 00:19:30.450 [2024-11-28 11:50:00.547799] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:19:30.450 [2024-11-28 11:50:00.547805] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:19:30.450 [2024-11-28 11:50:00.547809] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:19:30.450 [2024-11-28 11:50:00.547812] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1aa9600) on tqpair=0x1a51a10 00:19:30.450 [2024-11-28 11:50:00.547822] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:19:30.450 [2024-11-28 11:50:00.547826] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:19:30.450 [2024-11-28 11:50:00.547829] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1a51a10) 00:19:30.450 [2024-11-28 11:50:00.547836] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:30.450 [2024-11-28 11:50:00.547853] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1aa9600, cid 3, qid 0 00:19:30.450 [2024-11-28 11:50:00.547919] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:19:30.450 [2024-11-28 11:50:00.547926] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:19:30.450 [2024-11-28 11:50:00.547929] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:19:30.450 [2024-11-28 11:50:00.547933] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1aa9600) on tqpair=0x1a51a10 00:19:30.450 [2024-11-28 11:50:00.547943] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:19:30.450 [2024-11-28 11:50:00.547947] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:19:30.450 [2024-11-28 11:50:00.547950] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1a51a10) 00:19:30.450 [2024-11-28 11:50:00.547957] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:30.450 [2024-11-28 11:50:00.547975] 
nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1aa9600, cid 3, qid 0 00:19:30.450 [2024-11-28 11:50:00.548030] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:19:30.450 [2024-11-28 11:50:00.548036] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:19:30.450 [2024-11-28 11:50:00.548040] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:19:30.450 [2024-11-28 11:50:00.548044] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1aa9600) on tqpair=0x1a51a10 00:19:30.450 [2024-11-28 11:50:00.548053] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:19:30.450 [2024-11-28 11:50:00.548057] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:19:30.450 [2024-11-28 11:50:00.548061] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1a51a10) 00:19:30.450 [2024-11-28 11:50:00.548068] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:30.450 [2024-11-28 11:50:00.548084] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1aa9600, cid 3, qid 0 00:19:30.450 [2024-11-28 11:50:00.548141] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:19:30.450 [2024-11-28 11:50:00.548147] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:19:30.450 [2024-11-28 11:50:00.548151] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:19:30.450 [2024-11-28 11:50:00.548155] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1aa9600) on tqpair=0x1a51a10 00:19:30.450 [2024-11-28 11:50:00.548165] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:19:30.450 [2024-11-28 11:50:00.548169] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:19:30.450 [2024-11-28 11:50:00.548172] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1a51a10) 00:19:30.450 [2024-11-28 11:50:00.548179] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:30.450 [2024-11-28 11:50:00.548196] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1aa9600, cid 3, qid 0 00:19:30.450 [2024-11-28 11:50:00.548257] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:19:30.450 [2024-11-28 11:50:00.548263] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:19:30.450 [2024-11-28 11:50:00.548266] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:19:30.450 [2024-11-28 11:50:00.548270] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1aa9600) on tqpair=0x1a51a10 00:19:30.450 [2024-11-28 11:50:00.548280] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:19:30.450 [2024-11-28 11:50:00.548284] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:19:30.450 [2024-11-28 11:50:00.548287] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1a51a10) 00:19:30.450 [2024-11-28 11:50:00.548294] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:30.450 [2024-11-28 11:50:00.548344] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1aa9600, cid 3, qid 0 00:19:30.450 [2024-11-28 11:50:00.548399] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:19:30.450 
[2024-11-28 11:50:00.548406] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:19:30.450 [2024-11-28 11:50:00.548410] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:19:30.450 [2024-11-28 11:50:00.548414] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1aa9600) on tqpair=0x1a51a10 00:19:30.450 [2024-11-28 11:50:00.548424] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:19:30.450 [2024-11-28 11:50:00.548428] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:19:30.450 [2024-11-28 11:50:00.548432] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1a51a10) 00:19:30.450 [2024-11-28 11:50:00.548439] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:30.450 [2024-11-28 11:50:00.548458] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1aa9600, cid 3, qid 0 00:19:30.450 [2024-11-28 11:50:00.548523] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:19:30.450 [2024-11-28 11:50:00.548529] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:19:30.450 [2024-11-28 11:50:00.548533] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:19:30.450 [2024-11-28 11:50:00.548537] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1aa9600) on tqpair=0x1a51a10 00:19:30.450 [2024-11-28 11:50:00.548546] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:19:30.450 [2024-11-28 11:50:00.548551] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:19:30.450 [2024-11-28 11:50:00.548563] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1a51a10) 00:19:30.450 [2024-11-28 11:50:00.548570] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:30.450 [2024-11-28 11:50:00.548602] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1aa9600, cid 3, qid 0 00:19:30.450 [2024-11-28 11:50:00.548676] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:19:30.450 [2024-11-28 11:50:00.548683] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:19:30.450 [2024-11-28 11:50:00.548687] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:19:30.450 [2024-11-28 11:50:00.548690] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1aa9600) on tqpair=0x1a51a10 00:19:30.450 [2024-11-28 11:50:00.548700] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:19:30.450 [2024-11-28 11:50:00.548704] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:19:30.450 [2024-11-28 11:50:00.548708] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1a51a10) 00:19:30.450 [2024-11-28 11:50:00.548715] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:30.450 [2024-11-28 11:50:00.548732] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1aa9600, cid 3, qid 0 00:19:30.450 [2024-11-28 11:50:00.548787] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:19:30.450 [2024-11-28 11:50:00.548794] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:19:30.450 [2024-11-28 11:50:00.548798] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 
00:19:30.450 [2024-11-28 11:50:00.548802] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1aa9600) on tqpair=0x1a51a10 00:19:30.450 [2024-11-28 11:50:00.548812] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:19:30.450 [2024-11-28 11:50:00.548816] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:19:30.450 [2024-11-28 11:50:00.548820] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1a51a10) 00:19:30.450 [2024-11-28 11:50:00.548827] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:30.450 [2024-11-28 11:50:00.548850] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1aa9600, cid 3, qid 0 00:19:30.450 [2024-11-28 11:50:00.548905] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:19:30.450 [2024-11-28 11:50:00.548911] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:19:30.450 [2024-11-28 11:50:00.548915] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:19:30.450 [2024-11-28 11:50:00.548919] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1aa9600) on tqpair=0x1a51a10 00:19:30.450 [2024-11-28 11:50:00.548928] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:19:30.450 [2024-11-28 11:50:00.548948] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:19:30.450 [2024-11-28 11:50:00.548951] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1a51a10) 00:19:30.450 [2024-11-28 11:50:00.548958] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:30.450 [2024-11-28 11:50:00.548976] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1aa9600, cid 3, qid 0 00:19:30.450 [2024-11-28 11:50:00.549045] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:19:30.451 [2024-11-28 11:50:00.549056] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:19:30.451 [2024-11-28 11:50:00.549060] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:19:30.451 [2024-11-28 11:50:00.549064] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1aa9600) on tqpair=0x1a51a10 00:19:30.451 [2024-11-28 11:50:00.549074] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:19:30.451 [2024-11-28 11:50:00.549078] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:19:30.451 [2024-11-28 11:50:00.549081] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1a51a10) 00:19:30.451 [2024-11-28 11:50:00.549089] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:30.451 [2024-11-28 11:50:00.549106] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1aa9600, cid 3, qid 0 00:19:30.451 [2024-11-28 11:50:00.549164] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:19:30.451 [2024-11-28 11:50:00.549170] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:19:30.451 [2024-11-28 11:50:00.549173] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:19:30.451 [2024-11-28 11:50:00.549177] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1aa9600) on tqpair=0x1a51a10 00:19:30.451 [2024-11-28 11:50:00.549186] nvme_tcp.c: 
732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:19:30.451 [2024-11-28 11:50:00.549191] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:19:30.451 [2024-11-28 11:50:00.549194] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1a51a10) 00:19:30.451 [2024-11-28 11:50:00.549201] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:30.451 [2024-11-28 11:50:00.549217] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1aa9600, cid 3, qid 0 00:19:30.451 [2024-11-28 11:50:00.549284] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:19:30.451 [2024-11-28 11:50:00.549322] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:19:30.451 [2024-11-28 11:50:00.549327] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:19:30.451 [2024-11-28 11:50:00.549331] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1aa9600) on tqpair=0x1a51a10 00:19:30.451 [2024-11-28 11:50:00.549341] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:19:30.451 [2024-11-28 11:50:00.549346] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:19:30.451 [2024-11-28 11:50:00.549349] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1a51a10) 00:19:30.451 [2024-11-28 11:50:00.549357] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:30.451 [2024-11-28 11:50:00.549377] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1aa9600, cid 3, qid 0 00:19:30.451 [2024-11-28 11:50:00.549431] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:19:30.451 [2024-11-28 11:50:00.549438] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:19:30.451 [2024-11-28 11:50:00.549441] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:19:30.451 [2024-11-28 11:50:00.549445] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1aa9600) on tqpair=0x1a51a10 00:19:30.451 [2024-11-28 11:50:00.549454] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:19:30.451 [2024-11-28 11:50:00.549459] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:19:30.451 [2024-11-28 11:50:00.549462] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1a51a10) 00:19:30.451 [2024-11-28 11:50:00.549469] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:30.451 [2024-11-28 11:50:00.549486] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1aa9600, cid 3, qid 0 00:19:30.451 [2024-11-28 11:50:00.549536] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:19:30.451 [2024-11-28 11:50:00.549542] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:19:30.451 [2024-11-28 11:50:00.549546] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:19:30.451 [2024-11-28 11:50:00.549549] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1aa9600) on tqpair=0x1a51a10 00:19:30.451 [2024-11-28 11:50:00.549559] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:19:30.451 [2024-11-28 11:50:00.549563] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:19:30.451 [2024-11-28 11:50:00.549567] nvme_tcp.c: 
918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1a51a10) 00:19:30.451 [2024-11-28 11:50:00.549574] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:30.451 [2024-11-28 11:50:00.549590] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1aa9600, cid 3, qid 0 00:19:30.451 [2024-11-28 11:50:00.549669] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:19:30.451 [2024-11-28 11:50:00.549676] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:19:30.451 [2024-11-28 11:50:00.549679] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:19:30.451 [2024-11-28 11:50:00.549683] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1aa9600) on tqpair=0x1a51a10 00:19:30.451 [2024-11-28 11:50:00.549693] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:19:30.451 [2024-11-28 11:50:00.549697] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:19:30.451 [2024-11-28 11:50:00.549700] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1a51a10) 00:19:30.451 [2024-11-28 11:50:00.549707] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:30.451 [2024-11-28 11:50:00.549724] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1aa9600, cid 3, qid 0 00:19:30.451 [2024-11-28 11:50:00.549775] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:19:30.451 [2024-11-28 11:50:00.549782] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:19:30.451 [2024-11-28 11:50:00.549786] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:19:30.451 [2024-11-28 11:50:00.549790] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1aa9600) on tqpair=0x1a51a10 00:19:30.451 [2024-11-28 11:50:00.549800] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:19:30.451 [2024-11-28 11:50:00.549804] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:19:30.451 [2024-11-28 11:50:00.549808] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1a51a10) 00:19:30.451 [2024-11-28 11:50:00.549814] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:30.451 [2024-11-28 11:50:00.549831] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1aa9600, cid 3, qid 0 00:19:30.451 [2024-11-28 11:50:00.549891] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:19:30.451 [2024-11-28 11:50:00.549898] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:19:30.451 [2024-11-28 11:50:00.549901] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:19:30.451 [2024-11-28 11:50:00.549905] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1aa9600) on tqpair=0x1a51a10 00:19:30.451 [2024-11-28 11:50:00.549915] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:19:30.451 [2024-11-28 11:50:00.549919] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:19:30.451 [2024-11-28 11:50:00.549922] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1a51a10) 00:19:30.451 [2024-11-28 11:50:00.549929] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET 
qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:30.451 [2024-11-28 11:50:00.549946] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1aa9600, cid 3, qid 0 00:19:30.451 [2024-11-28 11:50:00.550004] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:19:30.451 [2024-11-28 11:50:00.550011] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:19:30.451 [2024-11-28 11:50:00.550014] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:19:30.451 [2024-11-28 11:50:00.550018] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1aa9600) on tqpair=0x1a51a10 00:19:30.451 [2024-11-28 11:50:00.550027] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:19:30.451 [2024-11-28 11:50:00.550045] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:19:30.451 [2024-11-28 11:50:00.550049] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1a51a10) 00:19:30.451 [2024-11-28 11:50:00.550055] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:30.451 [2024-11-28 11:50:00.550071] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1aa9600, cid 3, qid 0 00:19:30.451 [2024-11-28 11:50:00.550126] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:19:30.451 [2024-11-28 11:50:00.550132] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:19:30.451 [2024-11-28 11:50:00.550136] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:19:30.451 [2024-11-28 11:50:00.550139] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1aa9600) on tqpair=0x1a51a10 00:19:30.451 [2024-11-28 11:50:00.550148] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:19:30.451 [2024-11-28 11:50:00.550152] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:19:30.451 [2024-11-28 11:50:00.550155] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1a51a10) 00:19:30.451 [2024-11-28 11:50:00.550162] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:30.451 [2024-11-28 11:50:00.550178] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1aa9600, cid 3, qid 0 00:19:30.451 [2024-11-28 11:50:00.550235] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:19:30.451 [2024-11-28 11:50:00.550241] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:19:30.451 [2024-11-28 11:50:00.550244] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:19:30.451 [2024-11-28 11:50:00.550249] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1aa9600) on tqpair=0x1a51a10 00:19:30.451 [2024-11-28 11:50:00.550259] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:19:30.451 [2024-11-28 11:50:00.550263] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:19:30.451 [2024-11-28 11:50:00.550266] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1a51a10) 00:19:30.451 [2024-11-28 11:50:00.550272] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:30.451 [2024-11-28 11:50:00.550289] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1aa9600, cid 3, qid 0 00:19:30.451 [2024-11-28 
11:50:00.554456] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:19:30.451 [2024-11-28 11:50:00.554474] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:19:30.451 [2024-11-28 11:50:00.554479] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:19:30.451 [2024-11-28 11:50:00.554484] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1aa9600) on tqpair=0x1a51a10 00:19:30.451 [2024-11-28 11:50:00.554498] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:19:30.451 [2024-11-28 11:50:00.554504] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:19:30.451 [2024-11-28 11:50:00.554508] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1a51a10) 00:19:30.451 [2024-11-28 11:50:00.554516] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:30.451 [2024-11-28 11:50:00.554541] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1aa9600, cid 3, qid 0 00:19:30.452 [2024-11-28 11:50:00.554605] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:19:30.452 [2024-11-28 11:50:00.554612] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:19:30.452 [2024-11-28 11:50:00.554615] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:19:30.452 [2024-11-28 11:50:00.554619] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1aa9600) on tqpair=0x1a51a10 00:19:30.452 [2024-11-28 11:50:00.554627] nvme_ctrlr.c:1273:nvme_ctrlr_shutdown_poll_async: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] shutdown complete in 7 milliseconds 00:19:30.712 0% 00:19:30.712 Data Units Read: 0 00:19:30.712 Data Units Written: 0 00:19:30.712 Host Read Commands: 0 00:19:30.712 Host Write Commands: 0 00:19:30.712 Controller Busy Time: 0 minutes 00:19:30.712 Power Cycles: 0 00:19:30.712 Power On Hours: 0 hours 00:19:30.712 Unsafe Shutdowns: 0 00:19:30.712 Unrecoverable Media Errors: 0 00:19:30.712 Lifetime Error Log Entries: 0 00:19:30.712 Warning Temperature Time: 0 minutes 00:19:30.712 Critical Temperature Time: 0 minutes 00:19:30.712 00:19:30.712 Number of Queues 00:19:30.712 ================ 00:19:30.712 Number of I/O Submission Queues: 127 00:19:30.712 Number of I/O Completion Queues: 127 00:19:30.712 00:19:30.712 Active Namespaces 00:19:30.712 ================= 00:19:30.712 Namespace ID:1 00:19:30.712 Error Recovery Timeout: Unlimited 00:19:30.712 Command Set Identifier: NVM (00h) 00:19:30.712 Deallocate: Supported 00:19:30.712 Deallocated/Unwritten Error: Not Supported 00:19:30.712 Deallocated Read Value: Unknown 00:19:30.712 Deallocate in Write Zeroes: Not Supported 00:19:30.712 Deallocated Guard Field: 0xFFFF 00:19:30.712 Flush: Supported 00:19:30.712 Reservation: Supported 00:19:30.712 Namespace Sharing Capabilities: Multiple Controllers 00:19:30.712 Size (in LBAs): 131072 (0GiB) 00:19:30.712 Capacity (in LBAs): 131072 (0GiB) 00:19:30.712 Utilization (in LBAs): 131072 (0GiB) 00:19:30.712 NGUID: ABCDEF0123456789ABCDEF0123456789 00:19:30.712 EUI64: ABCDEF0123456789 00:19:30.712 UUID: 90ce9792-f5da-48f7-8e71-e513a8dfb0d8 00:19:30.712 Thin Provisioning: Not Supported 00:19:30.712 Per-NS Atomic Units: Yes 00:19:30.712 Atomic Boundary Size (Normal): 0 00:19:30.712 Atomic Boundary Size (PFail): 0 00:19:30.712 Atomic Boundary Offset: 0 00:19:30.712 Maximum Single Source Range Length: 65535 00:19:30.712 Maximum Copy Length: 65535 00:19:30.712 
Maximum Source Range Count: 1 00:19:30.712 NGUID/EUI64 Never Reused: No 00:19:30.712 Namespace Write Protected: No 00:19:30.712 Number of LBA Formats: 1 00:19:30.712 Current LBA Format: LBA Format #00 00:19:30.712 LBA Format #00: Data Size: 512 Metadata Size: 0 00:19:30.712 00:19:30.712 11:50:00 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@51 -- # sync 00:19:30.712 11:50:00 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@52 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:19:30.712 11:50:00 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:30.712 11:50:00 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:19:30.712 11:50:00 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:30.712 11:50:00 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@54 -- # trap - SIGINT SIGTERM EXIT 00:19:30.712 11:50:00 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@56 -- # nvmftestfini 00:19:30.712 11:50:00 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@516 -- # nvmfcleanup 00:19:30.712 11:50:00 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@121 -- # sync 00:19:30.712 11:50:00 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:19:30.712 11:50:00 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@124 -- # set +e 00:19:30.712 11:50:00 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@125 -- # for i in {1..20} 00:19:30.712 11:50:00 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:19:30.712 rmmod nvme_tcp 00:19:30.712 rmmod nvme_fabrics 00:19:30.712 rmmod nvme_keyring 00:19:30.712 11:50:00 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:19:30.712 11:50:00 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@128 -- # set -e 00:19:30.712 11:50:00 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@129 -- # return 0 00:19:30.712 11:50:00 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@517 -- # '[' -n 90614 ']' 00:19:30.712 11:50:00 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@518 -- # killprocess 90614 00:19:30.712 11:50:00 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@954 -- # '[' -z 90614 ']' 00:19:30.712 11:50:00 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@958 -- # kill -0 90614 00:19:30.712 11:50:00 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@959 -- # uname 00:19:30.712 11:50:00 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:19:30.712 11:50:00 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 90614 00:19:30.712 killing process with pid 90614 00:19:30.712 11:50:00 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:19:30.712 11:50:00 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:19:30.712 11:50:00 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@972 -- # echo 'killing process with pid 90614' 00:19:30.712 11:50:00 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@973 -- # kill 90614 00:19:30.712 11:50:00 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@978 -- # wait 90614 00:19:30.971 11:50:01 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:19:30.971 11:50:01 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 
00:19:30.971 11:50:01 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:19:30.971 11:50:01 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@297 -- # iptr 00:19:30.971 11:50:01 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@791 -- # iptables-save 00:19:30.971 11:50:01 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:19:30.971 11:50:01 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@791 -- # iptables-restore 00:19:30.971 11:50:01 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@298 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:19:30.971 11:50:01 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@299 -- # nvmf_veth_fini 00:19:30.971 11:50:01 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@233 -- # ip link set nvmf_init_br nomaster 00:19:30.971 11:50:01 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@234 -- # ip link set nvmf_init_br2 nomaster 00:19:30.971 11:50:01 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@235 -- # ip link set nvmf_tgt_br nomaster 00:19:30.971 11:50:01 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@236 -- # ip link set nvmf_tgt_br2 nomaster 00:19:30.971 11:50:01 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@237 -- # ip link set nvmf_init_br down 00:19:30.971 11:50:01 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@238 -- # ip link set nvmf_init_br2 down 00:19:30.971 11:50:01 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@239 -- # ip link set nvmf_tgt_br down 00:19:31.230 11:50:01 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@240 -- # ip link set nvmf_tgt_br2 down 00:19:31.230 11:50:01 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@241 -- # ip link delete nvmf_br type bridge 00:19:31.230 11:50:01 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@242 -- # ip link delete nvmf_init_if 00:19:31.230 11:50:01 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@243 -- # ip link delete nvmf_init_if2 00:19:31.230 11:50:01 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@244 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:19:31.230 11:50:01 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@245 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:19:31.230 11:50:01 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@246 -- # remove_spdk_ns 00:19:31.230 11:50:01 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:19:31.230 11:50:01 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:19:31.230 11:50:01 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:19:31.230 11:50:01 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@300 -- # return 0 00:19:31.230 ************************************ 00:19:31.230 END TEST nvmf_identify 00:19:31.230 ************************************ 00:19:31.230 00:19:31.230 real 0m2.392s 00:19:31.230 user 0m5.107s 00:19:31.230 sys 0m0.823s 00:19:31.230 11:50:01 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@1130 -- # xtrace_disable 00:19:31.230 11:50:01 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:19:31.230 11:50:01 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@23 -- # run_test nvmf_perf /home/vagrant/spdk_repo/spdk/test/nvmf/host/perf.sh --transport=tcp 00:19:31.230 11:50:01 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:19:31.230 11:50:01 nvmf_tcp.nvmf_host -- 
common/autotest_common.sh@1111 -- # xtrace_disable 00:19:31.230 11:50:01 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:19:31.231 ************************************ 00:19:31.231 START TEST nvmf_perf 00:19:31.231 ************************************ 00:19:31.231 11:50:01 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/host/perf.sh --transport=tcp 00:19:31.491 * Looking for test storage... 00:19:31.491 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/host 00:19:31.491 11:50:01 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:19:31.491 11:50:01 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:19:31.491 11:50:01 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1693 -- # lcov --version 00:19:31.491 11:50:01 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:19:31.491 11:50:01 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:19:31.491 11:50:01 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@333 -- # local ver1 ver1_l 00:19:31.491 11:50:01 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@334 -- # local ver2 ver2_l 00:19:31.491 11:50:01 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@336 -- # IFS=.-: 00:19:31.491 11:50:01 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@336 -- # read -ra ver1 00:19:31.491 11:50:01 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@337 -- # IFS=.-: 00:19:31.491 11:50:01 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@337 -- # read -ra ver2 00:19:31.491 11:50:01 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@338 -- # local 'op=<' 00:19:31.491 11:50:01 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@340 -- # ver1_l=2 00:19:31.491 11:50:01 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@341 -- # ver2_l=1 00:19:31.491 11:50:01 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:19:31.492 11:50:01 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@344 -- # case "$op" in 00:19:31.492 11:50:01 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@345 -- # : 1 00:19:31.492 11:50:01 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@364 -- # (( v = 0 )) 00:19:31.492 11:50:01 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:19:31.492 11:50:01 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@365 -- # decimal 1 00:19:31.492 11:50:01 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@353 -- # local d=1 00:19:31.492 11:50:01 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:19:31.492 11:50:01 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@355 -- # echo 1 00:19:31.492 11:50:01 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@365 -- # ver1[v]=1 00:19:31.492 11:50:01 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@366 -- # decimal 2 00:19:31.492 11:50:01 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@353 -- # local d=2 00:19:31.492 11:50:01 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:19:31.492 11:50:01 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@355 -- # echo 2 00:19:31.492 11:50:01 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@366 -- # ver2[v]=2 00:19:31.492 11:50:01 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:19:31.492 11:50:01 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:19:31.492 11:50:01 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@368 -- # return 0 00:19:31.492 11:50:01 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:19:31.492 11:50:01 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:19:31.492 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:19:31.492 --rc genhtml_branch_coverage=1 00:19:31.492 --rc genhtml_function_coverage=1 00:19:31.492 --rc genhtml_legend=1 00:19:31.492 --rc geninfo_all_blocks=1 00:19:31.492 --rc geninfo_unexecuted_blocks=1 00:19:31.492 00:19:31.492 ' 00:19:31.492 11:50:01 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:19:31.492 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:19:31.492 --rc genhtml_branch_coverage=1 00:19:31.492 --rc genhtml_function_coverage=1 00:19:31.492 --rc genhtml_legend=1 00:19:31.492 --rc geninfo_all_blocks=1 00:19:31.492 --rc geninfo_unexecuted_blocks=1 00:19:31.492 00:19:31.492 ' 00:19:31.492 11:50:01 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:19:31.492 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:19:31.492 --rc genhtml_branch_coverage=1 00:19:31.492 --rc genhtml_function_coverage=1 00:19:31.492 --rc genhtml_legend=1 00:19:31.492 --rc geninfo_all_blocks=1 00:19:31.492 --rc geninfo_unexecuted_blocks=1 00:19:31.492 00:19:31.492 ' 00:19:31.492 11:50:01 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:19:31.492 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:19:31.492 --rc genhtml_branch_coverage=1 00:19:31.492 --rc genhtml_function_coverage=1 00:19:31.492 --rc genhtml_legend=1 00:19:31.492 --rc geninfo_all_blocks=1 00:19:31.492 --rc geninfo_unexecuted_blocks=1 00:19:31.492 00:19:31.492 ' 00:19:31.492 11:50:01 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:19:31.492 11:50:01 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@7 -- # uname -s 00:19:31.492 11:50:01 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:19:31.492 11:50:01 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:19:31.492 11:50:01 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@10 -- # 
NVMF_SECOND_PORT=4421 00:19:31.492 11:50:01 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:19:31.492 11:50:01 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:19:31.492 11:50:01 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:19:31.492 11:50:01 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:19:31.492 11:50:01 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:19:31.492 11:50:01 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:19:31.492 11:50:01 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:19:31.492 11:50:01 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:f820f793-c892-4aa4-a8a4-5ed3fda41d6c 00:19:31.492 11:50:01 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@18 -- # NVME_HOSTID=f820f793-c892-4aa4-a8a4-5ed3fda41d6c 00:19:31.492 11:50:01 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:19:31.492 11:50:01 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:19:31.492 11:50:01 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:19:31.492 11:50:01 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:19:31.492 11:50:01 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:19:31.492 11:50:01 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@15 -- # shopt -s extglob 00:19:31.492 11:50:01 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:19:31.492 11:50:01 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:19:31.492 11:50:01 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:19:31.492 11:50:01 nvmf_tcp.nvmf_host.nvmf_perf -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:31.492 11:50:01 nvmf_tcp.nvmf_host.nvmf_perf -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:31.492 11:50:01 nvmf_tcp.nvmf_host.nvmf_perf -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:31.492 11:50:01 nvmf_tcp.nvmf_host.nvmf_perf -- paths/export.sh@5 -- # export PATH 00:19:31.492 11:50:01 nvmf_tcp.nvmf_host.nvmf_perf -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:31.492 11:50:01 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@51 -- # : 0 00:19:31.492 11:50:01 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:19:31.492 11:50:01 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:19:31.492 11:50:01 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:19:31.492 11:50:01 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:19:31.492 11:50:01 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:19:31.492 11:50:01 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:19:31.492 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:19:31.492 11:50:01 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:19:31.492 11:50:01 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:19:31.492 11:50:01 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@55 -- # have_pci_nics=0 00:19:31.493 11:50:01 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@12 -- # MALLOC_BDEV_SIZE=64 00:19:31.493 11:50:01 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@13 -- # MALLOC_BLOCK_SIZE=512 00:19:31.493 11:50:01 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@15 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:19:31.493 11:50:01 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@17 -- # nvmftestinit 00:19:31.493 11:50:01 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:19:31.493 11:50:01 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:19:31.493 11:50:01 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@476 -- # prepare_net_devs 00:19:31.493 11:50:01 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@438 -- # local -g is_hw=no 00:19:31.493 11:50:01 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@440 -- # remove_spdk_ns 00:19:31.493 11:50:01 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:19:31.493 11:50:01 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@22 -- 
# eval '_remove_spdk_ns 15> /dev/null' 00:19:31.493 11:50:01 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:19:31.493 11:50:01 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@442 -- # [[ virt != virt ]] 00:19:31.493 11:50:01 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@444 -- # [[ no == yes ]] 00:19:31.493 11:50:01 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@451 -- # [[ virt == phy ]] 00:19:31.493 11:50:01 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@454 -- # [[ virt == phy-fallback ]] 00:19:31.493 11:50:01 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@459 -- # [[ tcp == tcp ]] 00:19:31.493 11:50:01 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@460 -- # nvmf_veth_init 00:19:31.493 11:50:01 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@145 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:19:31.493 11:50:01 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@146 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.2 00:19:31.493 11:50:01 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@147 -- # NVMF_FIRST_TARGET_IP=10.0.0.3 00:19:31.493 11:50:01 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@148 -- # NVMF_SECOND_TARGET_IP=10.0.0.4 00:19:31.493 11:50:01 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@149 -- # NVMF_INITIATOR_IP=10.0.0.1 00:19:31.493 11:50:01 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@150 -- # NVMF_BRIDGE=nvmf_br 00:19:31.493 11:50:01 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@151 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:19:31.493 11:50:01 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@152 -- # NVMF_INITIATOR_INTERFACE2=nvmf_init_if2 00:19:31.493 11:50:01 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@153 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:19:31.493 11:50:01 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@154 -- # NVMF_INITIATOR_BRIDGE2=nvmf_init_br2 00:19:31.493 11:50:01 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@155 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:19:31.493 11:50:01 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@156 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:19:31.493 11:50:01 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@157 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:19:31.493 11:50:01 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@158 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:19:31.493 11:50:01 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@159 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:19:31.493 11:50:01 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@160 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:19:31.493 11:50:01 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@162 -- # ip link set nvmf_init_br nomaster 00:19:31.493 Cannot find device "nvmf_init_br" 00:19:31.493 11:50:01 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@162 -- # true 00:19:31.493 11:50:01 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@163 -- # ip link set nvmf_init_br2 nomaster 00:19:31.493 Cannot find device "nvmf_init_br2" 00:19:31.493 11:50:01 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@163 -- # true 00:19:31.493 11:50:01 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@164 -- # ip link set nvmf_tgt_br nomaster 00:19:31.493 Cannot find device "nvmf_tgt_br" 00:19:31.493 11:50:01 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@164 -- # true 00:19:31.493 11:50:01 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@165 -- # ip link set nvmf_tgt_br2 nomaster 00:19:31.493 Cannot find device "nvmf_tgt_br2" 00:19:31.493 11:50:01 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@165 -- # true 00:19:31.493 11:50:01 nvmf_tcp.nvmf_host.nvmf_perf -- 
nvmf/common.sh@166 -- # ip link set nvmf_init_br down 00:19:31.493 Cannot find device "nvmf_init_br" 00:19:31.493 11:50:01 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@166 -- # true 00:19:31.493 11:50:01 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@167 -- # ip link set nvmf_init_br2 down 00:19:31.753 Cannot find device "nvmf_init_br2" 00:19:31.753 11:50:01 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@167 -- # true 00:19:31.753 11:50:01 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@168 -- # ip link set nvmf_tgt_br down 00:19:31.753 Cannot find device "nvmf_tgt_br" 00:19:31.753 11:50:01 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@168 -- # true 00:19:31.753 11:50:01 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@169 -- # ip link set nvmf_tgt_br2 down 00:19:31.753 Cannot find device "nvmf_tgt_br2" 00:19:31.753 11:50:01 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@169 -- # true 00:19:31.753 11:50:01 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@170 -- # ip link delete nvmf_br type bridge 00:19:31.753 Cannot find device "nvmf_br" 00:19:31.753 11:50:01 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@170 -- # true 00:19:31.753 11:50:01 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@171 -- # ip link delete nvmf_init_if 00:19:31.753 Cannot find device "nvmf_init_if" 00:19:31.753 11:50:01 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@171 -- # true 00:19:31.753 11:50:01 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@172 -- # ip link delete nvmf_init_if2 00:19:31.753 Cannot find device "nvmf_init_if2" 00:19:31.753 11:50:01 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@172 -- # true 00:19:31.753 11:50:01 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@173 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:19:31.753 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:19:31.753 11:50:01 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@173 -- # true 00:19:31.753 11:50:01 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@174 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:19:31.753 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:19:31.753 11:50:01 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@174 -- # true 00:19:31.753 11:50:01 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@177 -- # ip netns add nvmf_tgt_ns_spdk 00:19:31.753 11:50:01 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@180 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:19:31.753 11:50:01 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@181 -- # ip link add nvmf_init_if2 type veth peer name nvmf_init_br2 00:19:31.753 11:50:01 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@182 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:19:31.753 11:50:01 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@183 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:19:31.753 11:50:01 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:19:31.753 11:50:01 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@187 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:19:31.753 11:50:01 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@190 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:19:31.753 11:50:01 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@191 -- # ip addr add 10.0.0.2/24 dev nvmf_init_if2 00:19:31.753 11:50:01 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@192 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if 00:19:31.753 11:50:01 
nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@193 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2 00:19:31.753 11:50:01 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@196 -- # ip link set nvmf_init_if up 00:19:31.753 11:50:01 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@197 -- # ip link set nvmf_init_if2 up 00:19:31.753 11:50:01 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@198 -- # ip link set nvmf_init_br up 00:19:31.753 11:50:01 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@199 -- # ip link set nvmf_init_br2 up 00:19:31.753 11:50:01 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@200 -- # ip link set nvmf_tgt_br up 00:19:31.753 11:50:01 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@201 -- # ip link set nvmf_tgt_br2 up 00:19:31.753 11:50:01 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@202 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:19:31.753 11:50:01 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@203 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:19:31.753 11:50:01 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@204 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:19:32.012 11:50:01 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@207 -- # ip link add nvmf_br type bridge 00:19:32.012 11:50:01 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@208 -- # ip link set nvmf_br up 00:19:32.012 11:50:01 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@211 -- # ip link set nvmf_init_br master nvmf_br 00:19:32.012 11:50:01 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@212 -- # ip link set nvmf_init_br2 master nvmf_br 00:19:32.012 11:50:01 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@213 -- # ip link set nvmf_tgt_br master nvmf_br 00:19:32.012 11:50:01 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@214 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:19:32.012 11:50:01 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@217 -- # ipts -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:19:32.012 11:50:01 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT' 00:19:32.012 11:50:01 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@218 -- # ipts -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT 00:19:32.012 11:50:01 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT' 00:19:32.012 11:50:01 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@219 -- # ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:19:32.012 11:50:01 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@790 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT' 00:19:32.012 11:50:01 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@222 -- # ping -c 1 10.0.0.3 00:19:32.013 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:19:32.013 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.084 ms 00:19:32.013 00:19:32.013 --- 10.0.0.3 ping statistics --- 00:19:32.013 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:19:32.013 rtt min/avg/max/mdev = 0.084/0.084/0.084/0.000 ms 00:19:32.013 11:50:01 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@223 -- # ping -c 1 10.0.0.4 00:19:32.013 PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data. 
00:19:32.013 64 bytes from 10.0.0.4: icmp_seq=1 ttl=64 time=0.041 ms 00:19:32.013 00:19:32.013 --- 10.0.0.4 ping statistics --- 00:19:32.013 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:19:32.013 rtt min/avg/max/mdev = 0.041/0.041/0.041/0.000 ms 00:19:32.013 11:50:01 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@224 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:19:32.013 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:19:32.013 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.043 ms 00:19:32.013 00:19:32.013 --- 10.0.0.1 ping statistics --- 00:19:32.013 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:19:32.013 rtt min/avg/max/mdev = 0.043/0.043/0.043/0.000 ms 00:19:32.013 11:50:01 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@225 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.2 00:19:32.013 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:19:32.013 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.056 ms 00:19:32.013 00:19:32.013 --- 10.0.0.2 ping statistics --- 00:19:32.013 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:19:32.013 rtt min/avg/max/mdev = 0.056/0.056/0.056/0.000 ms 00:19:32.013 11:50:01 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@227 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:19:32.013 11:50:01 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@461 -- # return 0 00:19:32.013 11:50:01 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:19:32.013 11:50:01 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:19:32.013 11:50:01 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:19:32.013 11:50:01 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:19:32.013 11:50:01 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:19:32.013 11:50:01 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:19:32.013 11:50:01 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:19:32.013 11:50:01 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@18 -- # nvmfappstart -m 0xF 00:19:32.013 11:50:01 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:19:32.013 11:50:01 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@726 -- # xtrace_disable 00:19:32.013 11:50:01 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@10 -- # set +x 00:19:32.013 11:50:02 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@509 -- # nvmfpid=90863 00:19:32.013 11:50:02 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@510 -- # waitforlisten 90863 00:19:32.013 11:50:02 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@508 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:19:32.013 11:50:02 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@835 -- # '[' -z 90863 ']' 00:19:32.013 11:50:02 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:19:32.013 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:19:32.013 11:50:02 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@840 -- # local max_retries=100 00:19:32.013 11:50:02 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
00:19:32.013 11:50:02 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@844 -- # xtrace_disable 00:19:32.013 11:50:02 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@10 -- # set +x 00:19:32.013 [2024-11-28 11:50:02.068368] Starting SPDK v25.01-pre git sha1 35cd3e84d / DPDK 24.11.0-rc4 initialization... 00:19:32.013 [2024-11-28 11:50:02.068664] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:19:32.272 [2024-11-28 11:50:02.197871] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.11.0-rc4 is used. There is no support for it in SPDK. Enabled only for validation. 00:19:32.272 [2024-11-28 11:50:02.221250] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:19:32.272 [2024-11-28 11:50:02.263186] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:19:32.272 [2024-11-28 11:50:02.263544] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:19:32.272 [2024-11-28 11:50:02.263839] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:19:32.272 [2024-11-28 11:50:02.264066] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:19:32.272 [2024-11-28 11:50:02.264155] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:19:32.272 [2024-11-28 11:50:02.265525] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:19:32.272 [2024-11-28 11:50:02.265660] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:19:32.272 [2024-11-28 11:50:02.265729] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:19:32.272 [2024-11-28 11:50:02.265727] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:19:32.272 [2024-11-28 11:50:02.337040] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:19:32.532 11:50:02 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:19:32.532 11:50:02 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@868 -- # return 0 00:19:32.532 11:50:02 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:19:32.532 11:50:02 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@732 -- # xtrace_disable 00:19:32.532 11:50:02 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@10 -- # set +x 00:19:32.532 11:50:02 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:19:32.532 11:50:02 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 00:19:32.532 11:50:02 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py load_subsystem_config 00:19:32.790 11:50:02 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@30 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py framework_get_config bdev 00:19:33.048 11:50:02 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@30 -- # jq -r '.[].params | select(.name=="Nvme0").traddr' 00:19:33.307 11:50:03 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@30 -- # local_nvme_trid=0000:00:10.0 00:19:33.307 11:50:03 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py 
bdev_malloc_create 64 512 00:19:33.566 11:50:03 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@31 -- # bdevs=' Malloc0' 00:19:33.566 11:50:03 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@33 -- # '[' -n 0000:00:10.0 ']' 00:19:33.566 11:50:03 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@34 -- # bdevs=' Malloc0 Nvme0n1' 00:19:33.566 11:50:03 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@37 -- # '[' tcp == rdma ']' 00:19:33.566 11:50:03 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@42 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:19:33.825 [2024-11-28 11:50:03.756899] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:19:33.825 11:50:03 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@44 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:19:34.084 11:50:04 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@45 -- # for bdev in $bdevs 00:19:34.084 11:50:04 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:19:34.343 11:50:04 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@45 -- # for bdev in $bdevs 00:19:34.343 11:50:04 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Nvme0n1 00:19:34.601 11:50:04 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@48 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 00:19:34.859 [2024-11-28 11:50:04.807249] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:19:34.859 11:50:04 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@49 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.3 -s 4420 00:19:35.117 11:50:05 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@52 -- # '[' -n 0000:00:10.0 ']' 00:19:35.117 11:50:05 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@53 -- # perf_app -i 0 -q 32 -o 4096 -w randrw -M 50 -t 1 -r 'trtype:PCIe traddr:0000:00:10.0' 00:19:35.117 11:50:05 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@21 -- # '[' 0 -eq 1 ']' 00:19:35.117 11:50:05 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@24 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -i 0 -q 32 -o 4096 -w randrw -M 50 -t 1 -r 'trtype:PCIe traddr:0000:00:10.0' 00:19:36.053 Initializing NVMe Controllers 00:19:36.053 Attached to NVMe Controller at 0000:00:10.0 [1b36:0010] 00:19:36.053 Associating PCIE (0000:00:10.0) NSID 1 with lcore 0 00:19:36.053 Initialization complete. Launching workers. 
00:19:36.053 ======================================================== 00:19:36.053 Latency(us) 00:19:36.053 Device Information : IOPS MiB/s Average min max 00:19:36.053 PCIE (0000:00:10.0) NSID 1 from core 0: 20832.00 81.38 1535.75 404.13 7973.02 00:19:36.053 ======================================================== 00:19:36.053 Total : 20832.00 81.38 1535.75 404.13 7973.02 00:19:36.053 00:19:36.053 11:50:06 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@56 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -q 1 -o 4096 -w randrw -M 50 -t 1 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.3 trsvcid:4420' 00:19:37.428 Initializing NVMe Controllers 00:19:37.428 Attached to NVMe over Fabrics controller at 10.0.0.3:4420: nqn.2016-06.io.spdk:cnode1 00:19:37.428 Associating TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:19:37.428 Associating TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0 00:19:37.428 Initialization complete. Launching workers. 00:19:37.428 ======================================================== 00:19:37.428 Latency(us) 00:19:37.428 Device Information : IOPS MiB/s Average min max 00:19:37.428 TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 3542.26 13.84 282.01 100.79 7271.39 00:19:37.428 TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 124.62 0.49 8087.41 5017.73 13973.88 00:19:37.428 ======================================================== 00:19:37.428 Total : 3666.88 14.32 547.28 100.79 13973.88 00:19:37.428 00:19:37.428 11:50:07 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@57 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -q 32 -o 4096 -w randrw -M 50 -t 1 -HI -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.3 trsvcid:4420' 00:19:38.806 Initializing NVMe Controllers 00:19:38.806 Attached to NVMe over Fabrics controller at 10.0.0.3:4420: nqn.2016-06.io.spdk:cnode1 00:19:38.806 Associating TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:19:38.806 Associating TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0 00:19:38.806 Initialization complete. Launching workers. 00:19:38.806 ======================================================== 00:19:38.806 Latency(us) 00:19:38.807 Device Information : IOPS MiB/s Average min max 00:19:38.807 TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 9249.61 36.13 3460.84 513.08 10173.15 00:19:38.807 TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 3356.34 13.11 9560.62 3776.04 16252.10 00:19:38.807 ======================================================== 00:19:38.807 Total : 12605.95 49.24 5084.91 513.08 16252.10 00:19:38.807 00:19:38.807 11:50:08 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@59 -- # [[ '' == \e\8\1\0 ]] 00:19:38.807 11:50:08 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@60 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -q 128 -o 262144 -O 16384 -w randrw -M 50 -t 2 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.3 trsvcid:4420' 00:19:41.340 Initializing NVMe Controllers 00:19:41.340 Attached to NVMe over Fabrics controller at 10.0.0.3:4420: nqn.2016-06.io.spdk:cnode1 00:19:41.340 Controller IO queue size 128, less than required. 00:19:41.340 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:19:41.340 Controller IO queue size 128, less than required. 
00:19:41.340 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:19:41.340 Associating TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:19:41.340 Associating TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0 00:19:41.340 Initialization complete. Launching workers. 00:19:41.340 ======================================================== 00:19:41.340 Latency(us) 00:19:41.340 Device Information : IOPS MiB/s Average min max 00:19:41.340 TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 1598.23 399.56 81633.47 40470.04 128502.13 00:19:41.340 TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 634.00 158.50 206539.39 39631.27 332107.90 00:19:41.340 ======================================================== 00:19:41.340 Total : 2232.23 558.06 117109.21 39631.27 332107.90 00:19:41.340 00:19:41.340 11:50:11 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@64 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -q 128 -o 36964 -O 4096 -w randrw -M 50 -t 5 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.3 trsvcid:4420' -c 0xf -P 4 00:19:41.599 Initializing NVMe Controllers 00:19:41.599 Attached to NVMe over Fabrics controller at 10.0.0.3:4420: nqn.2016-06.io.spdk:cnode1 00:19:41.599 Controller IO queue size 128, less than required. 00:19:41.599 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:19:41.599 WARNING: IO size 36964 (-o) is not a multiple of nsid 1 sector size 512. Removing this ns from test 00:19:41.599 Controller IO queue size 128, less than required. 00:19:41.599 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:19:41.599 WARNING: IO size 36964 (-o) is not a multiple of nsid 2 sector size 4096. Removing this ns from test 00:19:41.599 WARNING: Some requested NVMe devices were skipped 00:19:41.599 No valid NVMe controllers or AIO or URING devices found 00:19:41.599 11:50:11 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@65 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -q 128 -o 262144 -w randrw -M 50 -t 2 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.3 trsvcid:4420' --transport-stat 00:19:44.132 Initializing NVMe Controllers 00:19:44.132 Attached to NVMe over Fabrics controller at 10.0.0.3:4420: nqn.2016-06.io.spdk:cnode1 00:19:44.132 Controller IO queue size 128, less than required. 00:19:44.132 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:19:44.132 Controller IO queue size 128, less than required. 00:19:44.132 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:19:44.132 Associating TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:19:44.132 Associating TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0 00:19:44.132 Initialization complete. Launching workers. 
00:19:44.132 00:19:44.132 ==================== 00:19:44.132 lcore 0, ns TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 statistics: 00:19:44.132 TCP transport: 00:19:44.132 polls: 8762 00:19:44.132 idle_polls: 4842 00:19:44.132 sock_completions: 3920 00:19:44.132 nvme_completions: 5695 00:19:44.132 submitted_requests: 8562 00:19:44.132 queued_requests: 1 00:19:44.132 00:19:44.133 ==================== 00:19:44.133 lcore 0, ns TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 statistics: 00:19:44.133 TCP transport: 00:19:44.133 polls: 10338 00:19:44.133 idle_polls: 6578 00:19:44.133 sock_completions: 3760 00:19:44.133 nvme_completions: 5675 00:19:44.133 submitted_requests: 8432 00:19:44.133 queued_requests: 1 00:19:44.133 ======================================================== 00:19:44.133 Latency(us) 00:19:44.133 Device Information : IOPS MiB/s Average min max 00:19:44.133 TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 1423.16 355.79 92056.15 48679.62 161390.61 00:19:44.133 TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 1418.16 354.54 90549.83 32474.37 118163.52 00:19:44.133 ======================================================== 00:19:44.133 Total : 2841.31 710.33 91304.31 32474.37 161390.61 00:19:44.133 00:19:44.133 11:50:14 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@66 -- # sync 00:19:44.133 11:50:14 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@67 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:19:44.701 11:50:14 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@69 -- # '[' 1 -eq 1 ']' 00:19:44.701 11:50:14 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@71 -- # '[' -n 0000:00:10.0 ']' 00:19:44.701 11:50:14 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@72 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create_lvstore Nvme0n1 lvs_0 00:19:44.959 11:50:14 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@72 -- # ls_guid=d17dd693-7bc4-43aa-abda-fc9888f5c495 00:19:44.960 11:50:14 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@73 -- # get_lvs_free_mb d17dd693-7bc4-43aa-abda-fc9888f5c495 00:19:44.960 11:50:14 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1368 -- # local lvs_uuid=d17dd693-7bc4-43aa-abda-fc9888f5c495 00:19:44.960 11:50:14 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1369 -- # local lvs_info 00:19:44.960 11:50:14 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1370 -- # local fc 00:19:44.960 11:50:14 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1371 -- # local cs 00:19:44.960 11:50:14 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1372 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores 00:19:45.218 11:50:15 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1372 -- # lvs_info='[ 00:19:45.218 { 00:19:45.218 "uuid": "d17dd693-7bc4-43aa-abda-fc9888f5c495", 00:19:45.218 "name": "lvs_0", 00:19:45.218 "base_bdev": "Nvme0n1", 00:19:45.218 "total_data_clusters": 1278, 00:19:45.218 "free_clusters": 1278, 00:19:45.218 "block_size": 4096, 00:19:45.218 "cluster_size": 4194304 00:19:45.218 } 00:19:45.218 ]' 00:19:45.218 11:50:15 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1373 -- # jq '.[] | select(.uuid=="d17dd693-7bc4-43aa-abda-fc9888f5c495") .free_clusters' 00:19:45.218 11:50:15 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1373 -- # fc=1278 00:19:45.218 11:50:15 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1374 -- # jq '.[] | 
select(.uuid=="d17dd693-7bc4-43aa-abda-fc9888f5c495") .cluster_size' 00:19:45.218 5112 00:19:45.218 11:50:15 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1374 -- # cs=4194304 00:19:45.218 11:50:15 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1377 -- # free_mb=5112 00:19:45.218 11:50:15 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1378 -- # echo 5112 00:19:45.218 11:50:15 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@77 -- # '[' 5112 -gt 20480 ']' 00:19:45.218 11:50:15 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@80 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create -u d17dd693-7bc4-43aa-abda-fc9888f5c495 lbd_0 5112 00:19:45.477 11:50:15 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@80 -- # lb_guid=d8bd51eb-5233-492b-8562-082ee9ea6de9 00:19:45.477 11:50:15 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@83 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create_lvstore d8bd51eb-5233-492b-8562-082ee9ea6de9 lvs_n_0 00:19:46.045 11:50:15 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@83 -- # ls_nested_guid=f3ec3372-5ad5-46d7-b790-3e6063cee4b2 00:19:46.045 11:50:15 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@84 -- # get_lvs_free_mb f3ec3372-5ad5-46d7-b790-3e6063cee4b2 00:19:46.045 11:50:15 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1368 -- # local lvs_uuid=f3ec3372-5ad5-46d7-b790-3e6063cee4b2 00:19:46.045 11:50:15 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1369 -- # local lvs_info 00:19:46.045 11:50:15 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1370 -- # local fc 00:19:46.045 11:50:15 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1371 -- # local cs 00:19:46.045 11:50:15 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1372 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores 00:19:46.305 11:50:16 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1372 -- # lvs_info='[ 00:19:46.305 { 00:19:46.305 "uuid": "d17dd693-7bc4-43aa-abda-fc9888f5c495", 00:19:46.305 "name": "lvs_0", 00:19:46.305 "base_bdev": "Nvme0n1", 00:19:46.305 "total_data_clusters": 1278, 00:19:46.305 "free_clusters": 0, 00:19:46.305 "block_size": 4096, 00:19:46.305 "cluster_size": 4194304 00:19:46.305 }, 00:19:46.305 { 00:19:46.305 "uuid": "f3ec3372-5ad5-46d7-b790-3e6063cee4b2", 00:19:46.305 "name": "lvs_n_0", 00:19:46.305 "base_bdev": "d8bd51eb-5233-492b-8562-082ee9ea6de9", 00:19:46.305 "total_data_clusters": 1276, 00:19:46.306 "free_clusters": 1276, 00:19:46.306 "block_size": 4096, 00:19:46.306 "cluster_size": 4194304 00:19:46.306 } 00:19:46.306 ]' 00:19:46.306 11:50:16 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1373 -- # jq '.[] | select(.uuid=="f3ec3372-5ad5-46d7-b790-3e6063cee4b2") .free_clusters' 00:19:46.306 11:50:16 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1373 -- # fc=1276 00:19:46.306 11:50:16 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1374 -- # jq '.[] | select(.uuid=="f3ec3372-5ad5-46d7-b790-3e6063cee4b2") .cluster_size' 00:19:46.306 5104 00:19:46.306 11:50:16 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1374 -- # cs=4194304 00:19:46.306 11:50:16 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1377 -- # free_mb=5104 00:19:46.306 11:50:16 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1378 -- # echo 5104 00:19:46.306 11:50:16 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@85 -- # '[' 5104 -gt 20480 ']' 00:19:46.306 11:50:16 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@88 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create -u f3ec3372-5ad5-46d7-b790-3e6063cee4b2 lbd_nest_0 5104 00:19:46.566 11:50:16 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@88 -- # lb_nested_guid=77adcb1b-f2b5-4a8b-aa17-1912a9ec683e 00:19:46.566 11:50:16 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@89 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:19:46.842 11:50:16 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@90 -- # for bdev in $lb_nested_guid 00:19:46.842 11:50:16 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@91 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 77adcb1b-f2b5-4a8b-aa17-1912a9ec683e 00:19:47.109 11:50:17 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@93 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 00:19:47.367 11:50:17 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@95 -- # qd_depth=("1" "32" "128") 00:19:47.367 11:50:17 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@96 -- # io_size=("512" "131072") 00:19:47.367 11:50:17 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@97 -- # for qd in "${qd_depth[@]}" 00:19:47.367 11:50:17 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@98 -- # for o in "${io_size[@]}" 00:19:47.367 11:50:17 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@99 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -q 1 -o 512 -w randrw -M 50 -t 10 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.3 trsvcid:4420' 00:19:47.626 Initializing NVMe Controllers 00:19:47.626 Attached to NVMe over Fabrics controller at 10.0.0.3:4420: nqn.2016-06.io.spdk:cnode1 00:19:47.626 WARNING: controller SPDK bdev Controller (SPDK00000000000001 ) ns 1 has invalid ns size 5351931904 / block size 4096 for I/O size 512 00:19:47.626 WARNING: Some requested NVMe devices were skipped 00:19:47.626 No valid NVMe controllers or AIO or URING devices found 00:19:47.885 11:50:17 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@98 -- # for o in "${io_size[@]}" 00:19:47.885 11:50:17 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@99 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -q 1 -o 131072 -w randrw -M 50 -t 10 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.3 trsvcid:4420' 00:20:00.096 Initializing NVMe Controllers 00:20:00.096 Attached to NVMe over Fabrics controller at 10.0.0.3:4420: nqn.2016-06.io.spdk:cnode1 00:20:00.096 Associating TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:20:00.096 Initialization complete. Launching workers. 
00:20:00.096 ======================================================== 00:20:00.096 Latency(us) 00:20:00.096 Device Information : IOPS MiB/s Average min max 00:20:00.096 TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 786.26 98.28 1271.37 378.19 8533.54 00:20:00.096 ======================================================== 00:20:00.096 Total : 786.26 98.28 1271.37 378.19 8533.54 00:20:00.096 00:20:00.096 11:50:28 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@97 -- # for qd in "${qd_depth[@]}" 00:20:00.096 11:50:28 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@98 -- # for o in "${io_size[@]}" 00:20:00.096 11:50:28 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@99 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -q 32 -o 512 -w randrw -M 50 -t 10 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.3 trsvcid:4420' 00:20:00.096 Initializing NVMe Controllers 00:20:00.096 Attached to NVMe over Fabrics controller at 10.0.0.3:4420: nqn.2016-06.io.spdk:cnode1 00:20:00.096 WARNING: controller SPDK bdev Controller (SPDK00000000000001 ) ns 1 has invalid ns size 5351931904 / block size 4096 for I/O size 512 00:20:00.096 WARNING: Some requested NVMe devices were skipped 00:20:00.096 No valid NVMe controllers or AIO or URING devices found 00:20:00.096 11:50:28 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@98 -- # for o in "${io_size[@]}" 00:20:00.096 11:50:28 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@99 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -q 32 -o 131072 -w randrw -M 50 -t 10 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.3 trsvcid:4420' 00:20:10.102 Initializing NVMe Controllers 00:20:10.102 Attached to NVMe over Fabrics controller at 10.0.0.3:4420: nqn.2016-06.io.spdk:cnode1 00:20:10.102 Associating TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:20:10.102 Initialization complete. Launching workers. 
00:20:10.102 ======================================================== 00:20:10.102 Latency(us) 00:20:10.102 Device Information : IOPS MiB/s Average min max 00:20:10.102 TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 1376.60 172.07 23279.24 6351.74 59795.68 00:20:10.102 ======================================================== 00:20:10.102 Total : 1376.60 172.07 23279.24 6351.74 59795.68 00:20:10.102 00:20:10.102 11:50:38 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@97 -- # for qd in "${qd_depth[@]}" 00:20:10.102 11:50:38 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@98 -- # for o in "${io_size[@]}" 00:20:10.102 11:50:38 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@99 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -q 128 -o 512 -w randrw -M 50 -t 10 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.3 trsvcid:4420' 00:20:10.102 Initializing NVMe Controllers 00:20:10.102 Attached to NVMe over Fabrics controller at 10.0.0.3:4420: nqn.2016-06.io.spdk:cnode1 00:20:10.102 WARNING: controller SPDK bdev Controller (SPDK00000000000001 ) ns 1 has invalid ns size 5351931904 / block size 4096 for I/O size 512 00:20:10.102 WARNING: Some requested NVMe devices were skipped 00:20:10.102 No valid NVMe controllers or AIO or URING devices found 00:20:10.102 11:50:39 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@98 -- # for o in "${io_size[@]}" 00:20:10.102 11:50:39 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@99 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -q 128 -o 131072 -w randrw -M 50 -t 10 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.3 trsvcid:4420' 00:20:20.092 Initializing NVMe Controllers 00:20:20.092 Attached to NVMe over Fabrics controller at 10.0.0.3:4420: nqn.2016-06.io.spdk:cnode1 00:20:20.092 Controller IO queue size 128, less than required. 00:20:20.092 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:20:20.092 Associating TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:20:20.092 Initialization complete. Launching workers. 
00:20:20.092 ======================================================== 00:20:20.092 Latency(us) 00:20:20.092 Device Information : IOPS MiB/s Average min max 00:20:20.092 TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 3960.17 495.02 32339.63 11861.84 65908.97 00:20:20.092 ======================================================== 00:20:20.092 Total : 3960.17 495.02 32339.63 11861.84 65908.97 00:20:20.092 00:20:20.092 11:50:49 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@104 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:20:20.092 11:50:49 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@105 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete 77adcb1b-f2b5-4a8b-aa17-1912a9ec683e 00:20:20.092 11:50:50 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@106 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -l lvs_n_0 00:20:20.351 11:50:50 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@107 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete d8bd51eb-5233-492b-8562-082ee9ea6de9 00:20:20.611 11:50:50 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@108 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -l lvs_0 00:20:20.870 11:50:50 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@112 -- # trap - SIGINT SIGTERM EXIT 00:20:20.870 11:50:50 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@114 -- # nvmftestfini 00:20:20.870 11:50:50 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@516 -- # nvmfcleanup 00:20:20.870 11:50:50 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@121 -- # sync 00:20:20.870 11:50:50 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:20:20.870 11:50:50 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@124 -- # set +e 00:20:20.870 11:50:50 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@125 -- # for i in {1..20} 00:20:20.870 11:50:50 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:20:20.870 rmmod nvme_tcp 00:20:20.870 rmmod nvme_fabrics 00:20:20.870 rmmod nvme_keyring 00:20:20.870 11:50:50 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:20:20.870 11:50:50 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@128 -- # set -e 00:20:20.870 11:50:50 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@129 -- # return 0 00:20:20.870 11:50:50 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@517 -- # '[' -n 90863 ']' 00:20:20.870 11:50:50 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@518 -- # killprocess 90863 00:20:20.870 11:50:50 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@954 -- # '[' -z 90863 ']' 00:20:20.870 11:50:50 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@958 -- # kill -0 90863 00:20:20.870 11:50:50 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@959 -- # uname 00:20:20.870 11:50:50 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:20:20.870 11:50:50 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 90863 00:20:20.870 11:50:50 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:20:20.870 11:50:50 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:20:20.870 killing process with pid 90863 00:20:20.870 11:50:50 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@972 -- # echo 'killing process with pid 90863' 00:20:20.870 11:50:50 nvmf_tcp.nvmf_host.nvmf_perf -- 
common/autotest_common.sh@973 -- # kill 90863 00:20:20.870 11:50:50 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@978 -- # wait 90863 00:20:22.776 11:50:52 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:20:22.776 11:50:52 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:20:22.776 11:50:52 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:20:22.776 11:50:52 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@297 -- # iptr 00:20:22.776 11:50:52 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:20:22.776 11:50:52 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@791 -- # iptables-save 00:20:22.776 11:50:52 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@791 -- # iptables-restore 00:20:22.776 11:50:52 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@298 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:20:22.776 11:50:52 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@299 -- # nvmf_veth_fini 00:20:22.776 11:50:52 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@233 -- # ip link set nvmf_init_br nomaster 00:20:22.776 11:50:52 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@234 -- # ip link set nvmf_init_br2 nomaster 00:20:22.776 11:50:52 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@235 -- # ip link set nvmf_tgt_br nomaster 00:20:22.776 11:50:52 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@236 -- # ip link set nvmf_tgt_br2 nomaster 00:20:22.776 11:50:52 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@237 -- # ip link set nvmf_init_br down 00:20:22.776 11:50:52 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@238 -- # ip link set nvmf_init_br2 down 00:20:22.776 11:50:52 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@239 -- # ip link set nvmf_tgt_br down 00:20:22.776 11:50:52 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@240 -- # ip link set nvmf_tgt_br2 down 00:20:22.776 11:50:52 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@241 -- # ip link delete nvmf_br type bridge 00:20:22.776 11:50:52 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@242 -- # ip link delete nvmf_init_if 00:20:22.776 11:50:52 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@243 -- # ip link delete nvmf_init_if2 00:20:22.776 11:50:52 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@244 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:20:22.776 11:50:52 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@245 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:20:22.776 11:50:52 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@246 -- # remove_spdk_ns 00:20:22.776 11:50:52 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:20:22.776 11:50:52 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:20:22.776 11:50:52 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:20:22.776 11:50:52 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@300 -- # return 0 00:20:22.776 ************************************ 00:20:22.776 END TEST nvmf_perf 00:20:22.776 ************************************ 00:20:22.776 00:20:22.776 real 0m51.296s 00:20:22.776 user 3m13.485s 00:20:22.776 sys 0m11.390s 00:20:22.776 11:50:52 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1130 -- # xtrace_disable 00:20:22.776 11:50:52 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@10 -- # set +x 00:20:22.776 11:50:52 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@24 -- # run_test nvmf_fio_host 
/home/vagrant/spdk_repo/spdk/test/nvmf/host/fio.sh --transport=tcp 00:20:22.776 11:50:52 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:20:22.776 11:50:52 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1111 -- # xtrace_disable 00:20:22.776 11:50:52 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:20:22.776 ************************************ 00:20:22.776 START TEST nvmf_fio_host 00:20:22.776 ************************************ 00:20:22.776 11:50:52 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/host/fio.sh --transport=tcp 00:20:22.776 * Looking for test storage... 00:20:22.776 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/host 00:20:22.776 11:50:52 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:20:22.776 11:50:52 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1693 -- # lcov --version 00:20:22.776 11:50:52 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:20:22.776 11:50:52 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:20:22.776 11:50:52 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:20:22.776 11:50:52 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@333 -- # local ver1 ver1_l 00:20:22.776 11:50:52 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@334 -- # local ver2 ver2_l 00:20:22.776 11:50:52 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@336 -- # IFS=.-: 00:20:22.776 11:50:52 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@336 -- # read -ra ver1 00:20:22.776 11:50:52 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@337 -- # IFS=.-: 00:20:22.776 11:50:52 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@337 -- # read -ra ver2 00:20:22.776 11:50:52 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@338 -- # local 'op=<' 00:20:22.776 11:50:52 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@340 -- # ver1_l=2 00:20:22.776 11:50:52 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@341 -- # ver2_l=1 00:20:22.776 11:50:52 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:20:22.776 11:50:52 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@344 -- # case "$op" in 00:20:22.776 11:50:52 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@345 -- # : 1 00:20:22.776 11:50:52 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@364 -- # (( v = 0 )) 00:20:22.776 11:50:52 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:20:22.776 11:50:52 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@365 -- # decimal 1 00:20:22.776 11:50:52 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@353 -- # local d=1 00:20:22.776 11:50:52 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:20:22.776 11:50:52 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@355 -- # echo 1 00:20:22.776 11:50:52 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@365 -- # ver1[v]=1 00:20:22.776 11:50:52 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@366 -- # decimal 2 00:20:22.776 11:50:52 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@353 -- # local d=2 00:20:22.776 11:50:52 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:20:22.776 11:50:52 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@355 -- # echo 2 00:20:22.776 11:50:52 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@366 -- # ver2[v]=2 00:20:22.776 11:50:52 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:20:22.776 11:50:52 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:20:22.776 11:50:52 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@368 -- # return 0 00:20:22.776 11:50:52 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:20:22.776 11:50:52 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:20:22.776 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:20:22.776 --rc genhtml_branch_coverage=1 00:20:22.776 --rc genhtml_function_coverage=1 00:20:22.776 --rc genhtml_legend=1 00:20:22.776 --rc geninfo_all_blocks=1 00:20:22.776 --rc geninfo_unexecuted_blocks=1 00:20:22.776 00:20:22.776 ' 00:20:22.776 11:50:52 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:20:22.776 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:20:22.776 --rc genhtml_branch_coverage=1 00:20:22.776 --rc genhtml_function_coverage=1 00:20:22.776 --rc genhtml_legend=1 00:20:22.776 --rc geninfo_all_blocks=1 00:20:22.776 --rc geninfo_unexecuted_blocks=1 00:20:22.776 00:20:22.776 ' 00:20:22.776 11:50:52 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:20:22.776 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:20:22.776 --rc genhtml_branch_coverage=1 00:20:22.776 --rc genhtml_function_coverage=1 00:20:22.776 --rc genhtml_legend=1 00:20:22.776 --rc geninfo_all_blocks=1 00:20:22.776 --rc geninfo_unexecuted_blocks=1 00:20:22.776 00:20:22.776 ' 00:20:22.776 11:50:52 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:20:22.776 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:20:22.776 --rc genhtml_branch_coverage=1 00:20:22.776 --rc genhtml_function_coverage=1 00:20:22.776 --rc genhtml_legend=1 00:20:22.776 --rc geninfo_all_blocks=1 00:20:22.776 --rc geninfo_unexecuted_blocks=1 00:20:22.776 00:20:22.776 ' 00:20:22.776 11:50:52 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@9 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:20:22.776 11:50:52 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@15 -- # shopt -s extglob 00:20:22.776 11:50:52 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:20:22.776 11:50:52 nvmf_tcp.nvmf_host.nvmf_fio_host -- 
scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:20:22.776 11:50:52 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:20:22.776 11:50:52 nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:22.776 11:50:52 nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:22.776 11:50:52 nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:22.776 11:50:52 nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@5 -- # export PATH 00:20:22.776 11:50:52 nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:22.776 11:50:52 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:20:22.776 11:50:52 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@7 -- # uname -s 00:20:22.776 11:50:52 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:20:22.776 11:50:52 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:20:22.776 11:50:52 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:20:22.776 11:50:52 
nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:20:22.776 11:50:52 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:20:22.776 11:50:52 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:20:22.776 11:50:52 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:20:22.776 11:50:52 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:20:22.776 11:50:52 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:20:22.776 11:50:52 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:20:22.776 11:50:52 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:f820f793-c892-4aa4-a8a4-5ed3fda41d6c 00:20:22.776 11:50:52 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@18 -- # NVME_HOSTID=f820f793-c892-4aa4-a8a4-5ed3fda41d6c 00:20:22.776 11:50:52 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:20:22.776 11:50:52 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:20:22.776 11:50:52 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:20:22.776 11:50:52 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:20:22.776 11:50:52 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:20:22.776 11:50:52 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@15 -- # shopt -s extglob 00:20:22.776 11:50:52 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:20:22.776 11:50:52 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:20:22.776 11:50:52 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:20:22.776 11:50:52 nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:22.776 11:50:52 nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:22.776 11:50:52 
nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:22.776 11:50:52 nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@5 -- # export PATH 00:20:22.776 11:50:52 nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:22.776 11:50:52 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@51 -- # : 0 00:20:22.776 11:50:52 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:20:22.776 11:50:52 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:20:22.776 11:50:52 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:20:22.776 11:50:52 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:20:22.777 11:50:52 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:20:22.777 11:50:52 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:20:22.777 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:20:22.777 11:50:52 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:20:22.777 11:50:52 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:20:22.777 11:50:52 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@55 -- # have_pci_nics=0 00:20:22.777 11:50:52 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@12 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:20:22.777 11:50:52 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@14 -- # nvmftestinit 00:20:22.777 11:50:52 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:20:22.777 11:50:52 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:20:22.777 11:50:52 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@476 -- # prepare_net_devs 00:20:22.777 11:50:52 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@438 -- # local -g is_hw=no 00:20:22.777 11:50:52 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@440 -- # remove_spdk_ns 00:20:22.777 11:50:52 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 
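The NVME_HOST arguments assembled a little earlier in the trace pair a freshly generated host NQN with a host ID, for use by tests that call nvme connect. A minimal sketch of that derivation, assuming the host ID is taken from the UUID suffix of the generated NQN (consistent with the f820f793-... values shown above):

NVME_HOSTNQN=$(nvme gen-hostnqn)        # e.g. nqn.2014-08.org.nvmexpress:uuid:<uuid>
NVME_HOSTID=${NVME_HOSTNQN##*:}         # assumed: keep only the <uuid> part
NVME_HOST=(--hostnqn="$NVME_HOSTNQN" --hostid="$NVME_HOSTID")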
00:20:22.777 11:50:52 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:20:22.777 11:50:52 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:20:22.777 11:50:52 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@442 -- # [[ virt != virt ]] 00:20:22.777 11:50:52 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@444 -- # [[ no == yes ]] 00:20:22.777 11:50:52 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@451 -- # [[ virt == phy ]] 00:20:22.777 11:50:52 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@454 -- # [[ virt == phy-fallback ]] 00:20:22.777 11:50:52 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@459 -- # [[ tcp == tcp ]] 00:20:22.777 11:50:52 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@460 -- # nvmf_veth_init 00:20:22.777 11:50:52 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@145 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:20:22.777 11:50:52 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@146 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.2 00:20:22.777 11:50:52 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@147 -- # NVMF_FIRST_TARGET_IP=10.0.0.3 00:20:22.777 11:50:52 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@148 -- # NVMF_SECOND_TARGET_IP=10.0.0.4 00:20:22.777 11:50:52 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@149 -- # NVMF_INITIATOR_IP=10.0.0.1 00:20:22.777 11:50:52 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@150 -- # NVMF_BRIDGE=nvmf_br 00:20:22.777 11:50:52 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@151 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:20:22.777 11:50:52 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@152 -- # NVMF_INITIATOR_INTERFACE2=nvmf_init_if2 00:20:22.777 11:50:52 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@153 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:20:22.777 11:50:52 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@154 -- # NVMF_INITIATOR_BRIDGE2=nvmf_init_br2 00:20:22.777 11:50:52 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@155 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:20:22.777 11:50:52 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@156 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:20:22.777 11:50:52 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@157 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:20:22.777 11:50:52 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@158 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:20:22.777 11:50:52 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@159 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:20:22.777 11:50:52 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@160 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:20:22.777 11:50:52 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@162 -- # ip link set nvmf_init_br nomaster 00:20:23.034 Cannot find device "nvmf_init_br" 00:20:23.034 11:50:52 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@162 -- # true 00:20:23.034 11:50:52 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@163 -- # ip link set nvmf_init_br2 nomaster 00:20:23.034 Cannot find device "nvmf_init_br2" 00:20:23.034 11:50:52 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@163 -- # true 00:20:23.034 11:50:52 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@164 -- # ip link set nvmf_tgt_br nomaster 00:20:23.034 Cannot find device "nvmf_tgt_br" 00:20:23.034 11:50:52 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@164 -- # true 00:20:23.034 11:50:52 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@165 -- # ip link set 
nvmf_tgt_br2 nomaster 00:20:23.034 Cannot find device "nvmf_tgt_br2" 00:20:23.034 11:50:52 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@165 -- # true 00:20:23.034 11:50:52 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@166 -- # ip link set nvmf_init_br down 00:20:23.034 Cannot find device "nvmf_init_br" 00:20:23.034 11:50:52 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@166 -- # true 00:20:23.034 11:50:52 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@167 -- # ip link set nvmf_init_br2 down 00:20:23.034 Cannot find device "nvmf_init_br2" 00:20:23.034 11:50:52 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@167 -- # true 00:20:23.034 11:50:52 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@168 -- # ip link set nvmf_tgt_br down 00:20:23.034 Cannot find device "nvmf_tgt_br" 00:20:23.034 11:50:52 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@168 -- # true 00:20:23.034 11:50:52 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@169 -- # ip link set nvmf_tgt_br2 down 00:20:23.034 Cannot find device "nvmf_tgt_br2" 00:20:23.034 11:50:52 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@169 -- # true 00:20:23.034 11:50:52 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@170 -- # ip link delete nvmf_br type bridge 00:20:23.034 Cannot find device "nvmf_br" 00:20:23.034 11:50:53 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@170 -- # true 00:20:23.034 11:50:53 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@171 -- # ip link delete nvmf_init_if 00:20:23.034 Cannot find device "nvmf_init_if" 00:20:23.034 11:50:53 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@171 -- # true 00:20:23.034 11:50:53 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@172 -- # ip link delete nvmf_init_if2 00:20:23.034 Cannot find device "nvmf_init_if2" 00:20:23.034 11:50:53 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@172 -- # true 00:20:23.034 11:50:53 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@173 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:20:23.034 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:20:23.034 11:50:53 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@173 -- # true 00:20:23.034 11:50:53 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@174 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:20:23.034 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:20:23.034 11:50:53 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@174 -- # true 00:20:23.034 11:50:53 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@177 -- # ip netns add nvmf_tgt_ns_spdk 00:20:23.034 11:50:53 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@180 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:20:23.034 11:50:53 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@181 -- # ip link add nvmf_init_if2 type veth peer name nvmf_init_br2 00:20:23.034 11:50:53 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@182 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:20:23.034 11:50:53 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@183 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:20:23.034 11:50:53 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:20:23.034 11:50:53 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@187 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:20:23.034 11:50:53 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@190 -- # ip addr add 10.0.0.1/24 dev 
nvmf_init_if 00:20:23.034 11:50:53 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@191 -- # ip addr add 10.0.0.2/24 dev nvmf_init_if2 00:20:23.034 11:50:53 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@192 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if 00:20:23.034 11:50:53 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@193 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2 00:20:23.034 11:50:53 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@196 -- # ip link set nvmf_init_if up 00:20:23.034 11:50:53 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@197 -- # ip link set nvmf_init_if2 up 00:20:23.034 11:50:53 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@198 -- # ip link set nvmf_init_br up 00:20:23.034 11:50:53 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@199 -- # ip link set nvmf_init_br2 up 00:20:23.034 11:50:53 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@200 -- # ip link set nvmf_tgt_br up 00:20:23.292 11:50:53 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@201 -- # ip link set nvmf_tgt_br2 up 00:20:23.292 11:50:53 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@202 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:20:23.292 11:50:53 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@203 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:20:23.292 11:50:53 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@204 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:20:23.292 11:50:53 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@207 -- # ip link add nvmf_br type bridge 00:20:23.292 11:50:53 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@208 -- # ip link set nvmf_br up 00:20:23.292 11:50:53 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@211 -- # ip link set nvmf_init_br master nvmf_br 00:20:23.292 11:50:53 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@212 -- # ip link set nvmf_init_br2 master nvmf_br 00:20:23.292 11:50:53 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@213 -- # ip link set nvmf_tgt_br master nvmf_br 00:20:23.292 11:50:53 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@214 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:20:23.292 11:50:53 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@217 -- # ipts -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:20:23.292 11:50:53 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT' 00:20:23.292 11:50:53 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@218 -- # ipts -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT 00:20:23.292 11:50:53 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT' 00:20:23.292 11:50:53 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@219 -- # ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:20:23.292 11:50:53 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@790 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT' 00:20:23.292 11:50:53 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@222 -- # ping -c 1 10.0.0.3 00:20:23.292 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 
00:20:23.292 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.066 ms 00:20:23.292 00:20:23.292 --- 10.0.0.3 ping statistics --- 00:20:23.292 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:20:23.292 rtt min/avg/max/mdev = 0.066/0.066/0.066/0.000 ms 00:20:23.292 11:50:53 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@223 -- # ping -c 1 10.0.0.4 00:20:23.292 PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data. 00:20:23.292 64 bytes from 10.0.0.4: icmp_seq=1 ttl=64 time=0.045 ms 00:20:23.292 00:20:23.292 --- 10.0.0.4 ping statistics --- 00:20:23.292 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:20:23.292 rtt min/avg/max/mdev = 0.045/0.045/0.045/0.000 ms 00:20:23.292 11:50:53 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@224 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:20:23.292 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:20:23.292 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.022 ms 00:20:23.292 00:20:23.292 --- 10.0.0.1 ping statistics --- 00:20:23.292 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:20:23.292 rtt min/avg/max/mdev = 0.022/0.022/0.022/0.000 ms 00:20:23.292 11:50:53 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@225 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.2 00:20:23.292 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:20:23.292 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.049 ms 00:20:23.292 00:20:23.292 --- 10.0.0.2 ping statistics --- 00:20:23.292 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:20:23.292 rtt min/avg/max/mdev = 0.049/0.049/0.049/0.000 ms 00:20:23.292 11:50:53 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@227 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:20:23.292 11:50:53 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@461 -- # return 0 00:20:23.292 11:50:53 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:20:23.292 11:50:53 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:20:23.292 11:50:53 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:20:23.292 11:50:53 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:20:23.292 11:50:53 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:20:23.292 11:50:53 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:20:23.292 11:50:53 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:20:23.292 11:50:53 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@16 -- # [[ y != y ]] 00:20:23.292 11:50:53 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@21 -- # timing_enter start_nvmf_tgt 00:20:23.292 11:50:53 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@726 -- # xtrace_disable 00:20:23.292 11:50:53 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@10 -- # set +x 00:20:23.292 11:50:53 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@24 -- # nvmfpid=91720 00:20:23.292 11:50:53 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@23 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:20:23.292 11:50:53 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@26 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:20:23.292 11:50:53 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@28 -- # waitforlisten 91720 00:20:23.292 11:50:53 nvmf_tcp.nvmf_host.nvmf_fio_host -- 
common/autotest_common.sh@835 -- # '[' -z 91720 ']' 00:20:23.293 11:50:53 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:20:23.293 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:20:23.293 11:50:53 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@840 -- # local max_retries=100 00:20:23.293 11:50:53 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:20:23.293 11:50:53 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@844 -- # xtrace_disable 00:20:23.293 11:50:53 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@10 -- # set +x 00:20:23.293 [2024-11-28 11:50:53.356973] Starting SPDK v25.01-pre git sha1 35cd3e84d / DPDK 24.11.0-rc4 initialization... 00:20:23.293 [2024-11-28 11:50:53.357043] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:20:23.551 [2024-11-28 11:50:53.480787] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.11.0-rc4 is used. There is no support for it in SPDK. Enabled only for validation. 00:20:23.551 [2024-11-28 11:50:53.504446] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:20:23.551 [2024-11-28 11:50:53.542753] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:20:23.551 [2024-11-28 11:50:53.543048] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:20:23.551 [2024-11-28 11:50:53.543186] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:20:23.551 [2024-11-28 11:50:53.543321] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:20:23.551 [2024-11-28 11:50:53.543365] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
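Everything nvmftestinit has done up to this point amounts to building a veth/bridge test network and launching nvmf_tgt inside a network namespace. A condensed sketch of those steps, keeping only the first initiator/target pair; the second pair (nvmf_init_if2 / nvmf_tgt_if2 on 10.0.0.2 / 10.0.0.4) is set up the same way in the trace:

# namespace for the target side
ip netns add nvmf_tgt_ns_spdk

# veth pairs: the initiator end stays in the root namespace, the target end moves into the netns
ip link add nvmf_init_if type veth peer name nvmf_init_br
ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br
ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk

# addresses and link state
ip addr add 10.0.0.1/24 dev nvmf_init_if
ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if
ip link set nvmf_init_if up
ip link set nvmf_init_br up
ip link set nvmf_tgt_br up
ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
ip netns exec nvmf_tgt_ns_spdk ip link set lo up

# bridge tying the two pairs together, plus firewall accepts for port 4420
ip link add nvmf_br type bridge
ip link set nvmf_br up
ip link set nvmf_init_br master nvmf_br
ip link set nvmf_tgt_br master nvmf_br
iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT
iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT

# reachability check, then the target application inside the namespace
ping -c 1 10.0.0.3
ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF &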
00:20:23.551 [2024-11-28 11:50:53.544553] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:20:23.551 [2024-11-28 11:50:53.544607] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:20:23.551 [2024-11-28 11:50:53.544679] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:20:23.551 [2024-11-28 11:50:53.544683] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:20:23.551 [2024-11-28 11:50:53.602912] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:20:23.551 11:50:53 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:20:23.551 11:50:53 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@868 -- # return 0 00:20:23.551 11:50:53 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@29 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:20:24.116 [2024-11-28 11:50:53.978215] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:20:24.116 11:50:54 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@30 -- # timing_exit start_nvmf_tgt 00:20:24.116 11:50:54 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@732 -- # xtrace_disable 00:20:24.116 11:50:54 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@10 -- # set +x 00:20:24.116 11:50:54 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@32 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc1 00:20:24.373 Malloc1 00:20:24.373 11:50:54 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@33 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:20:24.632 11:50:54 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:20:24.897 11:50:54 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@35 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 00:20:25.162 [2024-11-28 11:50:55.219906] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:20:25.162 11:50:55 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@36 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.3 -s 4420 00:20:25.420 11:50:55 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@38 -- # PLUGIN_DIR=/home/vagrant/spdk_repo/spdk/app/fio/nvme 00:20:25.420 11:50:55 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@41 -- # fio_nvme /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.3 trsvcid=4420 ns=1' --bs=4096 00:20:25.420 11:50:55 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1364 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.3 trsvcid=4420 ns=1' --bs=4096 00:20:25.420 11:50:55 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio 00:20:25.420 11:50:55 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:20:25.420 11:50:55 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1343 -- # local sanitizers 00:20:25.420 11:50:55 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1344 -- # local 
plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 00:20:25.420 11:50:55 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1345 -- # shift 00:20:25.420 11:50:55 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1347 -- # local asan_lib= 00:20:25.420 11:50:55 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:20:25.420 11:50:55 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 00:20:25.420 11:50:55 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:20:25.420 11:50:55 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # grep libasan 00:20:25.679 11:50:55 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # asan_lib= 00:20:25.679 11:50:55 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1350 -- # [[ -n '' ]] 00:20:25.679 11:50:55 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:20:25.679 11:50:55 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 00:20:25.679 11:50:55 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:20:25.679 11:50:55 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # grep libclang_rt.asan 00:20:25.679 11:50:55 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # asan_lib= 00:20:25.679 11:50:55 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1350 -- # [[ -n '' ]] 00:20:25.679 11:50:55 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1356 -- # LD_PRELOAD=' /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme' 00:20:25.679 11:50:55 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.3 trsvcid=4420 ns=1' --bs=4096 00:20:25.679 test: (g=0): rw=randrw, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk, iodepth=128 00:20:25.679 fio-3.35 00:20:25.679 Starting 1 thread 00:20:28.211 00:20:28.211 test: (groupid=0, jobs=1): err= 0: pid=91794: Thu Nov 28 11:50:58 2024 00:20:28.211 read: IOPS=8825, BW=34.5MiB/s (36.1MB/s)(69.2MiB/2007msec) 00:20:28.211 slat (nsec): min=1828, max=373964, avg=2486.86, stdev=3782.75 00:20:28.211 clat (usec): min=2733, max=13477, avg=7551.94, stdev=596.19 00:20:28.211 lat (usec): min=2782, max=13479, avg=7554.42, stdev=596.00 00:20:28.211 clat percentiles (usec): 00:20:28.211 | 1.00th=[ 6325], 5.00th=[ 6652], 10.00th=[ 6849], 20.00th=[ 7111], 00:20:28.211 | 30.00th=[ 7242], 40.00th=[ 7373], 50.00th=[ 7504], 60.00th=[ 7635], 00:20:28.211 | 70.00th=[ 7832], 80.00th=[ 8029], 90.00th=[ 8291], 95.00th=[ 8455], 00:20:28.211 | 99.00th=[ 8979], 99.50th=[ 9110], 99.90th=[11338], 99.95th=[12387], 00:20:28.211 | 99.99th=[13435] 00:20:28.211 bw ( KiB/s): min=32800, max=36984, per=99.99%, avg=35296.00, stdev=1794.37, samples=4 00:20:28.211 iops : min= 8200, max= 9246, avg=8824.00, stdev=448.59, samples=4 00:20:28.211 write: IOPS=8837, BW=34.5MiB/s (36.2MB/s)(69.3MiB/2007msec); 0 zone resets 00:20:28.211 slat (nsec): min=1910, max=271580, avg=2569.14, stdev=2607.19 00:20:28.211 clat (usec): min=2595, max=12555, avg=6880.14, stdev=552.46 00:20:28.211 lat (usec): min=2609, max=12557, avg=6882.71, stdev=552.42 00:20:28.211 
clat percentiles (usec): 00:20:28.211 | 1.00th=[ 5735], 5.00th=[ 6063], 10.00th=[ 6259], 20.00th=[ 6456], 00:20:28.211 | 30.00th=[ 6587], 40.00th=[ 6718], 50.00th=[ 6849], 60.00th=[ 6980], 00:20:28.211 | 70.00th=[ 7111], 80.00th=[ 7308], 90.00th=[ 7570], 95.00th=[ 7767], 00:20:28.211 | 99.00th=[ 8160], 99.50th=[ 8455], 99.90th=[10945], 99.95th=[11600], 00:20:28.211 | 99.99th=[12518] 00:20:28.211 bw ( KiB/s): min=33672, max=36992, per=99.99%, avg=35346.00, stdev=1357.75, samples=4 00:20:28.211 iops : min= 8418, max= 9248, avg=8836.50, stdev=339.44, samples=4 00:20:28.211 lat (msec) : 4=0.08%, 10=99.75%, 20=0.17% 00:20:28.211 cpu : usr=72.58%, sys=20.79%, ctx=3, majf=0, minf=4 00:20:28.211 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.1%, >=64=99.8% 00:20:28.211 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:20:28.211 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:20:28.211 issued rwts: total=17712,17736,0,0 short=0,0,0,0 dropped=0,0,0,0 00:20:28.211 latency : target=0, window=0, percentile=100.00%, depth=128 00:20:28.211 00:20:28.211 Run status group 0 (all jobs): 00:20:28.211 READ: bw=34.5MiB/s (36.1MB/s), 34.5MiB/s-34.5MiB/s (36.1MB/s-36.1MB/s), io=69.2MiB (72.5MB), run=2007-2007msec 00:20:28.211 WRITE: bw=34.5MiB/s (36.2MB/s), 34.5MiB/s-34.5MiB/s (36.2MB/s-36.2MB/s), io=69.3MiB (72.6MB), run=2007-2007msec 00:20:28.211 11:50:58 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@45 -- # fio_nvme /home/vagrant/spdk_repo/spdk/app/fio/nvme/mock_sgl_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.3 trsvcid=4420 ns=1' 00:20:28.211 11:50:58 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1364 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme /home/vagrant/spdk_repo/spdk/app/fio/nvme/mock_sgl_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.3 trsvcid=4420 ns=1' 00:20:28.211 11:50:58 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio 00:20:28.211 11:50:58 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:20:28.211 11:50:58 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1343 -- # local sanitizers 00:20:28.211 11:50:58 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1344 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 00:20:28.211 11:50:58 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1345 -- # shift 00:20:28.211 11:50:58 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1347 -- # local asan_lib= 00:20:28.211 11:50:58 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:20:28.211 11:50:58 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 00:20:28.211 11:50:58 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # grep libasan 00:20:28.211 11:50:58 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:20:28.212 11:50:58 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # asan_lib= 00:20:28.212 11:50:58 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1350 -- # [[ -n '' ]] 00:20:28.212 11:50:58 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:20:28.212 11:50:58 nvmf_tcp.nvmf_host.nvmf_fio_host -- 
common/autotest_common.sh@1349 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 00:20:28.212 11:50:58 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # grep libclang_rt.asan 00:20:28.212 11:50:58 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:20:28.212 11:50:58 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # asan_lib= 00:20:28.212 11:50:58 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1350 -- # [[ -n '' ]] 00:20:28.212 11:50:58 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1356 -- # LD_PRELOAD=' /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme' 00:20:28.212 11:50:58 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio /home/vagrant/spdk_repo/spdk/app/fio/nvme/mock_sgl_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.3 trsvcid=4420 ns=1' 00:20:28.212 test: (g=0): rw=randrw, bs=(R) 16.0KiB-16.0KiB, (W) 16.0KiB-16.0KiB, (T) 16.0KiB-16.0KiB, ioengine=spdk, iodepth=128 00:20:28.212 fio-3.35 00:20:28.212 Starting 1 thread 00:20:30.743 00:20:30.743 test: (groupid=0, jobs=1): err= 0: pid=91844: Thu Nov 28 11:51:00 2024 00:20:30.743 read: IOPS=7928, BW=124MiB/s (130MB/s)(249MiB/2010msec) 00:20:30.743 slat (usec): min=2, max=108, avg= 3.69, stdev= 2.29 00:20:30.743 clat (usec): min=1586, max=19346, avg=9054.88, stdev=2722.29 00:20:30.743 lat (usec): min=1589, max=19349, avg=9058.57, stdev=2722.32 00:20:30.743 clat percentiles (usec): 00:20:30.743 | 1.00th=[ 4178], 5.00th=[ 5211], 10.00th=[ 5800], 20.00th=[ 6783], 00:20:30.743 | 30.00th=[ 7504], 40.00th=[ 8160], 50.00th=[ 8717], 60.00th=[ 9372], 00:20:30.743 | 70.00th=[10159], 80.00th=[10945], 90.00th=[12780], 95.00th=[14484], 00:20:30.743 | 99.00th=[16909], 99.50th=[17171], 99.90th=[18220], 99.95th=[18220], 00:20:30.743 | 99.99th=[18482] 00:20:30.744 bw ( KiB/s): min=59520, max=75968, per=51.92%, avg=65864.00, stdev=7529.75, samples=4 00:20:30.744 iops : min= 3720, max= 4748, avg=4116.50, stdev=470.61, samples=4 00:20:30.744 write: IOPS=4623, BW=72.2MiB/s (75.8MB/s)(135MiB/1867msec); 0 zone resets 00:20:30.744 slat (usec): min=31, max=398, avg=37.83, stdev=10.08 00:20:30.744 clat (usec): min=1632, max=25936, avg=12436.17, stdev=2288.98 00:20:30.744 lat (usec): min=1664, max=25969, avg=12474.00, stdev=2290.06 00:20:30.744 clat percentiles (usec): 00:20:30.744 | 1.00th=[ 7963], 5.00th=[ 9110], 10.00th=[ 9634], 20.00th=[10421], 00:20:30.744 | 30.00th=[11076], 40.00th=[11731], 50.00th=[12256], 60.00th=[12911], 00:20:30.744 | 70.00th=[13435], 80.00th=[14353], 90.00th=[15401], 95.00th=[16450], 00:20:30.744 | 99.00th=[18482], 99.50th=[19006], 99.90th=[20579], 99.95th=[20841], 00:20:30.744 | 99.99th=[25822] 00:20:30.744 bw ( KiB/s): min=61472, max=79232, per=92.65%, avg=68536.00, stdev=8021.87, samples=4 00:20:30.744 iops : min= 3842, max= 4952, avg=4283.50, stdev=501.37, samples=4 00:20:30.744 lat (msec) : 2=0.03%, 4=0.42%, 10=48.93%, 20=50.55%, 50=0.06% 00:20:30.744 cpu : usr=83.08%, sys=13.19%, ctx=9, majf=0, minf=18 00:20:30.744 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.3%, 32=0.7%, >=64=98.7% 00:20:30.744 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:20:30.744 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:20:30.744 issued rwts: total=15936,8632,0,0 short=0,0,0,0 dropped=0,0,0,0 00:20:30.744 latency : target=0, window=0, percentile=100.00%, depth=128 00:20:30.744 00:20:30.744 Run status group 0 (all jobs): 
00:20:30.744 READ: bw=124MiB/s (130MB/s), 124MiB/s-124MiB/s (130MB/s-130MB/s), io=249MiB (261MB), run=2010-2010msec 00:20:30.744 WRITE: bw=72.2MiB/s (75.8MB/s), 72.2MiB/s-72.2MiB/s (75.8MB/s-75.8MB/s), io=135MiB (141MB), run=1867-1867msec 00:20:30.744 11:51:00 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@47 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:20:31.003 11:51:00 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@49 -- # '[' 1 -eq 1 ']' 00:20:31.003 11:51:00 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@51 -- # bdfs=($(get_nvme_bdfs)) 00:20:31.003 11:51:00 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@51 -- # get_nvme_bdfs 00:20:31.003 11:51:00 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1498 -- # bdfs=() 00:20:31.003 11:51:00 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1498 -- # local bdfs 00:20:31.003 11:51:00 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1499 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:20:31.003 11:51:00 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1499 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 00:20:31.003 11:51:00 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1499 -- # jq -r '.config[].params.traddr' 00:20:31.003 11:51:00 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1500 -- # (( 2 == 0 )) 00:20:31.003 11:51:00 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1504 -- # printf '%s\n' 0000:00:10.0 0000:00:11.0 00:20:31.003 11:51:00 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@52 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_attach_controller -b Nvme0 -t PCIe -a 0000:00:10.0 -i 10.0.0.3 00:20:31.262 Nvme0n1 00:20:31.262 11:51:01 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@53 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create_lvstore -c 1073741824 Nvme0n1 lvs_0 00:20:31.520 11:51:01 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@53 -- # ls_guid=78803aa5-15d3-48d6-9524-3d70e0f7dad2 00:20:31.520 11:51:01 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@54 -- # get_lvs_free_mb 78803aa5-15d3-48d6-9524-3d70e0f7dad2 00:20:31.520 11:51:01 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1368 -- # local lvs_uuid=78803aa5-15d3-48d6-9524-3d70e0f7dad2 00:20:31.520 11:51:01 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1369 -- # local lvs_info 00:20:31.520 11:51:01 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1370 -- # local fc 00:20:31.520 11:51:01 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1371 -- # local cs 00:20:31.520 11:51:01 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1372 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores 00:20:31.778 11:51:01 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1372 -- # lvs_info='[ 00:20:31.778 { 00:20:31.778 "uuid": "78803aa5-15d3-48d6-9524-3d70e0f7dad2", 00:20:31.778 "name": "lvs_0", 00:20:31.778 "base_bdev": "Nvme0n1", 00:20:31.778 "total_data_clusters": 4, 00:20:31.778 "free_clusters": 4, 00:20:31.778 "block_size": 4096, 00:20:31.778 "cluster_size": 1073741824 00:20:31.778 } 00:20:31.778 ]' 00:20:31.778 11:51:01 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1373 -- # jq '.[] | select(.uuid=="78803aa5-15d3-48d6-9524-3d70e0f7dad2") .free_clusters' 00:20:32.037 11:51:01 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1373 -- # fc=4 
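With the malloc-backed subsystem torn down, the test moves on to a real PCIe NVMe device: it is attached as bdev Nvme0, an lvstore with 1 GiB clusters is created on it, and its free space (4 clusters x 1 GiB / 1 MiB = 4096 MiB, matching the free_clusters and cluster_size values above) sizes the logical volume that is then exported through a second subsystem. A condensed sketch assembled from the RPC calls in the trace; the lvol creation and cnode2 setup appear just below this point:

rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py

# attach the local PCIe controller and put an lvstore with 1 GiB clusters on it
$rpc bdev_nvme_attach_controller -b Nvme0 -t PCIe -a 0000:00:10.0 -i 10.0.0.3
$rpc bdev_lvol_create_lvstore -c 1073741824 Nvme0n1 lvs_0

# free space in MiB: free_clusters * cluster_size / 1 MiB = 4 * 1073741824 / 1048576 = 4096
$rpc bdev_lvol_get_lvstores

# carve a logical volume of that size and export it over a second TCP subsystem
$rpc bdev_lvol_create -l lvs_0 lbd_0 4096
$rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 -a -s SPDK00000000000001
$rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 lvs_0/lbd_0
$rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.3 -s 4420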
00:20:32.037 11:51:01 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1374 -- # jq '.[] | select(.uuid=="78803aa5-15d3-48d6-9524-3d70e0f7dad2") .cluster_size' 00:20:32.037 4096 00:20:32.037 11:51:01 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1374 -- # cs=1073741824 00:20:32.037 11:51:01 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1377 -- # free_mb=4096 00:20:32.037 11:51:01 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1378 -- # echo 4096 00:20:32.037 11:51:01 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@55 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create -l lvs_0 lbd_0 4096 00:20:32.295 7ad94a59-5e10-43de-8030-2e07316d9170 00:20:32.295 11:51:02 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@56 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 -a -s SPDK00000000000001 00:20:32.555 11:51:02 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 lvs_0/lbd_0 00:20:32.814 11:51:02 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@58 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.3 -s 4420 00:20:33.073 11:51:02 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@59 -- # fio_nvme /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.3 trsvcid=4420 ns=1' --bs=4096 00:20:33.073 11:51:02 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1364 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.3 trsvcid=4420 ns=1' --bs=4096 00:20:33.073 11:51:02 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio 00:20:33.073 11:51:02 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:20:33.073 11:51:02 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1343 -- # local sanitizers 00:20:33.073 11:51:02 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1344 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 00:20:33.073 11:51:02 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1345 -- # shift 00:20:33.073 11:51:02 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1347 -- # local asan_lib= 00:20:33.073 11:51:02 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:20:33.073 11:51:02 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 00:20:33.073 11:51:02 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # grep libasan 00:20:33.073 11:51:02 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:20:33.073 11:51:03 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # asan_lib= 00:20:33.073 11:51:03 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1350 -- # [[ -n '' ]] 00:20:33.073 11:51:03 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:20:33.073 11:51:03 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 00:20:33.073 
11:51:03 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # grep libclang_rt.asan 00:20:33.073 11:51:03 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:20:33.073 11:51:03 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # asan_lib= 00:20:33.073 11:51:03 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1350 -- # [[ -n '' ]] 00:20:33.073 11:51:03 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1356 -- # LD_PRELOAD=' /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme' 00:20:33.073 11:51:03 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.3 trsvcid=4420 ns=1' --bs=4096 00:20:33.073 test: (g=0): rw=randrw, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk, iodepth=128 00:20:33.073 fio-3.35 00:20:33.073 Starting 1 thread 00:20:35.608 00:20:35.608 test: (groupid=0, jobs=1): err= 0: pid=91948: Thu Nov 28 11:51:05 2024 00:20:35.608 read: IOPS=6223, BW=24.3MiB/s (25.5MB/s)(48.8MiB/2009msec) 00:20:35.608 slat (nsec): min=1841, max=393246, avg=2811.52, stdev=4735.59 00:20:35.608 clat (usec): min=3033, max=18341, avg=10716.82, stdev=901.66 00:20:35.608 lat (usec): min=3042, max=18343, avg=10719.63, stdev=901.31 00:20:35.608 clat percentiles (usec): 00:20:35.608 | 1.00th=[ 8717], 5.00th=[ 9372], 10.00th=[ 9634], 20.00th=[10028], 00:20:35.608 | 30.00th=[10290], 40.00th=[10421], 50.00th=[10683], 60.00th=[10945], 00:20:35.608 | 70.00th=[11076], 80.00th=[11469], 90.00th=[11731], 95.00th=[12125], 00:20:35.608 | 99.00th=[12780], 99.50th=[13173], 99.90th=[15533], 99.95th=[17957], 00:20:35.608 | 99.99th=[18220] 00:20:35.608 bw ( KiB/s): min=23856, max=25384, per=99.91%, avg=24872.00, stdev=690.07, samples=4 00:20:35.608 iops : min= 5964, max= 6346, avg=6218.00, stdev=172.52, samples=4 00:20:35.608 write: IOPS=6214, BW=24.3MiB/s (25.5MB/s)(48.8MiB/2009msec); 0 zone resets 00:20:35.608 slat (nsec): min=1931, max=226108, avg=2939.31, stdev=3274.90 00:20:35.608 clat (usec): min=2438, max=16902, avg=9727.72, stdev=860.74 00:20:35.608 lat (usec): min=2452, max=16904, avg=9730.66, stdev=860.55 00:20:35.608 clat percentiles (usec): 00:20:35.608 | 1.00th=[ 7963], 5.00th=[ 8455], 10.00th=[ 8717], 20.00th=[ 9110], 00:20:35.608 | 30.00th=[ 9372], 40.00th=[ 9503], 50.00th=[ 9765], 60.00th=[ 9896], 00:20:35.608 | 70.00th=[10159], 80.00th=[10421], 90.00th=[10683], 95.00th=[10945], 00:20:35.608 | 99.00th=[11731], 99.50th=[12125], 99.90th=[15533], 99.95th=[16581], 00:20:35.608 | 99.99th=[16909] 00:20:35.608 bw ( KiB/s): min=24640, max=25024, per=99.97%, avg=24850.00, stdev=159.78, samples=4 00:20:35.608 iops : min= 6160, max= 6256, avg=6212.50, stdev=39.95, samples=4 00:20:35.608 lat (msec) : 4=0.06%, 10=41.69%, 20=58.25% 00:20:35.608 cpu : usr=72.66%, sys=21.46%, ctx=4, majf=0, minf=20 00:20:35.608 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.1%, >=64=99.7% 00:20:35.608 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:20:35.608 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:20:35.608 issued rwts: total=12503,12485,0,0 short=0,0,0,0 dropped=0,0,0,0 00:20:35.608 latency : target=0, window=0, percentile=100.00%, depth=128 00:20:35.608 00:20:35.608 Run status group 0 (all jobs): 00:20:35.608 READ: bw=24.3MiB/s (25.5MB/s), 24.3MiB/s-24.3MiB/s (25.5MB/s-25.5MB/s), io=48.8MiB (51.2MB), 
run=2009-2009msec 00:20:35.608 WRITE: bw=24.3MiB/s (25.5MB/s), 24.3MiB/s-24.3MiB/s (25.5MB/s-25.5MB/s), io=48.8MiB (51.1MB), run=2009-2009msec 00:20:35.608 11:51:05 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@61 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode2 00:20:35.868 11:51:05 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create_lvstore --clear-method none lvs_0/lbd_0 lvs_n_0 00:20:36.127 11:51:06 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@64 -- # ls_nested_guid=091b3e1f-0cb1-4dba-9469-8420a28badc6 00:20:36.127 11:51:06 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@65 -- # get_lvs_free_mb 091b3e1f-0cb1-4dba-9469-8420a28badc6 00:20:36.127 11:51:06 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1368 -- # local lvs_uuid=091b3e1f-0cb1-4dba-9469-8420a28badc6 00:20:36.127 11:51:06 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1369 -- # local lvs_info 00:20:36.127 11:51:06 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1370 -- # local fc 00:20:36.127 11:51:06 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1371 -- # local cs 00:20:36.127 11:51:06 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1372 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores 00:20:36.386 11:51:06 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1372 -- # lvs_info='[ 00:20:36.386 { 00:20:36.386 "uuid": "78803aa5-15d3-48d6-9524-3d70e0f7dad2", 00:20:36.386 "name": "lvs_0", 00:20:36.386 "base_bdev": "Nvme0n1", 00:20:36.386 "total_data_clusters": 4, 00:20:36.386 "free_clusters": 0, 00:20:36.386 "block_size": 4096, 00:20:36.386 "cluster_size": 1073741824 00:20:36.386 }, 00:20:36.386 { 00:20:36.386 "uuid": "091b3e1f-0cb1-4dba-9469-8420a28badc6", 00:20:36.386 "name": "lvs_n_0", 00:20:36.386 "base_bdev": "7ad94a59-5e10-43de-8030-2e07316d9170", 00:20:36.386 "total_data_clusters": 1022, 00:20:36.386 "free_clusters": 1022, 00:20:36.386 "block_size": 4096, 00:20:36.386 "cluster_size": 4194304 00:20:36.386 } 00:20:36.386 ]' 00:20:36.386 11:51:06 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1373 -- # jq '.[] | select(.uuid=="091b3e1f-0cb1-4dba-9469-8420a28badc6") .free_clusters' 00:20:36.386 11:51:06 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1373 -- # fc=1022 00:20:36.386 11:51:06 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1374 -- # jq '.[] | select(.uuid=="091b3e1f-0cb1-4dba-9469-8420a28badc6") .cluster_size' 00:20:36.386 4088 00:20:36.386 11:51:06 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1374 -- # cs=4194304 00:20:36.386 11:51:06 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1377 -- # free_mb=4088 00:20:36.386 11:51:06 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1378 -- # echo 4088 00:20:36.386 11:51:06 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@66 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create -l lvs_n_0 lbd_nest_0 4088 00:20:36.645 4297eda5-d634-425d-8c5e-19c92907b68c 00:20:36.645 11:51:06 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@67 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode3 -a -s SPDK00000000000001 00:20:36.904 11:51:06 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@68 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode3 lvs_n_0/lbd_nest_0 00:20:37.163 11:51:07 
nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@69 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode3 -t tcp -a 10.0.0.3 -s 4420 00:20:37.423 11:51:07 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@70 -- # fio_nvme /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.3 trsvcid=4420 ns=1' --bs=4096 00:20:37.423 11:51:07 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1364 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.3 trsvcid=4420 ns=1' --bs=4096 00:20:37.423 11:51:07 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio 00:20:37.423 11:51:07 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:20:37.423 11:51:07 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1343 -- # local sanitizers 00:20:37.423 11:51:07 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1344 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 00:20:37.423 11:51:07 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1345 -- # shift 00:20:37.423 11:51:07 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1347 -- # local asan_lib= 00:20:37.423 11:51:07 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:20:37.423 11:51:07 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # grep libasan 00:20:37.423 11:51:07 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 00:20:37.423 11:51:07 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:20:37.423 11:51:07 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # asan_lib= 00:20:37.423 11:51:07 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1350 -- # [[ -n '' ]] 00:20:37.423 11:51:07 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:20:37.423 11:51:07 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 00:20:37.424 11:51:07 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # grep libclang_rt.asan 00:20:37.424 11:51:07 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:20:37.424 11:51:07 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # asan_lib= 00:20:37.424 11:51:07 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1350 -- # [[ -n '' ]] 00:20:37.424 11:51:07 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1356 -- # LD_PRELOAD=' /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme' 00:20:37.424 11:51:07 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.3 trsvcid=4420 ns=1' --bs=4096 00:20:37.683 test: (g=0): rw=randrw, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk, iodepth=128 00:20:37.683 fio-3.35 00:20:37.683 Starting 1 thread 00:20:40.220 00:20:40.220 test: (groupid=0, jobs=1): err= 0: pid=92031: Thu Nov 28 11:51:09 2024 00:20:40.220 read: 
IOPS=5198, BW=20.3MiB/s (21.3MB/s)(40.8MiB/2011msec) 00:20:40.220 slat (nsec): min=1810, max=315482, avg=2704.53, stdev=4546.21 00:20:40.220 clat (usec): min=3554, max=23284, avg=12896.60, stdev=1128.39 00:20:40.220 lat (usec): min=3563, max=23286, avg=12899.30, stdev=1127.98 00:20:40.220 clat percentiles (usec): 00:20:40.220 | 1.00th=[10421], 5.00th=[11207], 10.00th=[11600], 20.00th=[12125], 00:20:40.220 | 30.00th=[12387], 40.00th=[12649], 50.00th=[12911], 60.00th=[13173], 00:20:40.220 | 70.00th=[13435], 80.00th=[13698], 90.00th=[14222], 95.00th=[14615], 00:20:40.220 | 99.00th=[15401], 99.50th=[15926], 99.90th=[19792], 99.95th=[21365], 00:20:40.220 | 99.99th=[23200] 00:20:40.220 bw ( KiB/s): min=20312, max=21088, per=99.91%, avg=20774.00, stdev=363.65, samples=4 00:20:40.220 iops : min= 5078, max= 5272, avg=5193.50, stdev=90.91, samples=4 00:20:40.220 write: IOPS=5197, BW=20.3MiB/s (21.3MB/s)(40.8MiB/2011msec); 0 zone resets 00:20:40.220 slat (nsec): min=1875, max=304387, avg=2800.64, stdev=3818.37 00:20:40.220 clat (usec): min=2593, max=22348, avg=11636.72, stdev=1060.81 00:20:40.220 lat (usec): min=2607, max=22350, avg=11639.52, stdev=1060.62 00:20:40.220 clat percentiles (usec): 00:20:40.220 | 1.00th=[ 9372], 5.00th=[10159], 10.00th=[10421], 20.00th=[10814], 00:20:40.220 | 30.00th=[11207], 40.00th=[11338], 50.00th=[11600], 60.00th=[11863], 00:20:40.220 | 70.00th=[12125], 80.00th=[12387], 90.00th=[12911], 95.00th=[13173], 00:20:40.220 | 99.00th=[13960], 99.50th=[14484], 99.90th=[19792], 99.95th=[21365], 00:20:40.220 | 99.99th=[22414] 00:20:40.220 bw ( KiB/s): min=20544, max=21232, per=99.97%, avg=20786.00, stdev=318.96, samples=4 00:20:40.220 iops : min= 5136, max= 5308, avg=5196.50, stdev=79.74, samples=4 00:20:40.220 lat (msec) : 4=0.04%, 10=2.14%, 20=97.73%, 50=0.09% 00:20:40.220 cpu : usr=75.17%, sys=20.10%, ctx=5, majf=0, minf=20 00:20:40.220 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.2%, >=64=99.7% 00:20:40.220 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:20:40.220 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:20:40.220 issued rwts: total=10454,10453,0,0 short=0,0,0,0 dropped=0,0,0,0 00:20:40.220 latency : target=0, window=0, percentile=100.00%, depth=128 00:20:40.220 00:20:40.220 Run status group 0 (all jobs): 00:20:40.220 READ: bw=20.3MiB/s (21.3MB/s), 20.3MiB/s-20.3MiB/s (21.3MB/s-21.3MB/s), io=40.8MiB (42.8MB), run=2011-2011msec 00:20:40.220 WRITE: bw=20.3MiB/s (21.3MB/s), 20.3MiB/s-20.3MiB/s (21.3MB/s-21.3MB/s), io=40.8MiB (42.8MB), run=2011-2011msec 00:20:40.220 11:51:09 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@72 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode3 00:20:40.221 11:51:10 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@74 -- # sync 00:20:40.221 11:51:10 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@76 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -t 120 bdev_lvol_delete lvs_n_0/lbd_nest_0 00:20:40.479 11:51:10 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@77 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -l lvs_n_0 00:20:40.738 11:51:10 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@78 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete lvs_0/lbd_0 00:20:40.996 11:51:11 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@79 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -l lvs_0 00:20:41.307 11:51:11 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@80 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_detach_controller Nvme0 00:20:41.888 11:51:11 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@83 -- # trap - SIGINT SIGTERM EXIT 00:20:41.888 11:51:11 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@85 -- # rm -f ./local-test-0-verify.state 00:20:41.888 11:51:11 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@86 -- # nvmftestfini 00:20:41.888 11:51:11 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@516 -- # nvmfcleanup 00:20:41.888 11:51:11 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@121 -- # sync 00:20:41.888 11:51:11 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:20:41.888 11:51:11 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@124 -- # set +e 00:20:41.888 11:51:11 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@125 -- # for i in {1..20} 00:20:41.888 11:51:11 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:20:41.888 rmmod nvme_tcp 00:20:41.888 rmmod nvme_fabrics 00:20:41.888 rmmod nvme_keyring 00:20:41.888 11:51:11 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:20:41.888 11:51:11 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@128 -- # set -e 00:20:41.888 11:51:11 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@129 -- # return 0 00:20:41.888 11:51:11 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@517 -- # '[' -n 91720 ']' 00:20:41.888 11:51:11 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@518 -- # killprocess 91720 00:20:41.888 11:51:11 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@954 -- # '[' -z 91720 ']' 00:20:41.888 11:51:11 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@958 -- # kill -0 91720 00:20:41.888 11:51:11 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@959 -- # uname 00:20:41.888 11:51:11 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:20:41.888 11:51:11 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 91720 00:20:41.888 killing process with pid 91720 00:20:41.888 11:51:11 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:20:41.888 11:51:11 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:20:41.888 11:51:11 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@972 -- # echo 'killing process with pid 91720' 00:20:41.888 11:51:11 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@973 -- # kill 91720 00:20:41.888 11:51:11 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@978 -- # wait 91720 00:20:42.147 11:51:12 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:20:42.148 11:51:12 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:20:42.148 11:51:12 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:20:42.148 11:51:12 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@297 -- # iptr 00:20:42.148 11:51:12 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:20:42.148 11:51:12 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@791 -- # iptables-save 00:20:42.148 11:51:12 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@791 -- # iptables-restore 00:20:42.148 11:51:12 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@298 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:20:42.148 
11:51:12 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@299 -- # nvmf_veth_fini 00:20:42.148 11:51:12 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@233 -- # ip link set nvmf_init_br nomaster 00:20:42.148 11:51:12 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@234 -- # ip link set nvmf_init_br2 nomaster 00:20:42.148 11:51:12 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@235 -- # ip link set nvmf_tgt_br nomaster 00:20:42.148 11:51:12 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@236 -- # ip link set nvmf_tgt_br2 nomaster 00:20:42.148 11:51:12 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@237 -- # ip link set nvmf_init_br down 00:20:42.148 11:51:12 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@238 -- # ip link set nvmf_init_br2 down 00:20:42.148 11:51:12 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@239 -- # ip link set nvmf_tgt_br down 00:20:42.148 11:51:12 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@240 -- # ip link set nvmf_tgt_br2 down 00:20:42.148 11:51:12 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@241 -- # ip link delete nvmf_br type bridge 00:20:42.407 11:51:12 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@242 -- # ip link delete nvmf_init_if 00:20:42.407 11:51:12 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@243 -- # ip link delete nvmf_init_if2 00:20:42.407 11:51:12 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@244 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:20:42.407 11:51:12 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@245 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:20:42.407 11:51:12 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@246 -- # remove_spdk_ns 00:20:42.407 11:51:12 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:20:42.407 11:51:12 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:20:42.407 11:51:12 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:20:42.407 11:51:12 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@300 -- # return 0 00:20:42.407 ************************************ 00:20:42.407 END TEST nvmf_fio_host 00:20:42.407 ************************************ 00:20:42.407 00:20:42.407 real 0m19.729s 00:20:42.407 user 1m26.227s 00:20:42.407 sys 0m4.307s 00:20:42.407 11:51:12 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1130 -- # xtrace_disable 00:20:42.407 11:51:12 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@10 -- # set +x 00:20:42.407 11:51:12 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@25 -- # run_test nvmf_failover /home/vagrant/spdk_repo/spdk/test/nvmf/host/failover.sh --transport=tcp 00:20:42.407 11:51:12 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:20:42.407 11:51:12 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1111 -- # xtrace_disable 00:20:42.407 11:51:12 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:20:42.407 ************************************ 00:20:42.407 START TEST nvmf_failover 00:20:42.407 ************************************ 00:20:42.407 11:51:12 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/host/failover.sh --transport=tcp 00:20:42.668 * Looking for test storage... 
00:20:42.668 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/host 00:20:42.668 11:51:12 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:20:42.668 11:51:12 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@1693 -- # lcov --version 00:20:42.668 11:51:12 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:20:42.668 11:51:12 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:20:42.668 11:51:12 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:20:42.668 11:51:12 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@333 -- # local ver1 ver1_l 00:20:42.668 11:51:12 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@334 -- # local ver2 ver2_l 00:20:42.668 11:51:12 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@336 -- # IFS=.-: 00:20:42.668 11:51:12 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@336 -- # read -ra ver1 00:20:42.668 11:51:12 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@337 -- # IFS=.-: 00:20:42.668 11:51:12 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@337 -- # read -ra ver2 00:20:42.668 11:51:12 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@338 -- # local 'op=<' 00:20:42.668 11:51:12 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@340 -- # ver1_l=2 00:20:42.668 11:51:12 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@341 -- # ver2_l=1 00:20:42.668 11:51:12 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:20:42.668 11:51:12 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@344 -- # case "$op" in 00:20:42.668 11:51:12 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@345 -- # : 1 00:20:42.668 11:51:12 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@364 -- # (( v = 0 )) 00:20:42.668 11:51:12 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:20:42.668 11:51:12 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@365 -- # decimal 1 00:20:42.668 11:51:12 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@353 -- # local d=1 00:20:42.668 11:51:12 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:20:42.668 11:51:12 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@355 -- # echo 1 00:20:42.668 11:51:12 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@365 -- # ver1[v]=1 00:20:42.668 11:51:12 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@366 -- # decimal 2 00:20:42.668 11:51:12 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@353 -- # local d=2 00:20:42.668 11:51:12 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:20:42.668 11:51:12 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@355 -- # echo 2 00:20:42.668 11:51:12 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@366 -- # ver2[v]=2 00:20:42.668 11:51:12 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:20:42.668 11:51:12 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:20:42.668 11:51:12 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@368 -- # return 0 00:20:42.668 11:51:12 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:20:42.668 11:51:12 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:20:42.668 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:20:42.668 --rc genhtml_branch_coverage=1 00:20:42.668 --rc genhtml_function_coverage=1 00:20:42.668 --rc genhtml_legend=1 00:20:42.668 --rc geninfo_all_blocks=1 00:20:42.668 --rc geninfo_unexecuted_blocks=1 00:20:42.668 00:20:42.668 ' 00:20:42.668 11:51:12 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:20:42.668 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:20:42.668 --rc genhtml_branch_coverage=1 00:20:42.668 --rc genhtml_function_coverage=1 00:20:42.668 --rc genhtml_legend=1 00:20:42.668 --rc geninfo_all_blocks=1 00:20:42.668 --rc geninfo_unexecuted_blocks=1 00:20:42.668 00:20:42.668 ' 00:20:42.668 11:51:12 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:20:42.668 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:20:42.668 --rc genhtml_branch_coverage=1 00:20:42.668 --rc genhtml_function_coverage=1 00:20:42.668 --rc genhtml_legend=1 00:20:42.668 --rc geninfo_all_blocks=1 00:20:42.668 --rc geninfo_unexecuted_blocks=1 00:20:42.668 00:20:42.668 ' 00:20:42.668 11:51:12 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:20:42.668 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:20:42.668 --rc genhtml_branch_coverage=1 00:20:42.668 --rc genhtml_function_coverage=1 00:20:42.668 --rc genhtml_legend=1 00:20:42.668 --rc geninfo_all_blocks=1 00:20:42.668 --rc geninfo_unexecuted_blocks=1 00:20:42.668 00:20:42.668 ' 00:20:42.668 11:51:12 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:20:42.668 11:51:12 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@7 -- # uname -s 00:20:42.668 11:51:12 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:20:42.668 11:51:12 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@9 -- # 
NVMF_PORT=4420 00:20:42.668 11:51:12 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:20:42.668 11:51:12 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:20:42.668 11:51:12 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:20:42.668 11:51:12 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:20:42.668 11:51:12 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:20:42.668 11:51:12 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:20:42.668 11:51:12 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:20:42.668 11:51:12 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:20:42.668 11:51:12 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:f820f793-c892-4aa4-a8a4-5ed3fda41d6c 00:20:42.668 11:51:12 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@18 -- # NVME_HOSTID=f820f793-c892-4aa4-a8a4-5ed3fda41d6c 00:20:42.668 11:51:12 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:20:42.668 11:51:12 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:20:42.668 11:51:12 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:20:42.668 11:51:12 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:20:42.668 11:51:12 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:20:42.668 11:51:12 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@15 -- # shopt -s extglob 00:20:42.668 11:51:12 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:20:42.668 11:51:12 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:20:42.668 11:51:12 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:20:42.668 11:51:12 nvmf_tcp.nvmf_host.nvmf_failover -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:42.668 11:51:12 nvmf_tcp.nvmf_host.nvmf_failover -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:42.668 
11:51:12 nvmf_tcp.nvmf_host.nvmf_failover -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:42.668 11:51:12 nvmf_tcp.nvmf_host.nvmf_failover -- paths/export.sh@5 -- # export PATH 00:20:42.668 11:51:12 nvmf_tcp.nvmf_host.nvmf_failover -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:42.668 11:51:12 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@51 -- # : 0 00:20:42.668 11:51:12 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:20:42.668 11:51:12 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:20:42.668 11:51:12 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:20:42.668 11:51:12 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:20:42.668 11:51:12 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:20:42.668 11:51:12 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:20:42.668 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:20:42.668 11:51:12 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:20:42.669 11:51:12 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:20:42.669 11:51:12 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@55 -- # have_pci_nics=0 00:20:42.669 11:51:12 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@11 -- # MALLOC_BDEV_SIZE=64 00:20:42.669 11:51:12 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:20:42.669 11:51:12 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@14 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:20:42.669 11:51:12 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@16 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:20:42.669 11:51:12 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@18 -- # nvmftestinit 00:20:42.669 11:51:12 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:20:42.669 11:51:12 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:20:42.669 11:51:12 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@476 -- # prepare_net_devs 00:20:42.669 11:51:12 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@438 -- # local -g is_hw=no 
00:20:42.669 11:51:12 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@440 -- # remove_spdk_ns 00:20:42.669 11:51:12 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:20:42.669 11:51:12 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:20:42.669 11:51:12 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:20:42.669 11:51:12 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@442 -- # [[ virt != virt ]] 00:20:42.669 11:51:12 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@444 -- # [[ no == yes ]] 00:20:42.669 11:51:12 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@451 -- # [[ virt == phy ]] 00:20:42.669 11:51:12 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@454 -- # [[ virt == phy-fallback ]] 00:20:42.669 11:51:12 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@459 -- # [[ tcp == tcp ]] 00:20:42.669 11:51:12 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@460 -- # nvmf_veth_init 00:20:42.669 11:51:12 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@145 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:20:42.669 11:51:12 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@146 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.2 00:20:42.669 11:51:12 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@147 -- # NVMF_FIRST_TARGET_IP=10.0.0.3 00:20:42.669 11:51:12 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@148 -- # NVMF_SECOND_TARGET_IP=10.0.0.4 00:20:42.669 11:51:12 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@149 -- # NVMF_INITIATOR_IP=10.0.0.1 00:20:42.669 11:51:12 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@150 -- # NVMF_BRIDGE=nvmf_br 00:20:42.669 11:51:12 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@151 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:20:42.669 11:51:12 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@152 -- # NVMF_INITIATOR_INTERFACE2=nvmf_init_if2 00:20:42.669 11:51:12 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@153 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:20:42.669 11:51:12 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@154 -- # NVMF_INITIATOR_BRIDGE2=nvmf_init_br2 00:20:42.669 11:51:12 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@155 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:20:42.669 11:51:12 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@156 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:20:42.669 11:51:12 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@157 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:20:42.669 11:51:12 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@158 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:20:42.669 11:51:12 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@159 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:20:42.669 11:51:12 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@160 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:20:42.669 11:51:12 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@162 -- # ip link set nvmf_init_br nomaster 00:20:42.669 Cannot find device "nvmf_init_br" 00:20:42.669 11:51:12 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@162 -- # true 00:20:42.669 11:51:12 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@163 -- # ip link set nvmf_init_br2 nomaster 00:20:42.669 Cannot find device "nvmf_init_br2" 00:20:42.669 11:51:12 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@163 -- # true 00:20:42.669 11:51:12 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@164 -- # ip link set nvmf_tgt_br nomaster 
00:20:42.669 Cannot find device "nvmf_tgt_br" 00:20:42.669 11:51:12 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@164 -- # true 00:20:42.669 11:51:12 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@165 -- # ip link set nvmf_tgt_br2 nomaster 00:20:42.669 Cannot find device "nvmf_tgt_br2" 00:20:42.669 11:51:12 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@165 -- # true 00:20:42.669 11:51:12 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@166 -- # ip link set nvmf_init_br down 00:20:42.669 Cannot find device "nvmf_init_br" 00:20:42.669 11:51:12 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@166 -- # true 00:20:42.669 11:51:12 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@167 -- # ip link set nvmf_init_br2 down 00:20:42.669 Cannot find device "nvmf_init_br2" 00:20:42.669 11:51:12 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@167 -- # true 00:20:42.669 11:51:12 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@168 -- # ip link set nvmf_tgt_br down 00:20:42.669 Cannot find device "nvmf_tgt_br" 00:20:42.669 11:51:12 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@168 -- # true 00:20:42.669 11:51:12 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@169 -- # ip link set nvmf_tgt_br2 down 00:20:42.669 Cannot find device "nvmf_tgt_br2" 00:20:42.669 11:51:12 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@169 -- # true 00:20:42.669 11:51:12 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@170 -- # ip link delete nvmf_br type bridge 00:20:42.669 Cannot find device "nvmf_br" 00:20:42.669 11:51:12 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@170 -- # true 00:20:42.669 11:51:12 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@171 -- # ip link delete nvmf_init_if 00:20:42.669 Cannot find device "nvmf_init_if" 00:20:42.669 11:51:12 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@171 -- # true 00:20:42.669 11:51:12 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@172 -- # ip link delete nvmf_init_if2 00:20:42.669 Cannot find device "nvmf_init_if2" 00:20:42.669 11:51:12 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@172 -- # true 00:20:42.669 11:51:12 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@173 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:20:42.669 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:20:42.669 11:51:12 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@173 -- # true 00:20:42.669 11:51:12 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@174 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:20:42.928 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:20:42.928 11:51:12 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@174 -- # true 00:20:42.928 11:51:12 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@177 -- # ip netns add nvmf_tgt_ns_spdk 00:20:42.928 11:51:12 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@180 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:20:42.928 11:51:12 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@181 -- # ip link add nvmf_init_if2 type veth peer name nvmf_init_br2 00:20:42.928 11:51:12 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@182 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:20:42.928 11:51:12 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@183 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:20:42.928 11:51:12 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:20:42.928 
11:51:12 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@187 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:20:42.928 11:51:12 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@190 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:20:42.929 11:51:12 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@191 -- # ip addr add 10.0.0.2/24 dev nvmf_init_if2 00:20:42.929 11:51:12 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@192 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if 00:20:42.929 11:51:12 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@193 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2 00:20:42.929 11:51:12 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@196 -- # ip link set nvmf_init_if up 00:20:42.929 11:51:12 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@197 -- # ip link set nvmf_init_if2 up 00:20:42.929 11:51:12 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@198 -- # ip link set nvmf_init_br up 00:20:42.929 11:51:12 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@199 -- # ip link set nvmf_init_br2 up 00:20:42.929 11:51:12 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@200 -- # ip link set nvmf_tgt_br up 00:20:42.929 11:51:12 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@201 -- # ip link set nvmf_tgt_br2 up 00:20:42.929 11:51:12 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@202 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:20:42.929 11:51:12 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@203 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:20:42.929 11:51:12 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@204 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:20:42.929 11:51:12 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@207 -- # ip link add nvmf_br type bridge 00:20:42.929 11:51:12 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@208 -- # ip link set nvmf_br up 00:20:42.929 11:51:12 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@211 -- # ip link set nvmf_init_br master nvmf_br 00:20:42.929 11:51:12 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@212 -- # ip link set nvmf_init_br2 master nvmf_br 00:20:42.929 11:51:12 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@213 -- # ip link set nvmf_tgt_br master nvmf_br 00:20:42.929 11:51:12 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@214 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:20:42.929 11:51:13 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@217 -- # ipts -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:20:42.929 11:51:13 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT' 00:20:42.929 11:51:13 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@218 -- # ipts -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT 00:20:42.929 11:51:13 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT' 00:20:42.929 11:51:13 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@219 -- # ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:20:42.929 11:51:13 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@790 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j 
ACCEPT' 00:20:42.929 11:51:13 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@222 -- # ping -c 1 10.0.0.3 00:20:42.929 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:20:42.929 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.055 ms 00:20:42.929 00:20:42.929 --- 10.0.0.3 ping statistics --- 00:20:42.929 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:20:42.929 rtt min/avg/max/mdev = 0.055/0.055/0.055/0.000 ms 00:20:42.929 11:51:13 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@223 -- # ping -c 1 10.0.0.4 00:20:42.929 PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data. 00:20:42.929 64 bytes from 10.0.0.4: icmp_seq=1 ttl=64 time=0.042 ms 00:20:42.929 00:20:42.929 --- 10.0.0.4 ping statistics --- 00:20:42.929 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:20:42.929 rtt min/avg/max/mdev = 0.042/0.042/0.042/0.000 ms 00:20:42.929 11:51:13 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@224 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:20:42.929 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:20:42.929 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.022 ms 00:20:42.929 00:20:42.929 --- 10.0.0.1 ping statistics --- 00:20:42.929 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:20:42.929 rtt min/avg/max/mdev = 0.022/0.022/0.022/0.000 ms 00:20:42.929 11:51:13 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@225 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.2 00:20:42.929 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:20:42.929 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.082 ms 00:20:42.929 00:20:42.929 --- 10.0.0.2 ping statistics --- 00:20:42.929 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:20:42.929 rtt min/avg/max/mdev = 0.082/0.082/0.082/0.000 ms 00:20:42.929 11:51:13 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@227 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:20:42.929 11:51:13 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@461 -- # return 0 00:20:42.929 11:51:13 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:20:42.929 11:51:13 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:20:42.929 11:51:13 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:20:42.929 11:51:13 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:20:42.929 11:51:13 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:20:42.929 11:51:13 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:20:42.929 11:51:13 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:20:43.188 11:51:13 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@20 -- # nvmfappstart -m 0xE 00:20:43.188 11:51:13 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:20:43.188 11:51:13 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@726 -- # xtrace_disable 00:20:43.188 11:51:13 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:20:43.188 11:51:13 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@509 -- # nvmfpid=92321 00:20:43.188 11:51:13 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@508 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:20:43.188 11:51:13 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@510 -- # waitforlisten 92321 00:20:43.188 11:51:13 
nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@835 -- # '[' -z 92321 ']' 00:20:43.188 11:51:13 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:20:43.188 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:20:43.188 11:51:13 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@840 -- # local max_retries=100 00:20:43.188 11:51:13 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:20:43.188 11:51:13 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@844 -- # xtrace_disable 00:20:43.188 11:51:13 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:20:43.188 [2024-11-28 11:51:13.122127] Starting SPDK v25.01-pre git sha1 35cd3e84d / DPDK 24.11.0-rc4 initialization... 00:20:43.188 [2024-11-28 11:51:13.122427] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:20:43.188 [2024-11-28 11:51:13.250641] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.11.0-rc4 is used. There is no support for it in SPDK. Enabled only for validation. 00:20:43.188 [2024-11-28 11:51:13.281594] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:20:43.447 [2024-11-28 11:51:13.332345] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:20:43.447 [2024-11-28 11:51:13.332708] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:20:43.447 [2024-11-28 11:51:13.333023] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:20:43.447 [2024-11-28 11:51:13.333249] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:20:43.447 [2024-11-28 11:51:13.333404] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
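For reference, the virtual network fixture that the preceding trace assembles (nvmf_veth_init from test/nvmf/common.sh) condenses to roughly the shell sequence below. Namespace, interface, address, and port values are the ones printed in the log; the second initiator/target veth pair (10.0.0.2/10.0.0.4) follows the same pattern and is omitted here, so treat this as an illustrative sketch rather than the authoritative helper.

    # one initiator-side and one target-side veth pair, joined by a bridge
    ip netns add nvmf_tgt_ns_spdk
    ip link add nvmf_init_if type veth peer name nvmf_init_br
    ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br
    ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk
    ip addr add 10.0.0.1/24 dev nvmf_init_if
    ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if
    ip link set nvmf_init_if up && ip link set nvmf_init_br up && ip link set nvmf_tgt_br up
    ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
    ip link add nvmf_br type bridge && ip link set nvmf_br up
    ip link set nvmf_init_br master nvmf_br
    ip link set nvmf_tgt_br master nvmf_br
    # allow NVMe/TCP traffic to port 4420 plus bridge forwarding, then sanity-check reachability
    iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT
    iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT
    ping -c 1 10.0.0.3
    ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1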
00:20:43.447 [2024-11-28 11:51:13.335184] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:20:43.447 [2024-11-28 11:51:13.335078] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:20:43.447 [2024-11-28 11:51:13.335172] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:20:43.447 [2024-11-28 11:51:13.414723] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:20:43.447 11:51:13 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:20:43.447 11:51:13 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@868 -- # return 0 00:20:43.447 11:51:13 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:20:43.447 11:51:13 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@732 -- # xtrace_disable 00:20:43.447 11:51:13 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:20:43.447 11:51:13 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:20:43.447 11:51:13 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@22 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:20:43.707 [2024-11-28 11:51:13.822221] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:20:43.966 11:51:13 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@23 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0 00:20:43.966 Malloc0 00:20:44.225 11:51:14 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@24 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:20:44.225 11:51:14 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@25 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:20:44.484 11:51:14 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@26 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 00:20:44.743 [2024-11-28 11:51:14.711368] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:20:44.743 11:51:14 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@27 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4421 00:20:45.002 [2024-11-28 11:51:14.939541] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4421 *** 00:20:45.002 11:51:14 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4422 00:20:45.261 [2024-11-28 11:51:15.167896] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4422 *** 00:20:45.261 11:51:15 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@30 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 15 -f 00:20:45.261 11:51:15 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@31 -- # bdevperf_pid=92371 00:20:45.261 11:51:15 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@33 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; cat $testdir/try.txt; rm -f $testdir/try.txt; killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 
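Condensing the RPC calls above: the failover target exports a 64 MiB malloc bdev through one subsystem with three TCP listeners, and bdevperf is started idle so that paths can be attached to it later over its own RPC socket. The sketch below uses the same paths, NQN, and flags as the trace; the loop and shell shorthand are only for readability.

    rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
    $rpc nvmf_create_transport -t tcp -o -u 8192
    $rpc bdev_malloc_create 64 512 -b Malloc0                        # 64 MiB bdev, 512-byte blocks
    $rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
    $rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
    for port in 4420 4421 4422; do                                   # three listeners to fail over between
        $rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s $port
    done
    # bdevperf starts idle (-z) on its own RPC socket; controllers are attached to it afterwards
    /home/vagrant/spdk_repo/spdk/build/examples/bdevperf \
        -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 15 -f &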
00:20:45.261 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:20:45.261 11:51:15 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@34 -- # waitforlisten 92371 /var/tmp/bdevperf.sock 00:20:45.261 11:51:15 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@835 -- # '[' -z 92371 ']' 00:20:45.261 11:51:15 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:20:45.261 11:51:15 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@840 -- # local max_retries=100 00:20:45.261 11:51:15 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:20:45.261 11:51:15 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@844 -- # xtrace_disable 00:20:45.261 11:51:15 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:20:46.198 11:51:16 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:20:46.198 11:51:16 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@868 -- # return 0 00:20:46.198 11:51:16 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@35 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -x failover 00:20:46.587 NVMe0n1 00:20:46.587 11:51:16 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@36 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.3 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -x failover 00:20:46.846 00:20:46.846 11:51:16 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@38 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:20:46.846 11:51:16 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@39 -- # run_test_pid=92389 00:20:46.846 11:51:16 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@41 -- # sleep 1 00:20:47.791 11:51:17 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@43 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 00:20:48.052 11:51:18 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@45 -- # sleep 3 00:20:51.339 11:51:21 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@47 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.3 -s 4422 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -x failover 00:20:51.339 00:20:51.339 11:51:21 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@48 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4421 00:20:51.598 11:51:21 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@50 -- # sleep 3 00:20:54.883 11:51:24 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@53 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 00:20:54.883 [2024-11-28 11:51:24.840724] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:20:54.883 11:51:24 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@55 -- # sleep 1 00:20:55.827 11:51:25 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@57 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4422 00:20:56.086 11:51:26 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@59 -- # wait 92389 00:21:02.663 { 00:21:02.663 "results": [ 00:21:02.663 { 00:21:02.663 "job": "NVMe0n1", 00:21:02.663 "core_mask": "0x1", 00:21:02.663 "workload": "verify", 00:21:02.663 "status": "finished", 00:21:02.663 "verify_range": { 00:21:02.663 "start": 0, 00:21:02.663 "length": 16384 00:21:02.663 }, 00:21:02.663 "queue_depth": 128, 00:21:02.663 "io_size": 4096, 00:21:02.663 "runtime": 15.00753, 00:21:02.663 "iops": 9806.876947772218, 00:21:02.663 "mibps": 38.30811307723523, 00:21:02.663 "io_failed": 3781, 00:21:02.663 "io_timeout": 0, 00:21:02.663 "avg_latency_us": 12698.38680620377, 00:21:02.663 "min_latency_us": 498.9672727272727, 00:21:02.663 "max_latency_us": 15728.64 00:21:02.663 } 00:21:02.663 ], 00:21:02.663 "core_count": 1 00:21:02.663 } 00:21:02.663 11:51:31 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@61 -- # killprocess 92371 00:21:02.663 11:51:31 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@954 -- # '[' -z 92371 ']' 00:21:02.663 11:51:31 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@958 -- # kill -0 92371 00:21:02.663 11:51:31 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@959 -- # uname 00:21:02.663 11:51:31 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:21:02.663 11:51:31 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 92371 00:21:02.663 killing process with pid 92371 00:21:02.663 11:51:31 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:21:02.663 11:51:31 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:21:02.663 11:51:31 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@972 -- # echo 'killing process with pid 92371' 00:21:02.663 11:51:31 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@973 -- # kill 92371 00:21:02.663 11:51:31 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@978 -- # wait 92371 00:21:02.663 11:51:32 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@63 -- # cat /home/vagrant/spdk_repo/spdk/test/nvmf/host/try.txt 00:21:02.663 [2024-11-28 11:51:15.247950] Starting SPDK v25.01-pre git sha1 35cd3e84d / DPDK 24.11.0-rc4 initialization... 00:21:02.663 [2024-11-28 11:51:15.248072] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid92371 ] 00:21:02.663 [2024-11-28 11:51:15.374919] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.11.0-rc4 is used. There is no support for it in SPDK. Enabled only for validation. 00:21:02.663 [2024-11-28 11:51:15.405731] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:21:02.663 [2024-11-28 11:51:15.452781] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:21:02.663 [2024-11-28 11:51:15.527665] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:21:02.663 Running I/O for 15 seconds... 
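Editor's note, for reference while reading the per-command trace from try.txt below: the host side of the test drives bdevperf through its own RPC socket, registers two paths to cnode1 with -x failover, starts the 15-second verify workload, and then removes and re-adds listeners so the initiator is forced to switch paths; the nonzero io_failed count in the results block above presumably reflects I/O caught by those listener removals. A condensed sketch of that host-side sequence, with the same arguments and sleeps as the calls logged above; the comment about standby paths is an interpretation of -x failover, not harness output.
RPC="/home/vagrant/spdk_repo/spdk/scripts/rpc.py"
BPERF=/var/tmp/bdevperf.sock
# Two paths to the same subsystem; with -x failover the extra path is held as a standby.
$RPC -s $BPERF bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.3 -s 4420 -f ipv4 \
    -n nqn.2016-06.io.spdk:cnode1 -x failover
$RPC -s $BPERF bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.3 -s 4421 -f ipv4 \
    -n nqn.2016-06.io.spdk:cnode1 -x failover
# Kick off the verify workload, then rotate listeners while it runs.
/home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s $BPERF perform_tests &
sleep 1
$RPC nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420
sleep 3
$RPC -s $BPERF bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.3 -s 4422 -f ipv4 \
    -n nqn.2016-06.io.spdk:cnode1 -x failover
$RPC nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4421
sleep 3
$RPC nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420
sleep 1
$RPC nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4422
wait   # let the 15 s run finish and print the JSON summary shown above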
00:21:02.663 7920.00 IOPS, 30.94 MiB/s [2024-11-28T11:51:32.789Z] [2024-11-28 11:51:17.988134] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:21:02.663 [2024-11-28 11:51:17.988200] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:02.663 [2024-11-28 11:51:17.988236] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:21:02.663 [2024-11-28 11:51:17.988253] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:02.663 [2024-11-28 11:51:17.988267] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:21:02.663 [2024-11-28 11:51:17.988281] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:02.663 [2024-11-28 11:51:17.988307] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:21:02.663 [2024-11-28 11:51:17.988325] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:02.663 [2024-11-28 11:51:17.988348] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2458160 is same with the state(6) to be set 00:21:02.663 [2024-11-28 11:51:17.988572] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:120 nsid:1 lba:71600 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:02.663 [2024-11-28 11:51:17.988597] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:02.663 [2024-11-28 11:51:17.988620] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:71608 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:02.663 [2024-11-28 11:51:17.988635] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:02.663 [2024-11-28 11:51:17.988650] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:95 nsid:1 lba:71616 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:02.663 [2024-11-28 11:51:17.988672] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:02.663 [2024-11-28 11:51:17.988697] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:75 nsid:1 lba:71624 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:02.663 [2024-11-28 11:51:17.988710] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:02.663 [2024-11-28 11:51:17.988723] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:80 nsid:1 lba:71632 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:02.663 [2024-11-28 11:51:17.988735] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:02.663 [2024-11-28 11:51:17.988748] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:71640 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:02.663 
[2024-11-28 11:51:17.988800] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:02.663 [2024-11-28 11:51:17.988818] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:70 nsid:1 lba:71648 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:02.663 [2024-11-28 11:51:17.988831] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:02.663 [2024-11-28 11:51:17.988844] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:77 nsid:1 lba:71656 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:02.663 [2024-11-28 11:51:17.988857] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:02.663 [2024-11-28 11:51:17.988871] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:71664 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:02.663 [2024-11-28 11:51:17.988885] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:02.663 [2024-11-28 11:51:17.988899] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:71672 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:02.663 [2024-11-28 11:51:17.988912] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:02.664 [2024-11-28 11:51:17.988927] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:71680 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:02.664 [2024-11-28 11:51:17.988939] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:02.664 [2024-11-28 11:51:17.988954] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:65 nsid:1 lba:71688 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:02.664 [2024-11-28 11:51:17.988972] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:02.664 [2024-11-28 11:51:17.988986] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:64 nsid:1 lba:71696 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:02.664 [2024-11-28 11:51:17.989011] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:02.664 [2024-11-28 11:51:17.989028] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:69 nsid:1 lba:71704 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:02.664 [2024-11-28 11:51:17.989046] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:02.664 [2024-11-28 11:51:17.989061] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:71712 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:02.664 [2024-11-28 11:51:17.989077] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:02.664 [2024-11-28 11:51:17.989093] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:71720 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:02.664 [2024-11-28 11:51:17.989117] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:02.664 [2024-11-28 11:51:17.989132] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:81 nsid:1 lba:71728 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:02.664 [2024-11-28 11:51:17.989146] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:02.664 [2024-11-28 11:51:17.989160] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:92 nsid:1 lba:71736 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:02.664 [2024-11-28 11:51:17.989173] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:02.664 [2024-11-28 11:51:17.989196] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:71744 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:02.664 [2024-11-28 11:51:17.989210] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:02.664 [2024-11-28 11:51:17.989225] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:71752 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:02.664 [2024-11-28 11:51:17.989238] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:02.664 [2024-11-28 11:51:17.989252] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:76 nsid:1 lba:71760 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:02.664 [2024-11-28 11:51:17.989264] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:02.664 [2024-11-28 11:51:17.989277] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:71768 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:02.664 [2024-11-28 11:51:17.989290] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:02.664 [2024-11-28 11:51:17.989329] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:110 nsid:1 lba:71776 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:02.664 [2024-11-28 11:51:17.989355] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:02.664 [2024-11-28 11:51:17.989370] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:103 nsid:1 lba:71784 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:02.664 [2024-11-28 11:51:17.989384] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:02.664 [2024-11-28 11:51:17.989398] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:82 nsid:1 lba:71792 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:02.664 [2024-11-28 11:51:17.989412] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:02.664 [2024-11-28 11:51:17.989425] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:96 nsid:1 lba:71800 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:02.664 [2024-11-28 11:51:17.989439] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - 
SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:02.664 [2024-11-28 11:51:17.989453] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:71808 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:02.664 [2024-11-28 11:51:17.989466] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:02.664 [2024-11-28 11:51:17.989480] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:71816 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:02.664 [2024-11-28 11:51:17.989492] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:02.664 [2024-11-28 11:51:17.989505] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:71824 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:02.664 [2024-11-28 11:51:17.989519] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:02.664 [2024-11-28 11:51:17.989533] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:78 nsid:1 lba:71832 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:02.664 [2024-11-28 11:51:17.989546] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:02.664 [2024-11-28 11:51:17.989559] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:107 nsid:1 lba:71840 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:02.664 [2024-11-28 11:51:17.989573] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:02.664 [2024-11-28 11:51:17.989595] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:71848 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:02.664 [2024-11-28 11:51:17.989609] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:02.664 [2024-11-28 11:51:17.989623] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:71856 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:02.664 [2024-11-28 11:51:17.989645] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:02.664 [2024-11-28 11:51:17.989670] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:71864 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:02.664 [2024-11-28 11:51:17.989693] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:02.664 [2024-11-28 11:51:17.989707] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:83 nsid:1 lba:71872 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:02.664 [2024-11-28 11:51:17.989719] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:02.664 [2024-11-28 11:51:17.989735] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:87 nsid:1 lba:71880 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:02.664 [2024-11-28 11:51:17.989748] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 
m:0 dnr:0 00:21:02.664 [2024-11-28 11:51:17.989762] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:71888 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:02.664 [2024-11-28 11:51:17.989775] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:02.664 [2024-11-28 11:51:17.989789] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:85 nsid:1 lba:71896 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:02.664 [2024-11-28 11:51:17.989801] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:02.664 [2024-11-28 11:51:17.989815] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:71904 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:02.664 [2024-11-28 11:51:17.989827] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:02.664 [2024-11-28 11:51:17.989841] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:74 nsid:1 lba:71912 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:02.664 [2024-11-28 11:51:17.989854] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:02.664 [2024-11-28 11:51:17.989868] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:94 nsid:1 lba:71920 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:02.664 [2024-11-28 11:51:17.989881] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:02.664 [2024-11-28 11:51:17.989894] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:105 nsid:1 lba:71928 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:02.665 [2024-11-28 11:51:17.989907] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:02.665 [2024-11-28 11:51:17.989921] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:93 nsid:1 lba:71936 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:02.665 [2024-11-28 11:51:17.989933] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:02.665 [2024-11-28 11:51:17.989947] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 nsid:1 lba:71944 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:02.665 [2024-11-28 11:51:17.989966] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:02.665 [2024-11-28 11:51:17.989981] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:114 nsid:1 lba:71952 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:02.665 [2024-11-28 11:51:17.989994] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:02.665 [2024-11-28 11:51:17.990007] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:71960 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:02.665 [2024-11-28 11:51:17.990020] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:02.665 [2024-11-28 11:51:17.990035] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:123 nsid:1 lba:71968 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:02.665 [2024-11-28 11:51:17.990050] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:02.665 [2024-11-28 11:51:17.990064] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:97 nsid:1 lba:71976 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:02.665 [2024-11-28 11:51:17.990077] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:02.665 [2024-11-28 11:51:17.990091] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:111 nsid:1 lba:71984 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:02.665 [2024-11-28 11:51:17.990104] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:02.665 [2024-11-28 11:51:17.990117] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:71992 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:02.665 [2024-11-28 11:51:17.990130] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:02.665 [2024-11-28 11:51:17.990144] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:72000 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:02.665 [2024-11-28 11:51:17.990157] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:02.665 [2024-11-28 11:51:17.990171] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:113 nsid:1 lba:72008 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:02.665 [2024-11-28 11:51:17.990184] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:02.665 [2024-11-28 11:51:17.990199] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:72016 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:02.665 [2024-11-28 11:51:17.990212] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:02.665 [2024-11-28 11:51:17.990226] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:72024 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:02.665 [2024-11-28 11:51:17.990239] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:02.665 [2024-11-28 11:51:17.990252] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:88 nsid:1 lba:72032 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:02.665 [2024-11-28 11:51:17.990265] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:02.665 [2024-11-28 11:51:17.990279] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:72040 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:02.665 [2024-11-28 11:51:17.990292] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:02.665 [2024-11-28 11:51:17.990348] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:72048 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:02.665 [2024-11-28 11:51:17.990363] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:02.665 [2024-11-28 11:51:17.990377] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:72056 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:02.665 [2024-11-28 11:51:17.990390] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:02.665 [2024-11-28 11:51:17.990404] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:121 nsid:1 lba:72064 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:02.665 [2024-11-28 11:51:17.990416] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:02.665 [2024-11-28 11:51:17.990430] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:72072 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:02.665 [2024-11-28 11:51:17.990463] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:02.665 [2024-11-28 11:51:17.990487] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:106 nsid:1 lba:72080 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:02.665 [2024-11-28 11:51:17.990500] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:02.665 [2024-11-28 11:51:17.990514] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:72088 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:02.665 [2024-11-28 11:51:17.990526] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:02.665 [2024-11-28 11:51:17.990540] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:86 nsid:1 lba:72096 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:02.665 [2024-11-28 11:51:17.990553] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:02.665 [2024-11-28 11:51:17.990568] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:72104 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:02.665 [2024-11-28 11:51:17.990581] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:02.665 [2024-11-28 11:51:17.990594] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:99 nsid:1 lba:72112 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:02.665 [2024-11-28 11:51:17.990607] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:02.665 [2024-11-28 11:51:17.990633] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:72120 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:02.665 [2024-11-28 11:51:17.990647] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:02.665 [2024-11-28 11:51:17.990661] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:67 nsid:1 lba:72128 len:8 
SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:02.665 [2024-11-28 11:51:17.990673] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:02.665 [2024-11-28 11:51:17.990695] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:73 nsid:1 lba:72136 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:02.665 [2024-11-28 11:51:17.990718] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:02.665 [2024-11-28 11:51:17.990743] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:72144 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:02.665 [2024-11-28 11:51:17.990762] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:02.665 [2024-11-28 11:51:17.990777] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:72152 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:02.665 [2024-11-28 11:51:17.990800] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:02.665 [2024-11-28 11:51:17.990814] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:104 nsid:1 lba:72160 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:02.665 [2024-11-28 11:51:17.990827] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:02.665 [2024-11-28 11:51:17.990840] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:72168 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:02.665 [2024-11-28 11:51:17.990853] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:02.665 [2024-11-28 11:51:17.990867] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:72176 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:02.665 [2024-11-28 11:51:17.990880] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:02.665 [2024-11-28 11:51:17.990894] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:72184 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:02.665 [2024-11-28 11:51:17.990917] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:02.665 [2024-11-28 11:51:17.990931] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:72192 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:02.666 [2024-11-28 11:51:17.990944] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:02.666 [2024-11-28 11:51:17.990957] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:72200 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:02.666 [2024-11-28 11:51:17.990970] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:02.666 [2024-11-28 11:51:17.990984] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:72208 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:02.666 
[2024-11-28 11:51:17.990997] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:02.666 [2024-11-28 11:51:17.991011] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:72216 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:02.666 [2024-11-28 11:51:17.991023] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:02.666 [2024-11-28 11:51:17.991037] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:72224 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:02.666 [2024-11-28 11:51:17.991050] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:02.666 [2024-11-28 11:51:17.991064] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:66 nsid:1 lba:72232 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:02.666 [2024-11-28 11:51:17.991076] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:02.666 [2024-11-28 11:51:17.991090] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:72240 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:02.666 [2024-11-28 11:51:17.991103] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:02.666 [2024-11-28 11:51:17.991123] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:72248 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:02.666 [2024-11-28 11:51:17.991142] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:02.666 [2024-11-28 11:51:17.991157] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:72256 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:02.666 [2024-11-28 11:51:17.991170] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:02.666 [2024-11-28 11:51:17.991189] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:79 nsid:1 lba:72264 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:02.666 [2024-11-28 11:51:17.991202] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:02.666 [2024-11-28 11:51:17.991216] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:115 nsid:1 lba:72272 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:02.666 [2024-11-28 11:51:17.991228] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:02.666 [2024-11-28 11:51:17.991242] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:89 nsid:1 lba:72280 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:02.666 [2024-11-28 11:51:17.991255] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:02.666 [2024-11-28 11:51:17.991269] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:72288 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:02.666 [2024-11-28 11:51:17.991283] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:02.666 [2024-11-28 11:51:17.991306] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:72296 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:02.666 [2024-11-28 11:51:17.991323] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:02.666 [2024-11-28 11:51:17.991348] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:72304 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:02.666 [2024-11-28 11:51:17.991361] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:02.666 [2024-11-28 11:51:17.991375] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:72312 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:02.666 [2024-11-28 11:51:17.991387] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:02.666 [2024-11-28 11:51:17.991401] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:72 nsid:1 lba:72320 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:02.666 [2024-11-28 11:51:17.991414] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:02.666 [2024-11-28 11:51:17.991427] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:119 nsid:1 lba:72328 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:02.666 [2024-11-28 11:51:17.991442] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:02.666 [2024-11-28 11:51:17.991455] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:91 nsid:1 lba:72336 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:02.666 [2024-11-28 11:51:17.991468] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:02.666 [2024-11-28 11:51:17.991482] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:72344 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:02.666 [2024-11-28 11:51:17.991495] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:02.666 [2024-11-28 11:51:17.991515] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:72352 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:02.666 [2024-11-28 11:51:17.991528] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:02.666 [2024-11-28 11:51:17.991542] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:72360 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:02.666 [2024-11-28 11:51:17.991555] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:02.666 [2024-11-28 11:51:17.991569] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:109 nsid:1 lba:72368 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:02.666 [2024-11-28 11:51:17.991582] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - 
SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:02.666 [2024-11-28 11:51:17.991602] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:122 nsid:1 lba:72376 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:02.666 [2024-11-28 11:51:17.991615] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:02.666 [2024-11-28 11:51:17.991629] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:100 nsid:1 lba:72384 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:02.666 [2024-11-28 11:51:17.991653] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:02.666 [2024-11-28 11:51:17.991684] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:72392 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:02.666 [2024-11-28 11:51:17.991698] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:02.666 [2024-11-28 11:51:17.991712] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:72400 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:02.666 [2024-11-28 11:51:17.991724] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:02.666 [2024-11-28 11:51:17.991738] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:117 nsid:1 lba:72408 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:02.666 [2024-11-28 11:51:17.991750] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:02.666 [2024-11-28 11:51:17.991764] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:112 nsid:1 lba:72416 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:02.666 [2024-11-28 11:51:17.991777] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:02.666 [2024-11-28 11:51:17.991790] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:102 nsid:1 lba:72424 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:02.666 [2024-11-28 11:51:17.991803] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:02.666 [2024-11-28 11:51:17.991817] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:72432 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:02.667 [2024-11-28 11:51:17.991829] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:02.667 [2024-11-28 11:51:17.991843] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:101 nsid:1 lba:72440 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:02.667 [2024-11-28 11:51:17.991855] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:02.667 [2024-11-28 11:51:17.991869] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:72448 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:02.667 [2024-11-28 11:51:17.991889] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 
p:0 m:0 dnr:0 00:21:02.667 [2024-11-28 11:51:17.991904] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:72456 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:02.667 [2024-11-28 11:51:17.991917] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:02.667 [2024-11-28 11:51:17.991930] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:72464 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:02.667 [2024-11-28 11:51:17.991942] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:02.667 [2024-11-28 11:51:17.991956] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:84 nsid:1 lba:72472 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:02.667 [2024-11-28 11:51:17.991968] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:02.667 [2024-11-28 11:51:17.991982] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:72480 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:02.667 [2024-11-28 11:51:17.991995] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:02.667 [2024-11-28 11:51:17.992008] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:72488 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:02.667 [2024-11-28 11:51:17.992021] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:02.667 [2024-11-28 11:51:17.992034] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:71 nsid:1 lba:72496 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:02.667 [2024-11-28 11:51:17.992047] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:02.667 [2024-11-28 11:51:17.992066] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:72504 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:02.667 [2024-11-28 11:51:17.992079] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:02.667 [2024-11-28 11:51:17.992093] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:72512 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:02.667 [2024-11-28 11:51:17.992105] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:02.667 [2024-11-28 11:51:17.992125] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:72520 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:02.667 [2024-11-28 11:51:17.992139] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:02.667 [2024-11-28 11:51:17.992152] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:72528 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:02.667 [2024-11-28 11:51:17.992165] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:02.667 [2024-11-28 11:51:17.992178] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:72536 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:02.667 [2024-11-28 11:51:17.992191] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:02.667 [2024-11-28 11:51:17.992205] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:98 nsid:1 lba:72544 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:02.667 [2024-11-28 11:51:17.992217] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:02.667 [2024-11-28 11:51:17.992237] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:72552 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:02.667 [2024-11-28 11:51:17.992250] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:02.667 [2024-11-28 11:51:17.992264] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:72560 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:02.667 [2024-11-28 11:51:17.992278] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:02.667 [2024-11-28 11:51:17.992292] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:116 nsid:1 lba:72568 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:02.667 [2024-11-28 11:51:17.992320] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:02.667 [2024-11-28 11:51:17.992346] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:68 nsid:1 lba:72576 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:02.667 [2024-11-28 11:51:17.992359] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:02.667 [2024-11-28 11:51:17.992372] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:118 nsid:1 lba:72584 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:02.667 [2024-11-28 11:51:17.992385] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:02.667 [2024-11-28 11:51:17.992399] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:90 nsid:1 lba:72592 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:02.667 [2024-11-28 11:51:17.992412] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:02.667 [2024-11-28 11:51:17.992426] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:72600 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:02.667 [2024-11-28 11:51:17.992438] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:02.667 [2024-11-28 11:51:17.992452] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:72608 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:02.667 [2024-11-28 11:51:17.992465] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:02.667 [2024-11-28 11:51:17.992478] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: 
*ERROR*: The recv state of tqpair=0x2477a40 is same with the state(6) to be set 00:21:02.667 [2024-11-28 11:51:17.992493] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:21:02.667 [2024-11-28 11:51:17.992503] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:21:02.667 [2024-11-28 11:51:17.992512] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:72616 len:8 PRP1 0x0 PRP2 0x0 00:21:02.667 [2024-11-28 11:51:17.992530] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:02.667 [2024-11-28 11:51:17.992611] bdev_nvme.c:2052:bdev_nvme_failover_trid: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 1] Start failover from 10.0.0.3:4420 to 10.0.0.3:4421 00:21:02.667 [2024-11-28 11:51:17.992634] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] in failed state. 00:21:02.667 [2024-11-28 11:51:17.995975] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 1] resetting controller 00:21:02.667 [2024-11-28 11:51:17.996023] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2458160 (9): Bad file descriptor 00:21:02.667 [2024-11-28 11:51:18.025239] bdev_nvme.c:2282:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 3] Resetting controller successful. 00:21:02.667 8703.00 IOPS, 34.00 MiB/s [2024-11-28T11:51:32.793Z] 9178.67 IOPS, 35.85 MiB/s [2024-11-28T11:51:32.793Z] 9419.25 IOPS, 36.79 MiB/s [2024-11-28T11:51:32.793Z] [2024-11-28 11:51:21.562795] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:106160 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:02.667 [2024-11-28 11:51:21.562874] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:02.667 [2024-11-28 11:51:21.562908] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:106168 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:02.667 [2024-11-28 11:51:21.562923] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:02.667 [2024-11-28 11:51:21.562938] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:106176 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:02.667 [2024-11-28 11:51:21.562951] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:02.667 [2024-11-28 11:51:21.562964] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:106184 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:02.667 [2024-11-28 11:51:21.562977] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:02.668 [2024-11-28 11:51:21.562991] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:64 nsid:1 lba:106192 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:02.668 [2024-11-28 11:51:21.563004] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:02.668 [2024-11-28 11:51:21.563018] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:106200 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:02.668 
[2024-11-28 11:51:21.563031] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:02.668 [2024-11-28 11:51:21.563044] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:106208 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:02.668 [2024-11-28 11:51:21.563057] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:02.668 [2024-11-28 11:51:21.563071] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:106216 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:02.668 [2024-11-28 11:51:21.563084] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:02.668 [2024-11-28 11:51:21.563098] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:106224 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:02.668 [2024-11-28 11:51:21.563111] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:02.668 [2024-11-28 11:51:21.563124] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:106232 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:02.668 [2024-11-28 11:51:21.563137] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:02.668 [2024-11-28 11:51:21.563151] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:120 nsid:1 lba:106240 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:02.668 [2024-11-28 11:51:21.563164] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:02.668 [2024-11-28 11:51:21.563178] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:85 nsid:1 lba:106248 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:02.668 [2024-11-28 11:51:21.563190] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:02.668 [2024-11-28 11:51:21.563204] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:106256 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:02.668 [2024-11-28 11:51:21.563218] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:02.668 [2024-11-28 11:51:21.563257] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:106264 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:02.668 [2024-11-28 11:51:21.563272] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:02.668 [2024-11-28 11:51:21.563286] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:106272 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:02.668 [2024-11-28 11:51:21.563311] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:02.668 [2024-11-28 11:51:21.563327] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:106280 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:02.668 [2024-11-28 11:51:21.563340] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:02.668 [2024-11-28 11:51:21.563355] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:105712 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:02.668 [2024-11-28 11:51:21.563368] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:02.668 [2024-11-28 11:51:21.563385] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:67 nsid:1 lba:105720 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:02.668 [2024-11-28 11:51:21.563398] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:02.668 [2024-11-28 11:51:21.563412] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:105728 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:02.668 [2024-11-28 11:51:21.563425] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:02.668 [2024-11-28 11:51:21.563438] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:79 nsid:1 lba:105736 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:02.668 [2024-11-28 11:51:21.563450] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:02.668 [2024-11-28 11:51:21.563464] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:75 nsid:1 lba:105744 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:02.668 [2024-11-28 11:51:21.563476] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:02.668 [2024-11-28 11:51:21.563489] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:110 nsid:1 lba:105752 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:02.668 [2024-11-28 11:51:21.563502] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:02.668 [2024-11-28 11:51:21.563515] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:105760 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:02.668 [2024-11-28 11:51:21.563527] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:02.668 [2024-11-28 11:51:21.563544] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:105768 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:02.668 [2024-11-28 11:51:21.563557] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:02.668 [2024-11-28 11:51:21.563571] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:123 nsid:1 lba:106288 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:02.668 [2024-11-28 11:51:21.563583] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:02.668 [2024-11-28 11:51:21.563596] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:106296 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:02.668 [2024-11-28 11:51:21.563618] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:02.668 [2024-11-28 11:51:21.563633] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:86 nsid:1 lba:106304 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:02.668 [2024-11-28 11:51:21.563654] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:02.668 [2024-11-28 11:51:21.563668] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:103 nsid:1 lba:106312 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:02.668 [2024-11-28 11:51:21.563680] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:02.668 [2024-11-28 11:51:21.563694] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:113 nsid:1 lba:106320 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:02.668 [2024-11-28 11:51:21.563707] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:02.669 [2024-11-28 11:51:21.563720] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:106328 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:02.669 [2024-11-28 11:51:21.563733] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:02.669 [2024-11-28 11:51:21.563746] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:118 nsid:1 lba:106336 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:02.669 [2024-11-28 11:51:21.563759] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:02.669 [2024-11-28 11:51:21.563773] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:106344 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:02.669 [2024-11-28 11:51:21.563785] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:02.669 [2024-11-28 11:51:21.563800] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:65 nsid:1 lba:106352 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:02.669 [2024-11-28 11:51:21.563813] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:02.669 [2024-11-28 11:51:21.563828] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:89 nsid:1 lba:106360 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:02.669 [2024-11-28 11:51:21.563841] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:02.669 [2024-11-28 11:51:21.563855] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:106368 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:02.669 [2024-11-28 11:51:21.563867] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:02.669 [2024-11-28 11:51:21.563881] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:106376 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:02.669 [2024-11-28 11:51:21.563894] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) 
qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:02.669 [2024-11-28 11:51:21.563907] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:106384 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:02.669 [2024-11-28 11:51:21.563920] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:02.669 [2024-11-28 11:51:21.563934] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:119 nsid:1 lba:106392 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:02.669 [2024-11-28 11:51:21.563946] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:02.669 [2024-11-28 11:51:21.563966] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:106400 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:02.669 [2024-11-28 11:51:21.563980] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:02.669 [2024-11-28 11:51:21.563994] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:70 nsid:1 lba:106408 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:02.669 [2024-11-28 11:51:21.564006] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:02.669 [2024-11-28 11:51:21.564020] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:106416 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:02.669 [2024-11-28 11:51:21.564032] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:02.669 [2024-11-28 11:51:21.564045] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:106424 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:02.669 [2024-11-28 11:51:21.564059] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:02.669 [2024-11-28 11:51:21.564072] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:122 nsid:1 lba:105776 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:02.669 [2024-11-28 11:51:21.564085] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:02.669 [2024-11-28 11:51:21.564099] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:72 nsid:1 lba:105784 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:02.669 [2024-11-28 11:51:21.564113] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:02.669 [2024-11-28 11:51:21.564126] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:105792 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:02.669 [2024-11-28 11:51:21.564139] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:02.669 [2024-11-28 11:51:21.564152] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:105800 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:02.669 [2024-11-28 11:51:21.564165] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 
dnr:0 00:21:02.669 [2024-11-28 11:51:21.564178] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:100 nsid:1 lba:105808 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:02.669 [2024-11-28 11:51:21.564191] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:02.669 [2024-11-28 11:51:21.564204] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:71 nsid:1 lba:105816 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:02.669 [2024-11-28 11:51:21.564217] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:02.669 [2024-11-28 11:51:21.564231] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:111 nsid:1 lba:105824 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:02.669 [2024-11-28 11:51:21.564243] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:02.669 [2024-11-28 11:51:21.564258] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:105832 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:02.669 [2024-11-28 11:51:21.564272] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:02.669 [2024-11-28 11:51:21.564285] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:106432 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:02.669 [2024-11-28 11:51:21.564317] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:02.669 [2024-11-28 11:51:21.564332] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:88 nsid:1 lba:106440 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:02.669 [2024-11-28 11:51:21.564346] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:02.669 [2024-11-28 11:51:21.564360] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:106448 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:02.669 [2024-11-28 11:51:21.564372] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:02.669 [2024-11-28 11:51:21.564386] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:93 nsid:1 lba:106456 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:02.669 [2024-11-28 11:51:21.564398] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:02.669 [2024-11-28 11:51:21.564412] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:69 nsid:1 lba:106464 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:02.669 [2024-11-28 11:51:21.564424] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:02.669 [2024-11-28 11:51:21.564437] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:112 nsid:1 lba:106472 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:02.669 [2024-11-28 11:51:21.564450] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:02.669 [2024-11-28 
11:51:21.564463] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:106480 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:02.669 [2024-11-28 11:51:21.564476] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:02.669 [2024-11-28 11:51:21.564489] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:106488 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:02.669 [2024-11-28 11:51:21.564502] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:02.669 [2024-11-28 11:51:21.564515] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:95 nsid:1 lba:106496 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:02.669 [2024-11-28 11:51:21.564528] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:02.669 [2024-11-28 11:51:21.564542] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:81 nsid:1 lba:106504 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:02.669 [2024-11-28 11:51:21.564556] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:02.669 [2024-11-28 11:51:21.564570] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:106512 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:02.670 [2024-11-28 11:51:21.564582] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:02.670 [2024-11-28 11:51:21.564596] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:106520 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:02.670 [2024-11-28 11:51:21.564609] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:02.670 [2024-11-28 11:51:21.564638] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:106528 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:02.670 [2024-11-28 11:51:21.564651] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:02.670 [2024-11-28 11:51:21.564668] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:101 nsid:1 lba:106536 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:02.670 [2024-11-28 11:51:21.564684] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:02.670 [2024-11-28 11:51:21.564699] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:106544 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:02.670 [2024-11-28 11:51:21.564712] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:02.670 [2024-11-28 11:51:21.564726] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:106552 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:02.670 [2024-11-28 11:51:21.564739] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:02.670 [2024-11-28 11:51:21.564754] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:106560 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:02.670 [2024-11-28 11:51:21.564767] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:02.670 [2024-11-28 11:51:21.564782] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:106568 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:02.670 [2024-11-28 11:51:21.564795] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:02.670 [2024-11-28 11:51:21.564818] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:106576 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:02.670 [2024-11-28 11:51:21.564832] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:02.670 [2024-11-28 11:51:21.564846] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:105840 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:02.670 [2024-11-28 11:51:21.564858] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:02.670 [2024-11-28 11:51:21.564873] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:105848 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:02.670 [2024-11-28 11:51:21.564886] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:02.670 [2024-11-28 11:51:21.564900] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:105856 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:02.670 [2024-11-28 11:51:21.564914] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:02.670 [2024-11-28 11:51:21.564928] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:90 nsid:1 lba:105864 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:02.670 [2024-11-28 11:51:21.564941] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:02.670 [2024-11-28 11:51:21.564970] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:115 nsid:1 lba:105872 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:02.670 [2024-11-28 11:51:21.564983] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:02.670 [2024-11-28 11:51:21.564997] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:116 nsid:1 lba:105880 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:02.670 [2024-11-28 11:51:21.565010] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:02.670 [2024-11-28 11:51:21.565023] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:109 nsid:1 lba:105888 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:02.670 [2024-11-28 11:51:21.565042] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:02.670 [2024-11-28 11:51:21.565057] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: READ sqid:1 cid:73 nsid:1 lba:105896 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:02.670 [2024-11-28 11:51:21.565071] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:02.670 [2024-11-28 11:51:21.565085] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:121 nsid:1 lba:105904 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:02.670 [2024-11-28 11:51:21.565098] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:02.670 [2024-11-28 11:51:21.565111] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:107 nsid:1 lba:105912 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:02.670 [2024-11-28 11:51:21.565124] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:02.670 [2024-11-28 11:51:21.565138] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:76 nsid:1 lba:105920 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:02.670 [2024-11-28 11:51:21.565150] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:02.670 [2024-11-28 11:51:21.565163] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:105928 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:02.670 [2024-11-28 11:51:21.565176] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:02.670 [2024-11-28 11:51:21.565189] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:105936 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:02.670 [2024-11-28 11:51:21.565202] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:02.670 [2024-11-28 11:51:21.565216] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:82 nsid:1 lba:105944 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:02.670 [2024-11-28 11:51:21.565229] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:02.670 [2024-11-28 11:51:21.565243] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:105952 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:02.670 [2024-11-28 11:51:21.565256] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:02.670 [2024-11-28 11:51:21.565276] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:105960 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:02.670 [2024-11-28 11:51:21.565289] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:02.670 [2024-11-28 11:51:21.565302] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:94 nsid:1 lba:105968 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:02.670 [2024-11-28 11:51:21.565314] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:02.670 [2024-11-28 11:51:21.565337] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 
nsid:1 lba:105976 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:02.670 [2024-11-28 11:51:21.565352] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:02.670 [2024-11-28 11:51:21.565366] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:68 nsid:1 lba:105984 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:02.670 [2024-11-28 11:51:21.565384] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:02.670 [2024-11-28 11:51:21.565399] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:105992 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:02.670 [2024-11-28 11:51:21.565434] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:02.670 [2024-11-28 11:51:21.565449] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:106000 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:02.670 [2024-11-28 11:51:21.565461] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:02.670 [2024-11-28 11:51:21.565475] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:106008 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:02.670 [2024-11-28 11:51:21.565489] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:02.670 [2024-11-28 11:51:21.565502] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:106016 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:02.670 [2024-11-28 11:51:21.565515] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:02.671 [2024-11-28 11:51:21.565528] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:106024 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:02.671 [2024-11-28 11:51:21.565540] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:02.671 [2024-11-28 11:51:21.565554] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:91 nsid:1 lba:106584 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:02.671 [2024-11-28 11:51:21.565568] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:02.671 [2024-11-28 11:51:21.565581] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:104 nsid:1 lba:106592 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:02.671 [2024-11-28 11:51:21.565593] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:02.671 [2024-11-28 11:51:21.565607] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 nsid:1 lba:106600 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:02.671 [2024-11-28 11:51:21.565619] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:02.671 [2024-11-28 11:51:21.565633] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:106608 len:8 SGL DATA 
BLOCK OFFSET 0x0 len:0x1000 00:21:02.671 [2024-11-28 11:51:21.565645] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:02.671 [2024-11-28 11:51:21.565658] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:106616 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:02.671 [2024-11-28 11:51:21.565671] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:02.671 [2024-11-28 11:51:21.565684] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:106624 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:02.671 [2024-11-28 11:51:21.565697] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:02.671 [2024-11-28 11:51:21.565711] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:106 nsid:1 lba:106632 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:02.671 [2024-11-28 11:51:21.565723] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:02.671 [2024-11-28 11:51:21.565742] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:83 nsid:1 lba:106640 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:02.671 [2024-11-28 11:51:21.565755] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:02.671 [2024-11-28 11:51:21.565778] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:74 nsid:1 lba:106648 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:02.671 [2024-11-28 11:51:21.565791] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:02.671 [2024-11-28 11:51:21.565804] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:106656 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:02.671 [2024-11-28 11:51:21.565818] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:02.671 [2024-11-28 11:51:21.565832] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:92 nsid:1 lba:106664 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:02.671 [2024-11-28 11:51:21.565850] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:02.671 [2024-11-28 11:51:21.565864] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:78 nsid:1 lba:106032 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:02.671 [2024-11-28 11:51:21.565877] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:02.671 [2024-11-28 11:51:21.565891] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:106040 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:02.671 [2024-11-28 11:51:21.565903] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:02.671 [2024-11-28 11:51:21.565917] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:87 nsid:1 lba:106048 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:02.671 
[2024-11-28 11:51:21.565930] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:02.671 [2024-11-28 11:51:21.565943] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:106056 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:02.671 [2024-11-28 11:51:21.565956] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:02.671 [2024-11-28 11:51:21.565969] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:117 nsid:1 lba:106064 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:02.671 [2024-11-28 11:51:21.565982] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:02.671 [2024-11-28 11:51:21.565997] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:105 nsid:1 lba:106072 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:02.671 [2024-11-28 11:51:21.566010] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:02.671 [2024-11-28 11:51:21.566023] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:106080 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:02.671 [2024-11-28 11:51:21.566036] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:02.671 [2024-11-28 11:51:21.566049] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:66 nsid:1 lba:106088 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:02.671 [2024-11-28 11:51:21.566062] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:02.671 [2024-11-28 11:51:21.566075] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:106096 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:02.671 [2024-11-28 11:51:21.566088] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:02.671 [2024-11-28 11:51:21.566101] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:96 nsid:1 lba:106104 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:02.671 [2024-11-28 11:51:21.566119] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:02.671 [2024-11-28 11:51:21.566134] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:98 nsid:1 lba:106112 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:02.671 [2024-11-28 11:51:21.566147] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:02.671 [2024-11-28 11:51:21.566161] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:84 nsid:1 lba:106120 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:02.671 [2024-11-28 11:51:21.566173] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:02.671 [2024-11-28 11:51:21.566193] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:106128 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:02.671 [2024-11-28 11:51:21.566205] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:02.671 [2024-11-28 11:51:21.566219] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:106136 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:02.671 [2024-11-28 11:51:21.566231] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:02.671 [2024-11-28 11:51:21.566245] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:97 nsid:1 lba:106144 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:02.671 [2024-11-28 11:51:21.566257] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:02.671 [2024-11-28 11:51:21.566270] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x247bc90 is same with the state(6) to be set 00:21:02.671 [2024-11-28 11:51:21.566290] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:21:02.671 [2024-11-28 11:51:21.566311] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:21:02.671 [2024-11-28 11:51:21.566321] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:106152 len:8 PRP1 0x0 PRP2 0x0 00:21:02.671 [2024-11-28 11:51:21.566333] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:02.671 [2024-11-28 11:51:21.566347] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:21:02.671 [2024-11-28 11:51:21.566355] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:21:02.671 [2024-11-28 11:51:21.566365] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:106672 len:8 PRP1 0x0 PRP2 0x0 00:21:02.671 [2024-11-28 11:51:21.566376] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:02.671 [2024-11-28 11:51:21.566388] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:21:02.671 [2024-11-28 11:51:21.566396] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:21:02.672 [2024-11-28 11:51:21.566405] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:106680 len:8 PRP1 0x0 PRP2 0x0 00:21:02.672 [2024-11-28 11:51:21.566417] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:02.672 [2024-11-28 11:51:21.566429] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:21:02.672 [2024-11-28 11:51:21.566472] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:21:02.672 [2024-11-28 11:51:21.566481] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:106688 len:8 PRP1 0x0 PRP2 0x0 00:21:02.672 [2024-11-28 11:51:21.566493] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:02.672 [2024-11-28 11:51:21.566513] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:21:02.672 [2024-11-28 11:51:21.566522] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: 
*NOTICE*: Command completed manually: 00:21:02.672 [2024-11-28 11:51:21.566531] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:106696 len:8 PRP1 0x0 PRP2 0x0 00:21:02.672 [2024-11-28 11:51:21.566542] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:02.672 [2024-11-28 11:51:21.566554] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:21:02.672 [2024-11-28 11:51:21.566562] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:21:02.672 [2024-11-28 11:51:21.566572] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:106704 len:8 PRP1 0x0 PRP2 0x0 00:21:02.672 [2024-11-28 11:51:21.566583] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:02.672 [2024-11-28 11:51:21.566594] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:21:02.672 [2024-11-28 11:51:21.566608] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:21:02.672 [2024-11-28 11:51:21.566619] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:106712 len:8 PRP1 0x0 PRP2 0x0 00:21:02.672 [2024-11-28 11:51:21.566630] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:02.672 [2024-11-28 11:51:21.566643] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:21:02.672 [2024-11-28 11:51:21.566652] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:21:02.672 [2024-11-28 11:51:21.566661] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:106720 len:8 PRP1 0x0 PRP2 0x0 00:21:02.672 [2024-11-28 11:51:21.566673] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:02.672 [2024-11-28 11:51:21.566689] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:21:02.672 [2024-11-28 11:51:21.566698] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:21:02.672 [2024-11-28 11:51:21.566707] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:106728 len:8 PRP1 0x0 PRP2 0x0 00:21:02.672 [2024-11-28 11:51:21.566718] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:02.672 [2024-11-28 11:51:21.566783] bdev_nvme.c:2052:bdev_nvme_failover_trid: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 3] Start failover from 10.0.0.3:4421 to 10.0.0.3:4422 00:21:02.672 [2024-11-28 11:51:21.566844] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:21:02.672 [2024-11-28 11:51:21.566863] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:02.672 [2024-11-28 11:51:21.566878] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:21:02.672 [2024-11-28 11:51:21.566891] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 
sqhd:0000 p:0 m:0 dnr:0 00:21:02.672 [2024-11-28 11:51:21.566904] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:21:02.672 [2024-11-28 11:51:21.566918] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:02.672 [2024-11-28 11:51:21.566931] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:21:02.672 [2024-11-28 11:51:21.566943] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:02.672 [2024-11-28 11:51:21.566964] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 3] in failed state. 00:21:02.672 [2024-11-28 11:51:21.567008] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2458160 (9): Bad file descriptor 00:21:02.672 [2024-11-28 11:51:21.570265] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 3] resetting controller 00:21:02.672 [2024-11-28 11:51:21.595167] bdev_nvme.c:2282:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 5] Resetting controller successful. 00:21:02.672 9469.80 IOPS, 36.99 MiB/s [2024-11-28T11:51:32.798Z] 9579.50 IOPS, 37.42 MiB/s [2024-11-28T11:51:32.798Z] 9660.14 IOPS, 37.73 MiB/s [2024-11-28T11:51:32.798Z] 9634.62 IOPS, 37.64 MiB/s [2024-11-28T11:51:32.798Z] 9600.56 IOPS, 37.50 MiB/s [2024-11-28T11:51:32.798Z] [2024-11-28 11:51:26.124923] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:120 nsid:1 lba:76312 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:02.672 [2024-11-28 11:51:26.124982] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:02.672 [2024-11-28 11:51:26.125017] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:76320 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:02.672 [2024-11-28 11:51:26.125032] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:02.672 [2024-11-28 11:51:26.125046] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:76328 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:02.672 [2024-11-28 11:51:26.125058] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:02.672 [2024-11-28 11:51:26.125071] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:83 nsid:1 lba:76336 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:02.672 [2024-11-28 11:51:26.125084] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:02.672 [2024-11-28 11:51:26.125097] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:89 nsid:1 lba:76344 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:02.672 [2024-11-28 11:51:26.125109] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:02.672 [2024-11-28 11:51:26.125123] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:76352 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:02.672 [2024-11-28 
11:51:26.125134] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:02.672 [2024-11-28 11:51:26.125147] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:76360 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:02.672 [2024-11-28 11:51:26.125158] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:02.672 [2024-11-28 11:51:26.125172] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:76368 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:02.672 [2024-11-28 11:51:26.125184] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:02.672 [2024-11-28 11:51:26.125198] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:75864 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:02.672 [2024-11-28 11:51:26.125210] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:02.672 [2024-11-28 11:51:26.125224] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:106 nsid:1 lba:75872 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:02.672 [2024-11-28 11:51:26.125237] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:02.672 [2024-11-28 11:51:26.125251] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:75880 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:02.672 [2024-11-28 11:51:26.125304] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:02.672 [2024-11-28 11:51:26.125322] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:75888 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:02.672 [2024-11-28 11:51:26.125335] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:02.672 [2024-11-28 11:51:26.125348] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:75896 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:02.673 [2024-11-28 11:51:26.125360] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:02.673 [2024-11-28 11:51:26.125372] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:125 nsid:1 lba:75904 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:02.673 [2024-11-28 11:51:26.125384] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:02.673 [2024-11-28 11:51:26.125397] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:100 nsid:1 lba:75912 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:02.673 [2024-11-28 11:51:26.125409] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:02.673 [2024-11-28 11:51:26.125422] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:64 nsid:1 lba:75920 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:02.673 [2024-11-28 11:51:26.125434] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:02.673 [2024-11-28 11:51:26.125447] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:98 nsid:1 lba:76376 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:02.673 [2024-11-28 11:51:26.125459] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:02.673 [2024-11-28 11:51:26.125475] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:92 nsid:1 lba:76384 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:02.673 [2024-11-28 11:51:26.125488] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:02.673 [2024-11-28 11:51:26.125501] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:76392 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:02.673 [2024-11-28 11:51:26.125513] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:02.673 [2024-11-28 11:51:26.125525] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:76400 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:02.673 [2024-11-28 11:51:26.125537] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:02.673 [2024-11-28 11:51:26.125550] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:76408 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:02.673 [2024-11-28 11:51:26.125561] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:02.673 [2024-11-28 11:51:26.125574] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:76416 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:02.673 [2024-11-28 11:51:26.125601] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:02.673 [2024-11-28 11:51:26.125617] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:66 nsid:1 lba:76424 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:02.673 [2024-11-28 11:51:26.125630] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:02.673 [2024-11-28 11:51:26.125651] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:76432 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:02.673 [2024-11-28 11:51:26.125665] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:02.673 [2024-11-28 11:51:26.125678] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:76440 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:02.673 [2024-11-28 11:51:26.125690] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:02.673 [2024-11-28 11:51:26.125704] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:122 nsid:1 lba:76448 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:02.673 [2024-11-28 11:51:26.125716] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - 
SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:02.673 [2024-11-28 11:51:26.125735] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:90 nsid:1 lba:76456 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:02.673 [2024-11-28 11:51:26.125748] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:02.673 [2024-11-28 11:51:26.125761] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:76464 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:02.673 [2024-11-28 11:51:26.125774] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:02.673 [2024-11-28 11:51:26.125787] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:76472 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:02.673 [2024-11-28 11:51:26.125799] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:02.673 [2024-11-28 11:51:26.125812] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:76480 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:02.673 [2024-11-28 11:51:26.125824] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:02.673 [2024-11-28 11:51:26.125838] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:118 nsid:1 lba:76488 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:02.673 [2024-11-28 11:51:26.125850] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:02.673 [2024-11-28 11:51:26.125863] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:109 nsid:1 lba:76496 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:02.673 [2024-11-28 11:51:26.125876] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:02.673 [2024-11-28 11:51:26.125889] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:76504 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:02.673 [2024-11-28 11:51:26.125902] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:02.673 [2024-11-28 11:51:26.125915] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:76512 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:02.673 [2024-11-28 11:51:26.125929] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:02.673 [2024-11-28 11:51:26.125942] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:96 nsid:1 lba:76520 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:02.673 [2024-11-28 11:51:26.125954] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:02.673 [2024-11-28 11:51:26.125967] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:116 nsid:1 lba:76528 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:02.673 [2024-11-28 11:51:26.125986] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 
p:0 m:0 dnr:0 00:21:02.673 [2024-11-28 11:51:26.126000] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:114 nsid:1 lba:75928 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:02.673 [2024-11-28 11:51:26.126013] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:02.673 [2024-11-28 11:51:26.126027] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:75936 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:02.673 [2024-11-28 11:51:26.126039] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:02.673 [2024-11-28 11:51:26.126053] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:84 nsid:1 lba:75944 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:02.673 [2024-11-28 11:51:26.126065] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:02.673 [2024-11-28 11:51:26.126078] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:75952 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:02.673 [2024-11-28 11:51:26.126091] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:02.673 [2024-11-28 11:51:26.126104] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:68 nsid:1 lba:75960 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:02.673 [2024-11-28 11:51:26.126117] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:02.673 [2024-11-28 11:51:26.126130] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:75968 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:02.673 [2024-11-28 11:51:26.126143] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:02.673 [2024-11-28 11:51:26.126157] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:95 nsid:1 lba:75976 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:02.673 [2024-11-28 11:51:26.126169] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:02.673 [2024-11-28 11:51:26.126183] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:112 nsid:1 lba:75984 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:02.673 [2024-11-28 11:51:26.126195] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:02.673 [2024-11-28 11:51:26.126208] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:85 nsid:1 lba:76536 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:02.673 [2024-11-28 11:51:26.126220] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:02.674 [2024-11-28 11:51:26.126234] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:76544 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:02.674 [2024-11-28 11:51:26.126246] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:02.674 [2024-11-28 
11:51:26.126260] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:78 nsid:1 lba:76552 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:02.674 [2024-11-28 11:51:26.126272] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:02.674 [2024-11-28 11:51:26.126286] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:111 nsid:1 lba:76560 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:02.674 [2024-11-28 11:51:26.126298] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:02.674 [2024-11-28 11:51:26.126327] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:76568 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:02.674 [2024-11-28 11:51:26.126343] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:02.674 [2024-11-28 11:51:26.126357] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:73 nsid:1 lba:76576 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:02.674 [2024-11-28 11:51:26.126371] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:02.674 [2024-11-28 11:51:26.126384] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:76584 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:02.674 [2024-11-28 11:51:26.126398] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:02.674 [2024-11-28 11:51:26.126411] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:76592 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:02.674 [2024-11-28 11:51:26.126424] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:02.674 [2024-11-28 11:51:26.126482] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:76600 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:02.674 [2024-11-28 11:51:26.126497] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:02.674 [2024-11-28 11:51:26.126510] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:86 nsid:1 lba:76608 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:02.674 [2024-11-28 11:51:26.126525] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:02.674 [2024-11-28 11:51:26.126540] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:81 nsid:1 lba:76616 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:02.674 [2024-11-28 11:51:26.126591] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:02.674 [2024-11-28 11:51:26.126608] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:113 nsid:1 lba:76624 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:02.674 [2024-11-28 11:51:26.126621] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:02.674 [2024-11-28 11:51:26.126637] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:76632 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:02.674 [2024-11-28 11:51:26.126650] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:02.674 [2024-11-28 11:51:26.126665] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:91 nsid:1 lba:76640 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:02.674 [2024-11-28 11:51:26.126678] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:02.674 [2024-11-28 11:51:26.126692] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:119 nsid:1 lba:76648 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:02.674 [2024-11-28 11:51:26.126705] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:02.674 [2024-11-28 11:51:26.126719] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:76656 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:02.674 [2024-11-28 11:51:26.126732] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:02.674 [2024-11-28 11:51:26.126746] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:110 nsid:1 lba:76664 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:02.674 [2024-11-28 11:51:26.126759] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:02.674 [2024-11-28 11:51:26.126811] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:76672 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:02.674 [2024-11-28 11:51:26.126824] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:02.674 [2024-11-28 11:51:26.126838] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:93 nsid:1 lba:75992 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:02.674 [2024-11-28 11:51:26.126851] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:02.674 [2024-11-28 11:51:26.126864] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:65 nsid:1 lba:76000 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:02.674 [2024-11-28 11:51:26.126876] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:02.674 [2024-11-28 11:51:26.126890] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:124 nsid:1 lba:76008 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:02.674 [2024-11-28 11:51:26.126904] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:02.674 [2024-11-28 11:51:26.126920] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:75 nsid:1 lba:76016 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:02.674 [2024-11-28 11:51:26.126932] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:02.674 [2024-11-28 11:51:26.126947] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ 
sqid:1 cid:70 nsid:1 lba:76024 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:02.674 [2024-11-28 11:51:26.126960] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:02.674 [2024-11-28 11:51:26.126974] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:76032 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:02.674 [2024-11-28 11:51:26.126986] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:02.674 [2024-11-28 11:51:26.126999] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:76040 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:02.674 [2024-11-28 11:51:26.127012] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:02.674 [2024-11-28 11:51:26.127026] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:76048 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:02.674 [2024-11-28 11:51:26.127038] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:02.674 [2024-11-28 11:51:26.127052] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:71 nsid:1 lba:76056 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:02.674 [2024-11-28 11:51:26.127065] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:02.675 [2024-11-28 11:51:26.127078] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:76064 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:02.675 [2024-11-28 11:51:26.127090] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:02.675 [2024-11-28 11:51:26.127104] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:67 nsid:1 lba:76072 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:02.675 [2024-11-28 11:51:26.127116] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:02.675 [2024-11-28 11:51:26.127130] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:80 nsid:1 lba:76080 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:02.675 [2024-11-28 11:51:26.127148] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:02.675 [2024-11-28 11:51:26.127163] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:77 nsid:1 lba:76088 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:02.675 [2024-11-28 11:51:26.127175] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:02.675 [2024-11-28 11:51:26.127188] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:123 nsid:1 lba:76096 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:02.675 [2024-11-28 11:51:26.127200] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:02.675 [2024-11-28 11:51:26.127214] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:76104 len:8 SGL 
TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:02.675 [2024-11-28 11:51:26.127227] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:02.675 [2024-11-28 11:51:26.127241] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:76112 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:02.675 [2024-11-28 11:51:26.127253] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:02.675 [2024-11-28 11:51:26.127267] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:76680 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:02.675 [2024-11-28 11:51:26.127279] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:02.675 [2024-11-28 11:51:26.127293] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:76688 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:02.675 [2024-11-28 11:51:26.127305] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:02.675 [2024-11-28 11:51:26.127319] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:76696 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:02.675 [2024-11-28 11:51:26.127343] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:02.675 [2024-11-28 11:51:26.127371] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:102 nsid:1 lba:76704 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:02.675 [2024-11-28 11:51:26.127385] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:02.675 [2024-11-28 11:51:26.127398] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:76712 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:02.675 [2024-11-28 11:51:26.127411] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:02.675 [2024-11-28 11:51:26.127424] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:76720 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:02.675 [2024-11-28 11:51:26.127436] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:02.675 [2024-11-28 11:51:26.127462] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:76728 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:02.675 [2024-11-28 11:51:26.127475] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:02.675 [2024-11-28 11:51:26.127489] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:94 nsid:1 lba:76736 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:02.675 [2024-11-28 11:51:26.127501] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:02.675 [2024-11-28 11:51:26.127521] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:99 nsid:1 lba:76744 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:02.675 
[2024-11-28 11:51:26.127534] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:02.675 [2024-11-28 11:51:26.127548] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:69 nsid:1 lba:76752 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:02.675 [2024-11-28 11:51:26.127560] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:02.675 [2024-11-28 11:51:26.127574] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:115 nsid:1 lba:76760 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:02.675 [2024-11-28 11:51:26.127586] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:02.675 [2024-11-28 11:51:26.127599] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:74 nsid:1 lba:76768 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:02.675 [2024-11-28 11:51:26.127612] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:02.675 [2024-11-28 11:51:26.127625] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:76776 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:02.675 [2024-11-28 11:51:26.127637] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:02.675 [2024-11-28 11:51:26.127650] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:76784 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:02.675 [2024-11-28 11:51:26.127662] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:02.675 [2024-11-28 11:51:26.127675] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:76792 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:02.675 [2024-11-28 11:51:26.127688] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:02.675 [2024-11-28 11:51:26.127701] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:76120 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:02.675 [2024-11-28 11:51:26.127714] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:02.675 [2024-11-28 11:51:26.127727] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:76128 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:02.675 [2024-11-28 11:51:26.127739] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:02.675 [2024-11-28 11:51:26.127752] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:117 nsid:1 lba:76136 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:02.675 [2024-11-28 11:51:26.127765] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:02.675 [2024-11-28 11:51:26.127778] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:108 nsid:1 lba:76144 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:02.675 [2024-11-28 11:51:26.127790] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:02.675 [2024-11-28 11:51:26.127804] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:76152 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:02.675 [2024-11-28 11:51:26.127817] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:02.675 [2024-11-28 11:51:26.127831] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:76160 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:02.675 [2024-11-28 11:51:26.127848] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:02.675 [2024-11-28 11:51:26.127862] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:121 nsid:1 lba:76168 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:02.675 [2024-11-28 11:51:26.127875] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:02.675 [2024-11-28 11:51:26.127894] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:76176 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:02.675 [2024-11-28 11:51:26.127907] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:02.675 [2024-11-28 11:51:26.127920] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:88 nsid:1 lba:76184 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:02.675 [2024-11-28 11:51:26.127932] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:02.676 [2024-11-28 11:51:26.127946] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:76192 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:02.676 [2024-11-28 11:51:26.127959] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:02.676 [2024-11-28 11:51:26.127973] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:76200 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:02.676 [2024-11-28 11:51:26.127992] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:02.676 [2024-11-28 11:51:26.128006] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:104 nsid:1 lba:76208 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:02.676 [2024-11-28 11:51:26.128019] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:02.676 [2024-11-28 11:51:26.128032] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:101 nsid:1 lba:76216 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:02.676 [2024-11-28 11:51:26.128044] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:02.676 [2024-11-28 11:51:26.128057] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:82 nsid:1 lba:76224 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:02.676 [2024-11-28 11:51:26.128069] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:02.676 [2024-11-28 11:51:26.128083] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:87 nsid:1 lba:76232 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:02.676 [2024-11-28 11:51:26.128095] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:02.676 [2024-11-28 11:51:26.128110] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:97 nsid:1 lba:76240 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:02.676 [2024-11-28 11:51:26.128122] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:02.676 [2024-11-28 11:51:26.128135] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:76800 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:02.676 [2024-11-28 11:51:26.128147] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:02.676 [2024-11-28 11:51:26.128161] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:76808 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:02.676 [2024-11-28 11:51:26.128173] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:02.676 [2024-11-28 11:51:26.128186] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:76816 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:02.676 [2024-11-28 11:51:26.128204] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:02.676 [2024-11-28 11:51:26.128218] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:76824 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:02.676 [2024-11-28 11:51:26.128230] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:02.676 [2024-11-28 11:51:26.128243] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:107 nsid:1 lba:76832 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:02.676 [2024-11-28 11:51:26.128255] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:02.676 [2024-11-28 11:51:26.128269] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:76840 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:02.676 [2024-11-28 11:51:26.128280] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:02.676 [2024-11-28 11:51:26.128303] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:76848 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:02.676 [2024-11-28 11:51:26.128318] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:02.676 [2024-11-28 11:51:26.128333] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:103 nsid:1 lba:76856 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:02.676 [2024-11-28 11:51:26.128346] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 
sqhd:0000 p:0 m:0 dnr:0 00:21:02.676 [2024-11-28 11:51:26.128359] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:79 nsid:1 lba:76864 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:02.676 [2024-11-28 11:51:26.128372] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:02.676 [2024-11-28 11:51:26.128386] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:105 nsid:1 lba:76872 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:02.676 [2024-11-28 11:51:26.128400] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:02.676 [2024-11-28 11:51:26.128413] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:76880 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:02.676 [2024-11-28 11:51:26.128431] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:02.676 [2024-11-28 11:51:26.128446] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:76248 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:02.676 [2024-11-28 11:51:26.128458] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:02.676 [2024-11-28 11:51:26.128472] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:72 nsid:1 lba:76256 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:02.676 [2024-11-28 11:51:26.128484] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:02.676 [2024-11-28 11:51:26.128524] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:76264 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:02.676 [2024-11-28 11:51:26.128541] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:02.676 [2024-11-28 11:51:26.128555] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:76272 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:02.676 [2024-11-28 11:51:26.128568] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:02.676 [2024-11-28 11:51:26.128589] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:76280 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:02.676 [2024-11-28 11:51:26.128603] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:02.676 [2024-11-28 11:51:26.128616] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:76 nsid:1 lba:76288 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:02.676 [2024-11-28 11:51:26.128629] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:02.676 [2024-11-28 11:51:26.128642] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:76296 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:02.676 [2024-11-28 11:51:26.128654] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:02.676 
[2024-11-28 11:51:26.128677] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x25b7db0 is same with the state(6) to be set 00:21:02.676 [2024-11-28 11:51:26.128692] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:21:02.676 [2024-11-28 11:51:26.128702] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:21:02.676 [2024-11-28 11:51:26.128711] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:76304 len:8 PRP1 0x0 PRP2 0x0 00:21:02.676 [2024-11-28 11:51:26.128723] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:02.676 [2024-11-28 11:51:26.128792] bdev_nvme.c:2052:bdev_nvme_failover_trid: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 5] Start failover from 10.0.0.3:4422 to 10.0.0.3:4420 00:21:02.676 [2024-11-28 11:51:26.128848] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:21:02.676 [2024-11-28 11:51:26.128867] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:02.676 [2024-11-28 11:51:26.128880] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:21:02.676 [2024-11-28 11:51:26.128893] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:02.676 [2024-11-28 11:51:26.128906] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:21:02.676 [2024-11-28 11:51:26.128918] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:02.676 [2024-11-28 11:51:26.128931] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:21:02.676 [2024-11-28 11:51:26.128943] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:02.676 [2024-11-28 11:51:26.128956] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 5] in failed state. 00:21:02.677 [2024-11-28 11:51:26.132221] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 5] resetting controller 00:21:02.677 [2024-11-28 11:51:26.132259] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2458160 (9): Bad file descriptor 00:21:02.677 [2024-11-28 11:51:26.157903] bdev_nvme.c:2282:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 6] Resetting controller successful. 
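The abort flood above is the expected effect of deleting the submission queue during path failover: every queued READ/WRITE is completed manually with ABORTED - SQ DELETION, the trid fails over from 10.0.0.3:4422 to 10.0.0.3:4420, and the controller is reset on the new path. A minimal sketch of the alternate-path setup this behavior depends on, using only rpc.py calls that appear verbatim later in this log (addresses, ports, subsystem NQN, and the bdevperf RPC socket are taken from that output; paths are abbreviated relative to the SPDK repo root, the log uses /home/vagrant/spdk_repo/spdk/...):

  # target side: expose the subsystem on the extra ports used for failover
  scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4421
  scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4422

  # initiator side (bdevperf RPC socket): the first attach creates NVMe0,
  # repeated attaches with the same bdev name and -x failover add alternate paths
  scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -x failover
  scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.3 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -x failover
  scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.3 -s 4422 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -x failover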
00:21:02.677 9603.40 IOPS, 37.51 MiB/s [2024-11-28T11:51:32.803Z] 9656.09 IOPS, 37.72 MiB/s [2024-11-28T11:51:32.803Z] 9701.67 IOPS, 37.90 MiB/s [2024-11-28T11:51:32.803Z] 9741.85 IOPS, 38.05 MiB/s [2024-11-28T11:51:32.803Z] 9776.86 IOPS, 38.19 MiB/s [2024-11-28T11:51:32.803Z] 9805.80 IOPS, 38.30 MiB/s 00:21:02.677 Latency(us) 00:21:02.677 [2024-11-28T11:51:32.803Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:21:02.677 Job: NVMe0n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:21:02.677 Verification LBA range: start 0x0 length 0x4000 00:21:02.677 NVMe0n1 : 15.01 9806.88 38.31 251.94 0.00 12698.39 498.97 15728.64 00:21:02.677 [2024-11-28T11:51:32.803Z] =================================================================================================================== 00:21:02.677 [2024-11-28T11:51:32.803Z] Total : 9806.88 38.31 251.94 0.00 12698.39 498.97 15728.64 00:21:02.677 Received shutdown signal, test time was about 15.000000 seconds 00:21:02.677 00:21:02.677 Latency(us) 00:21:02.677 [2024-11-28T11:51:32.803Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:21:02.677 [2024-11-28T11:51:32.803Z] =================================================================================================================== 00:21:02.677 [2024-11-28T11:51:32.803Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:21:02.677 11:51:32 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@65 -- # grep -c 'Resetting controller successful' 00:21:02.677 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:21:02.677 11:51:32 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@65 -- # count=3 00:21:02.677 11:51:32 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@67 -- # (( count != 3 )) 00:21:02.677 11:51:32 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@73 -- # bdevperf_pid=92562 00:21:02.677 11:51:32 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@72 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 1 -f 00:21:02.677 11:51:32 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@75 -- # waitforlisten 92562 /var/tmp/bdevperf.sock 00:21:02.677 11:51:32 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@835 -- # '[' -z 92562 ']' 00:21:02.677 11:51:32 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:21:02.677 11:51:32 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@840 -- # local max_retries=100 00:21:02.677 11:51:32 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 
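The second bdevperf instance above is launched with -z, so it starts idle and listens on /var/tmp/bdevperf.sock until it is configured and driven over RPC. A rough sketch of that flow, assembled from commands that appear in the surrounding trace (the backgrounding with & is illustrative shell plumbing, not the harness's exact code):

  # start bdevperf in wait-for-RPC mode on its own socket
  /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 1 -f &

  # create the NVMe bdev it will exercise (first attach creates NVMe0)
  /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -x failover

  # kick off the configured verify job; this prints the JSON result block seen further below
  /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests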
00:21:02.677 11:51:32 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@844 -- # xtrace_disable 00:21:02.677 11:51:32 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:21:03.245 11:51:33 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:21:03.245 11:51:33 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@868 -- # return 0 00:21:03.245 11:51:33 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@76 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4421 00:21:03.503 [2024-11-28 11:51:33.401105] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4421 *** 00:21:03.503 11:51:33 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@77 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4422 00:21:03.503 [2024-11-28 11:51:33.625226] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4422 *** 00:21:03.763 11:51:33 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@78 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -x failover 00:21:04.021 NVMe0n1 00:21:04.021 11:51:33 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@79 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.3 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -x failover 00:21:04.280 00:21:04.280 11:51:34 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@80 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.3 -s 4422 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -x failover 00:21:04.539 00:21:04.539 11:51:34 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@82 -- # grep -q NVMe0 00:21:04.539 11:51:34 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@82 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:21:04.798 11:51:34 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@84 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_detach_controller NVMe0 -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:21:05.057 11:51:35 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@87 -- # sleep 3 00:21:08.342 11:51:38 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@88 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:21:08.342 11:51:38 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@88 -- # grep -q NVMe0 00:21:08.342 11:51:38 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@90 -- # run_test_pid=92639 00:21:08.342 11:51:38 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@89 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:21:08.342 11:51:38 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@92 -- # wait 92639 00:21:09.721 { 00:21:09.721 "results": [ 00:21:09.721 { 00:21:09.721 "job": "NVMe0n1", 00:21:09.721 "core_mask": "0x1", 00:21:09.721 "workload": "verify", 00:21:09.721 "status": "finished", 00:21:09.721 "verify_range": { 00:21:09.721 "start": 0, 00:21:09.721 "length": 16384 00:21:09.721 }, 00:21:09.721 "queue_depth": 128, 
00:21:09.721 "io_size": 4096, 00:21:09.721 "runtime": 1.008119, 00:21:09.721 "iops": 9506.814175707432, 00:21:09.721 "mibps": 37.135992873857155, 00:21:09.721 "io_failed": 0, 00:21:09.721 "io_timeout": 0, 00:21:09.721 "avg_latency_us": 13388.692183942934, 00:21:09.721 "min_latency_us": 923.4618181818182, 00:21:09.721 "max_latency_us": 15609.483636363637 00:21:09.721 } 00:21:09.721 ], 00:21:09.721 "core_count": 1 00:21:09.721 } 00:21:09.721 11:51:39 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@94 -- # cat /home/vagrant/spdk_repo/spdk/test/nvmf/host/try.txt 00:21:09.721 [2024-11-28 11:51:32.234603] Starting SPDK v25.01-pre git sha1 35cd3e84d / DPDK 24.11.0-rc4 initialization... 00:21:09.721 [2024-11-28 11:51:32.234735] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid92562 ] 00:21:09.721 [2024-11-28 11:51:32.364328] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.11.0-rc4 is used. There is no support for it in SPDK. Enabled only for validation. 00:21:09.721 [2024-11-28 11:51:32.385292] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:21:09.721 [2024-11-28 11:51:32.426213] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:21:09.721 [2024-11-28 11:51:32.494078] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:21:09.721 [2024-11-28 11:51:35.024252] bdev_nvme.c:2052:bdev_nvme_failover_trid: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 7] Start failover from 10.0.0.3:4420 to 10.0.0.3:4421 00:21:09.721 [2024-11-28 11:51:35.024357] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:21:09.721 [2024-11-28 11:51:35.024381] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:09.721 [2024-11-28 11:51:35.024396] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:21:09.721 [2024-11-28 11:51:35.024408] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:09.721 [2024-11-28 11:51:35.024420] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:21:09.721 [2024-11-28 11:51:35.024433] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:09.721 [2024-11-28 11:51:35.024445] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:21:09.721 [2024-11-28 11:51:35.024457] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:09.721 [2024-11-28 11:51:35.024470] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 7] in failed state. 
00:21:09.721 [2024-11-28 11:51:35.024508] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 7] resetting controller 00:21:09.721 [2024-11-28 11:51:35.024534] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ffe160 (9): Bad file descriptor 00:21:09.721 [2024-11-28 11:51:35.033220] bdev_nvme.c:2282:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 10] Resetting controller successful. 00:21:09.721 Running I/O for 1 seconds... 00:21:09.721 9448.00 IOPS, 36.91 MiB/s 00:21:09.721 Latency(us) 00:21:09.721 [2024-11-28T11:51:39.847Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:21:09.721 Job: NVMe0n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:21:09.721 Verification LBA range: start 0x0 length 0x4000 00:21:09.721 NVMe0n1 : 1.01 9506.81 37.14 0.00 0.00 13388.69 923.46 15609.48 00:21:09.721 [2024-11-28T11:51:39.847Z] =================================================================================================================== 00:21:09.721 [2024-11-28T11:51:39.847Z] Total : 9506.81 37.14 0.00 0.00 13388.69 923.46 15609.48 00:21:09.721 11:51:39 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@95 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:21:09.721 11:51:39 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@95 -- # grep -q NVMe0 00:21:09.721 11:51:39 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@98 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_detach_controller NVMe0 -t tcp -a 10.0.0.3 -s 4422 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:21:09.981 11:51:40 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@99 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:21:09.981 11:51:40 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@99 -- # grep -q NVMe0 00:21:10.240 11:51:40 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@100 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_detach_controller NVMe0 -t tcp -a 10.0.0.3 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:21:10.499 11:51:40 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@101 -- # sleep 3 00:21:13.787 11:51:43 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@103 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:21:13.787 11:51:43 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@103 -- # grep -q NVMe0 00:21:13.787 11:51:43 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@108 -- # killprocess 92562 00:21:13.787 11:51:43 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@954 -- # '[' -z 92562 ']' 00:21:13.787 11:51:43 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@958 -- # kill -0 92562 00:21:13.787 11:51:43 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@959 -- # uname 00:21:13.787 11:51:43 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:21:13.787 11:51:43 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 92562 00:21:13.787 killing process with pid 92562 00:21:13.787 11:51:43 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:21:13.787 11:51:43 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:21:13.787 11:51:43 
nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@972 -- # echo 'killing process with pid 92562' 00:21:13.787 11:51:43 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@973 -- # kill 92562 00:21:13.787 11:51:43 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@978 -- # wait 92562 00:21:14.046 11:51:44 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@110 -- # sync 00:21:14.046 11:51:44 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@111 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:21:14.305 11:51:44 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@113 -- # trap - SIGINT SIGTERM EXIT 00:21:14.305 11:51:44 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@115 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/host/try.txt 00:21:14.305 11:51:44 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@116 -- # nvmftestfini 00:21:14.305 11:51:44 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@516 -- # nvmfcleanup 00:21:14.305 11:51:44 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@121 -- # sync 00:21:14.306 11:51:44 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:21:14.306 11:51:44 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@124 -- # set +e 00:21:14.306 11:51:44 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@125 -- # for i in {1..20} 00:21:14.306 11:51:44 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:21:14.306 rmmod nvme_tcp 00:21:14.306 rmmod nvme_fabrics 00:21:14.564 rmmod nvme_keyring 00:21:14.564 11:51:44 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:21:14.564 11:51:44 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@128 -- # set -e 00:21:14.564 11:51:44 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@129 -- # return 0 00:21:14.564 11:51:44 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@517 -- # '[' -n 92321 ']' 00:21:14.564 11:51:44 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@518 -- # killprocess 92321 00:21:14.564 11:51:44 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@954 -- # '[' -z 92321 ']' 00:21:14.564 11:51:44 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@958 -- # kill -0 92321 00:21:14.564 11:51:44 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@959 -- # uname 00:21:14.564 11:51:44 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:21:14.565 11:51:44 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 92321 00:21:14.565 killing process with pid 92321 00:21:14.565 11:51:44 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:21:14.565 11:51:44 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:21:14.565 11:51:44 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@972 -- # echo 'killing process with pid 92321' 00:21:14.565 11:51:44 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@973 -- # kill 92321 00:21:14.565 11:51:44 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@978 -- # wait 92321 00:21:14.824 11:51:44 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:21:14.824 11:51:44 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:21:14.824 11:51:44 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@524 -- # 
nvmf_tcp_fini 00:21:14.824 11:51:44 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@297 -- # iptr 00:21:14.824 11:51:44 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:21:14.824 11:51:44 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@791 -- # iptables-save 00:21:14.824 11:51:44 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@791 -- # iptables-restore 00:21:14.824 11:51:44 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@298 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:21:14.824 11:51:44 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@299 -- # nvmf_veth_fini 00:21:14.824 11:51:44 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@233 -- # ip link set nvmf_init_br nomaster 00:21:14.824 11:51:44 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@234 -- # ip link set nvmf_init_br2 nomaster 00:21:14.824 11:51:44 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@235 -- # ip link set nvmf_tgt_br nomaster 00:21:14.824 11:51:44 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@236 -- # ip link set nvmf_tgt_br2 nomaster 00:21:14.824 11:51:44 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@237 -- # ip link set nvmf_init_br down 00:21:14.824 11:51:44 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@238 -- # ip link set nvmf_init_br2 down 00:21:14.824 11:51:44 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@239 -- # ip link set nvmf_tgt_br down 00:21:14.824 11:51:44 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@240 -- # ip link set nvmf_tgt_br2 down 00:21:14.824 11:51:44 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@241 -- # ip link delete nvmf_br type bridge 00:21:14.824 11:51:44 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@242 -- # ip link delete nvmf_init_if 00:21:14.824 11:51:44 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@243 -- # ip link delete nvmf_init_if2 00:21:14.824 11:51:44 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@244 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:21:15.089 11:51:44 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@245 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:21:15.089 11:51:44 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@246 -- # remove_spdk_ns 00:21:15.089 11:51:44 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:21:15.089 11:51:44 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:21:15.089 11:51:44 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:21:15.089 11:51:45 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@300 -- # return 0 00:21:15.089 00:21:15.089 real 0m32.551s 00:21:15.089 user 2m5.661s 00:21:15.089 sys 0m5.283s 00:21:15.089 11:51:45 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@1130 -- # xtrace_disable 00:21:15.089 ************************************ 00:21:15.089 11:51:45 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:21:15.089 END TEST nvmf_failover 00:21:15.089 ************************************ 00:21:15.089 11:51:45 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@26 -- # run_test nvmf_host_discovery /home/vagrant/spdk_repo/spdk/test/nvmf/host/discovery.sh --transport=tcp 00:21:15.090 11:51:45 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:21:15.090 11:51:45 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1111 -- # xtrace_disable 00:21:15.090 11:51:45 nvmf_tcp.nvmf_host 
-- common/autotest_common.sh@10 -- # set +x 00:21:15.090 ************************************ 00:21:15.090 START TEST nvmf_host_discovery 00:21:15.090 ************************************ 00:21:15.090 11:51:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/host/discovery.sh --transport=tcp 00:21:15.090 * Looking for test storage... 00:21:15.090 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/host 00:21:15.090 11:51:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:21:15.090 11:51:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:21:15.090 11:51:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1693 -- # lcov --version 00:21:15.355 11:51:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:21:15.355 11:51:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:21:15.355 11:51:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@333 -- # local ver1 ver1_l 00:21:15.355 11:51:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@334 -- # local ver2 ver2_l 00:21:15.355 11:51:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@336 -- # IFS=.-: 00:21:15.355 11:51:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@336 -- # read -ra ver1 00:21:15.355 11:51:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@337 -- # IFS=.-: 00:21:15.355 11:51:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@337 -- # read -ra ver2 00:21:15.355 11:51:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@338 -- # local 'op=<' 00:21:15.355 11:51:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@340 -- # ver1_l=2 00:21:15.355 11:51:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@341 -- # ver2_l=1 00:21:15.355 11:51:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:21:15.355 11:51:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@344 -- # case "$op" in 00:21:15.355 11:51:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@345 -- # : 1 00:21:15.355 11:51:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@364 -- # (( v = 0 )) 00:21:15.355 11:51:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:21:15.355 11:51:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@365 -- # decimal 1 00:21:15.355 11:51:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@353 -- # local d=1 00:21:15.355 11:51:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:21:15.355 11:51:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@355 -- # echo 1 00:21:15.355 11:51:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@365 -- # ver1[v]=1 00:21:15.355 11:51:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@366 -- # decimal 2 00:21:15.355 11:51:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@353 -- # local d=2 00:21:15.355 11:51:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:21:15.355 11:51:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@355 -- # echo 2 00:21:15.355 11:51:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@366 -- # ver2[v]=2 00:21:15.355 11:51:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:21:15.355 11:51:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:21:15.355 11:51:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@368 -- # return 0 00:21:15.355 11:51:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:21:15.355 11:51:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:21:15.355 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:21:15.355 --rc genhtml_branch_coverage=1 00:21:15.355 --rc genhtml_function_coverage=1 00:21:15.355 --rc genhtml_legend=1 00:21:15.355 --rc geninfo_all_blocks=1 00:21:15.355 --rc geninfo_unexecuted_blocks=1 00:21:15.355 00:21:15.355 ' 00:21:15.355 11:51:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:21:15.355 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:21:15.355 --rc genhtml_branch_coverage=1 00:21:15.355 --rc genhtml_function_coverage=1 00:21:15.355 --rc genhtml_legend=1 00:21:15.355 --rc geninfo_all_blocks=1 00:21:15.355 --rc geninfo_unexecuted_blocks=1 00:21:15.355 00:21:15.355 ' 00:21:15.355 11:51:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:21:15.355 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:21:15.355 --rc genhtml_branch_coverage=1 00:21:15.355 --rc genhtml_function_coverage=1 00:21:15.355 --rc genhtml_legend=1 00:21:15.355 --rc geninfo_all_blocks=1 00:21:15.355 --rc geninfo_unexecuted_blocks=1 00:21:15.355 00:21:15.355 ' 00:21:15.355 11:51:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:21:15.356 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:21:15.356 --rc genhtml_branch_coverage=1 00:21:15.356 --rc genhtml_function_coverage=1 00:21:15.356 --rc genhtml_legend=1 00:21:15.356 --rc geninfo_all_blocks=1 00:21:15.356 --rc geninfo_unexecuted_blocks=1 00:21:15.356 00:21:15.356 ' 00:21:15.356 11:51:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:21:15.356 11:51:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@7 -- # uname -s 00:21:15.356 11:51:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- 
nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:21:15.356 11:51:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:21:15.356 11:51:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:21:15.356 11:51:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:21:15.356 11:51:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:21:15.356 11:51:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:21:15.356 11:51:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:21:15.356 11:51:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:21:15.356 11:51:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:21:15.356 11:51:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:21:15.356 11:51:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:f820f793-c892-4aa4-a8a4-5ed3fda41d6c 00:21:15.356 11:51:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@18 -- # NVME_HOSTID=f820f793-c892-4aa4-a8a4-5ed3fda41d6c 00:21:15.356 11:51:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:21:15.356 11:51:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:21:15.356 11:51:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:21:15.356 11:51:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:21:15.356 11:51:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:21:15.356 11:51:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@15 -- # shopt -s extglob 00:21:15.356 11:51:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:21:15.356 11:51:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:21:15.356 11:51:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:21:15.356 11:51:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:15.356 11:51:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:15.356 11:51:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:15.356 11:51:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- paths/export.sh@5 -- # export PATH 00:21:15.356 11:51:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:15.356 11:51:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@51 -- # : 0 00:21:15.356 11:51:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:21:15.356 11:51:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:21:15.356 11:51:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:21:15.356 11:51:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:21:15.356 11:51:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:21:15.356 11:51:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:21:15.356 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:21:15.356 11:51:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:21:15.356 11:51:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:21:15.356 11:51:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@55 -- # have_pci_nics=0 00:21:15.356 11:51:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@11 -- # '[' tcp == rdma ']' 00:21:15.356 11:51:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@16 -- # DISCOVERY_PORT=8009 00:21:15.356 11:51:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@17 -- 
# DISCOVERY_NQN=nqn.2014-08.org.nvmexpress.discovery 00:21:15.356 11:51:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@20 -- # NQN=nqn.2016-06.io.spdk:cnode 00:21:15.356 11:51:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@22 -- # HOST_NQN=nqn.2021-12.io.spdk:test 00:21:15.356 11:51:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@23 -- # HOST_SOCK=/tmp/host.sock 00:21:15.356 11:51:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@25 -- # nvmftestinit 00:21:15.356 11:51:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:21:15.356 11:51:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:21:15.356 11:51:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@476 -- # prepare_net_devs 00:21:15.356 11:51:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@438 -- # local -g is_hw=no 00:21:15.356 11:51:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@440 -- # remove_spdk_ns 00:21:15.356 11:51:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:21:15.356 11:51:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:21:15.356 11:51:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:21:15.356 11:51:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@442 -- # [[ virt != virt ]] 00:21:15.356 11:51:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@444 -- # [[ no == yes ]] 00:21:15.356 11:51:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@451 -- # [[ virt == phy ]] 00:21:15.356 11:51:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@454 -- # [[ virt == phy-fallback ]] 00:21:15.356 11:51:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@459 -- # [[ tcp == tcp ]] 00:21:15.356 11:51:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@460 -- # nvmf_veth_init 00:21:15.356 11:51:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@145 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:21:15.356 11:51:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@146 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.2 00:21:15.356 11:51:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@147 -- # NVMF_FIRST_TARGET_IP=10.0.0.3 00:21:15.356 11:51:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@148 -- # NVMF_SECOND_TARGET_IP=10.0.0.4 00:21:15.356 11:51:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@149 -- # NVMF_INITIATOR_IP=10.0.0.1 00:21:15.356 11:51:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@150 -- # NVMF_BRIDGE=nvmf_br 00:21:15.356 11:51:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@151 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:21:15.356 11:51:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@152 -- # NVMF_INITIATOR_INTERFACE2=nvmf_init_if2 00:21:15.356 11:51:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@153 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:21:15.356 11:51:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@154 -- # NVMF_INITIATOR_BRIDGE2=nvmf_init_br2 00:21:15.356 11:51:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@155 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:21:15.356 11:51:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@156 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 
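The NVMF_TARGET_NS_CMD array defined just above is the prefix the common helpers expand in front of target-side commands so they execute inside the nvmf_tgt_ns_spdk namespace (it is later prepended to NVMF_APP at nvmf/common.sh@227). A minimal usage sketch, with the namespace and interface names taken from this trace and the example command purely illustrative:
NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk
NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE")
# any command meant for the target side is simply prefixed with the array:
"${NVMF_TARGET_NS_CMD[@]}" ip addr show dev nvmf_tgt_if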
00:21:15.356 11:51:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@157 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:21:15.356 11:51:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@158 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:21:15.356 11:51:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@159 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:21:15.356 11:51:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@160 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:21:15.356 11:51:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@162 -- # ip link set nvmf_init_br nomaster 00:21:15.356 Cannot find device "nvmf_init_br" 00:21:15.356 11:51:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@162 -- # true 00:21:15.356 11:51:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@163 -- # ip link set nvmf_init_br2 nomaster 00:21:15.356 Cannot find device "nvmf_init_br2" 00:21:15.356 11:51:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@163 -- # true 00:21:15.356 11:51:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@164 -- # ip link set nvmf_tgt_br nomaster 00:21:15.356 Cannot find device "nvmf_tgt_br" 00:21:15.356 11:51:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@164 -- # true 00:21:15.356 11:51:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@165 -- # ip link set nvmf_tgt_br2 nomaster 00:21:15.357 Cannot find device "nvmf_tgt_br2" 00:21:15.357 11:51:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@165 -- # true 00:21:15.357 11:51:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@166 -- # ip link set nvmf_init_br down 00:21:15.357 Cannot find device "nvmf_init_br" 00:21:15.357 11:51:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@166 -- # true 00:21:15.357 11:51:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@167 -- # ip link set nvmf_init_br2 down 00:21:15.357 Cannot find device "nvmf_init_br2" 00:21:15.357 11:51:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@167 -- # true 00:21:15.357 11:51:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@168 -- # ip link set nvmf_tgt_br down 00:21:15.357 Cannot find device "nvmf_tgt_br" 00:21:15.357 11:51:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@168 -- # true 00:21:15.357 11:51:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@169 -- # ip link set nvmf_tgt_br2 down 00:21:15.357 Cannot find device "nvmf_tgt_br2" 00:21:15.357 11:51:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@169 -- # true 00:21:15.357 11:51:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@170 -- # ip link delete nvmf_br type bridge 00:21:15.357 Cannot find device "nvmf_br" 00:21:15.357 11:51:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@170 -- # true 00:21:15.357 11:51:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@171 -- # ip link delete nvmf_init_if 00:21:15.357 Cannot find device "nvmf_init_if" 00:21:15.357 11:51:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@171 -- # true 00:21:15.357 11:51:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@172 -- # ip link delete nvmf_init_if2 00:21:15.357 Cannot find device "nvmf_init_if2" 00:21:15.357 11:51:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@172 -- # true 00:21:15.357 11:51:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@173 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:21:15.357 Cannot open network namespace "nvmf_tgt_ns_spdk": No such 
file or directory 00:21:15.357 11:51:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@173 -- # true 00:21:15.357 11:51:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@174 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:21:15.357 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:21:15.357 11:51:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@174 -- # true 00:21:15.357 11:51:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@177 -- # ip netns add nvmf_tgt_ns_spdk 00:21:15.357 11:51:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@180 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:21:15.357 11:51:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@181 -- # ip link add nvmf_init_if2 type veth peer name nvmf_init_br2 00:21:15.357 11:51:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@182 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:21:15.357 11:51:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@183 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:21:15.357 11:51:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:21:15.615 11:51:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@187 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:21:15.615 11:51:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@190 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:21:15.615 11:51:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@191 -- # ip addr add 10.0.0.2/24 dev nvmf_init_if2 00:21:15.615 11:51:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@192 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if 00:21:15.615 11:51:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@193 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2 00:21:15.615 11:51:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@196 -- # ip link set nvmf_init_if up 00:21:15.615 11:51:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@197 -- # ip link set nvmf_init_if2 up 00:21:15.615 11:51:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@198 -- # ip link set nvmf_init_br up 00:21:15.615 11:51:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@199 -- # ip link set nvmf_init_br2 up 00:21:15.615 11:51:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@200 -- # ip link set nvmf_tgt_br up 00:21:15.615 11:51:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@201 -- # ip link set nvmf_tgt_br2 up 00:21:15.615 11:51:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@202 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:21:15.615 11:51:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@203 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:21:15.615 11:51:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@204 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:21:15.615 11:51:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@207 -- # ip link add nvmf_br type bridge 00:21:15.615 11:51:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@208 -- # ip link set nvmf_br up 00:21:15.615 11:51:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@211 -- # ip link set nvmf_init_br master nvmf_br 00:21:15.615 11:51:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- 
nvmf/common.sh@212 -- # ip link set nvmf_init_br2 master nvmf_br 00:21:15.615 11:51:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@213 -- # ip link set nvmf_tgt_br master nvmf_br 00:21:15.615 11:51:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@214 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:21:15.615 11:51:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@217 -- # ipts -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:21:15.615 11:51:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT' 00:21:15.615 11:51:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@218 -- # ipts -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT 00:21:15.615 11:51:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT' 00:21:15.615 11:51:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@219 -- # ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:21:15.615 11:51:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@790 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT' 00:21:15.615 11:51:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@222 -- # ping -c 1 10.0.0.3 00:21:15.615 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:21:15.615 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.059 ms 00:21:15.615 00:21:15.615 --- 10.0.0.3 ping statistics --- 00:21:15.615 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:21:15.615 rtt min/avg/max/mdev = 0.059/0.059/0.059/0.000 ms 00:21:15.615 11:51:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@223 -- # ping -c 1 10.0.0.4 00:21:15.615 PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data. 00:21:15.615 64 bytes from 10.0.0.4: icmp_seq=1 ttl=64 time=0.058 ms 00:21:15.615 00:21:15.616 --- 10.0.0.4 ping statistics --- 00:21:15.616 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:21:15.616 rtt min/avg/max/mdev = 0.058/0.058/0.058/0.000 ms 00:21:15.616 11:51:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@224 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:21:15.616 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:21:15.616 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.024 ms 00:21:15.616 00:21:15.616 --- 10.0.0.1 ping statistics --- 00:21:15.616 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:21:15.616 rtt min/avg/max/mdev = 0.024/0.024/0.024/0.000 ms 00:21:15.616 11:51:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@225 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.2 00:21:15.616 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:21:15.616 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.054 ms 00:21:15.616 00:21:15.616 --- 10.0.0.2 ping statistics --- 00:21:15.616 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:21:15.616 rtt min/avg/max/mdev = 0.054/0.054/0.054/0.000 ms 00:21:15.616 11:51:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@227 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:21:15.616 11:51:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@461 -- # return 0 00:21:15.616 11:51:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:21:15.616 11:51:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:21:15.616 11:51:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:21:15.616 11:51:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:21:15.616 11:51:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:21:15.616 11:51:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:21:15.616 11:51:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:21:15.616 11:51:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@30 -- # nvmfappstart -m 0x2 00:21:15.616 11:51:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:21:15.616 11:51:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@726 -- # xtrace_disable 00:21:15.616 11:51:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:21:15.616 11:51:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@509 -- # nvmfpid=92972 00:21:15.616 11:51:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@508 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:21:15.616 11:51:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@510 -- # waitforlisten 92972 00:21:15.616 11:51:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@835 -- # '[' -z 92972 ']' 00:21:15.616 11:51:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:21:15.616 11:51:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@840 -- # local max_retries=100 00:21:15.616 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:21:15.616 11:51:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:21:15.616 11:51:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@844 -- # xtrace_disable 00:21:15.616 11:51:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:21:15.616 [2024-11-28 11:51:45.726986] Starting SPDK v25.01-pre git sha1 35cd3e84d / DPDK 24.11.0-rc4 initialization... 00:21:15.616 [2024-11-28 11:51:45.727070] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:21:15.874 [2024-11-28 11:51:45.854040] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.11.0-rc4 is used. 
There is no support for it in SPDK. Enabled only for validation. 00:21:15.874 [2024-11-28 11:51:45.884322] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:21:15.874 [2024-11-28 11:51:45.925258] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:21:15.874 [2024-11-28 11:51:45.925325] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:21:15.874 [2024-11-28 11:51:45.925351] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:21:15.874 [2024-11-28 11:51:45.925361] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:21:15.874 [2024-11-28 11:51:45.925370] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:21:15.874 [2024-11-28 11:51:45.925855] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:21:15.874 [2024-11-28 11:51:45.983534] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:21:16.132 11:51:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:21:16.132 11:51:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@868 -- # return 0 00:21:16.132 11:51:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:21:16.132 11:51:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@732 -- # xtrace_disable 00:21:16.132 11:51:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:21:16.132 11:51:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:21:16.132 11:51:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@32 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:21:16.132 11:51:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:16.132 11:51:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:21:16.132 [2024-11-28 11:51:46.097595] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:21:16.132 11:51:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:16.132 11:51:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@33 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2014-08.org.nvmexpress.discovery -t tcp -a 10.0.0.3 -s 8009 00:21:16.132 11:51:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:16.132 11:51:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:21:16.132 [2024-11-28 11:51:46.105733] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 8009 *** 00:21:16.132 11:51:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:16.132 11:51:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@35 -- # rpc_cmd bdev_null_create null0 1000 512 00:21:16.132 11:51:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:16.132 11:51:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:21:16.132 null0 00:21:16.132 11:51:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:16.132 11:51:46 
nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@36 -- # rpc_cmd bdev_null_create null1 1000 512 00:21:16.132 11:51:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:16.132 11:51:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:21:16.132 null1 00:21:16.132 11:51:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:16.132 11:51:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@37 -- # rpc_cmd bdev_wait_for_examine 00:21:16.132 11:51:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:16.132 11:51:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:21:16.132 11:51:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:16.132 11:51:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@45 -- # hostpid=92992 00:21:16.132 11:51:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@44 -- # /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -m 0x1 -r /tmp/host.sock 00:21:16.132 11:51:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@46 -- # waitforlisten 92992 /tmp/host.sock 00:21:16.132 11:51:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@835 -- # '[' -z 92992 ']' 00:21:16.132 11:51:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@839 -- # local rpc_addr=/tmp/host.sock 00:21:16.132 11:51:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@840 -- # local max_retries=100 00:21:16.132 Waiting for process to start up and listen on UNIX domain socket /tmp/host.sock... 00:21:16.132 11:51:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /tmp/host.sock...' 00:21:16.132 11:51:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@844 -- # xtrace_disable 00:21:16.132 11:51:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:21:16.132 [2024-11-28 11:51:46.199119] Starting SPDK v25.01-pre git sha1 35cd3e84d / DPDK 24.11.0-rc4 initialization... 00:21:16.132 [2024-11-28 11:51:46.199225] [ DPDK EAL parameters: nvmf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid92992 ] 00:21:16.391 [2024-11-28 11:51:46.324924] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.11.0-rc4 is used. There is no support for it in SPDK. Enabled only for validation. 
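From this point there are two SPDK processes in play: the target (nvmf_tgt -i 0 -e 0xFFFF -m 0x2, pid 92972) running inside the namespace and answering RPCs on the default /var/tmp/spdk.sock, and the host-side process (nvmf_tgt -m 0x1 -r /tmp/host.sock, pid 92992) that drives bdev_nvme discovery at host/discovery.sh@51 below. rpc_cmd selects the instance by socket: no -s goes to the target, -s /tmp/host.sock goes to the host process. A sketch of the equivalent direct calls, assuming rpc_cmd wraps scripts/rpc.py as in the stock autotest helpers:
# target-side RPC (default socket /var/tmp/spdk.sock), as issued earlier in this trace
/home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192
# host-side RPC goes to the second instance via its explicit socket
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.3 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test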
00:21:16.391 [2024-11-28 11:51:46.351187] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:21:16.391 [2024-11-28 11:51:46.384846] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:21:16.391 [2024-11-28 11:51:46.437476] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:21:16.391 11:51:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:21:16.391 11:51:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@868 -- # return 0 00:21:16.391 11:51:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@48 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; kill $hostpid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:21:16.391 11:51:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@50 -- # rpc_cmd -s /tmp/host.sock log_set_flag bdev_nvme 00:21:16.391 11:51:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:16.391 11:51:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:21:16.391 11:51:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:16.391 11:51:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@51 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.3 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test 00:21:16.391 11:51:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:16.391 11:51:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:21:16.655 11:51:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:16.655 11:51:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@72 -- # notify_id=0 00:21:16.655 11:51:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@83 -- # get_subsystem_names 00:21:16.655 11:51:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:21:16.655 11:51:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:21:16.655 11:51:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:21:16.655 11:51:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:21:16.655 11:51:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:16.655 11:51:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:21:16.655 11:51:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:16.655 11:51:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@83 -- # [[ '' == '' ]] 00:21:16.655 11:51:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@84 -- # get_bdev_list 00:21:16.655 11:51:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:21:16.655 11:51:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:21:16.655 11:51:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:16.656 11:51:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:21:16.656 11:51:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:21:16.656 11:51:46 
nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:21:16.656 11:51:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:16.656 11:51:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@84 -- # [[ '' == '' ]] 00:21:16.656 11:51:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@86 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 00:21:16.656 11:51:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:16.656 11:51:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:21:16.656 11:51:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:16.656 11:51:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@87 -- # get_subsystem_names 00:21:16.656 11:51:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:21:16.656 11:51:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:21:16.656 11:51:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:16.656 11:51:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:21:16.656 11:51:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:21:16.656 11:51:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:21:16.656 11:51:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:16.656 11:51:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@87 -- # [[ '' == '' ]] 00:21:16.656 11:51:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@88 -- # get_bdev_list 00:21:16.656 11:51:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:21:16.656 11:51:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:21:16.656 11:51:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:16.656 11:51:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:21:16.656 11:51:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:21:16.656 11:51:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:21:16.656 11:51:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:16.656 11:51:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@88 -- # [[ '' == '' ]] 00:21:16.656 11:51:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@90 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 null0 00:21:16.656 11:51:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:16.656 11:51:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:21:16.656 11:51:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:16.656 11:51:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@91 -- # get_subsystem_names 00:21:16.656 11:51:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:21:16.656 11:51:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 
-- # xtrace_disable 00:21:16.656 11:51:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:21:16.656 11:51:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:21:16.656 11:51:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:21:16.656 11:51:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:21:16.656 11:51:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:16.915 11:51:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@91 -- # [[ '' == '' ]] 00:21:16.915 11:51:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@92 -- # get_bdev_list 00:21:16.915 11:51:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:21:16.915 11:51:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:16.915 11:51:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:21:16.915 11:51:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:21:16.915 11:51:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:21:16.915 11:51:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:21:16.915 11:51:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:16.915 11:51:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@92 -- # [[ '' == '' ]] 00:21:16.915 11:51:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@96 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.3 -s 4420 00:21:16.915 11:51:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:16.915 11:51:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:21:16.915 [2024-11-28 11:51:46.874195] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:21:16.915 11:51:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:16.915 11:51:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@97 -- # get_subsystem_names 00:21:16.915 11:51:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:21:16.915 11:51:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:21:16.915 11:51:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:16.915 11:51:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:21:16.915 11:51:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:21:16.915 11:51:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:21:16.915 11:51:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:16.915 11:51:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@97 -- # [[ '' == '' ]] 00:21:16.915 11:51:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@98 -- # get_bdev_list 00:21:16.915 11:51:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:21:16.915 11:51:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- 
host/discovery.sh@55 -- # jq -r '.[].name' 00:21:16.915 11:51:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:16.915 11:51:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:21:16.915 11:51:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:21:16.915 11:51:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:21:16.915 11:51:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:16.915 11:51:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@98 -- # [[ '' == '' ]] 00:21:16.915 11:51:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@99 -- # is_notification_count_eq 0 00:21:16.915 11:51:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@79 -- # expected_count=0 00:21:16.915 11:51:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@80 -- # waitforcondition 'get_notification_count && ((notification_count == expected_count))' 00:21:16.915 11:51:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # local 'cond=get_notification_count && ((notification_count == expected_count))' 00:21:16.915 11:51:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # local max=10 00:21:16.915 11:51:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- )) 00:21:16.915 11:51:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval get_notification_count '&&' '((notification_count' == 'expected_count))' 00:21:16.915 11:51:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_notification_count 00:21:16.915 11:51:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 0 00:21:16.915 11:51:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:16.915 11:51:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:21:16.915 11:51:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # jq '. 
| length' 00:21:16.915 11:51:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:17.174 11:51:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # notification_count=0 00:21:17.175 11:51:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@75 -- # notify_id=0 00:21:17.175 11:51:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # (( notification_count == expected_count )) 00:21:17.175 11:51:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # return 0 00:21:17.175 11:51:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@103 -- # rpc_cmd nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode0 nqn.2021-12.io.spdk:test 00:21:17.175 11:51:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:17.175 11:51:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:21:17.175 11:51:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:17.175 11:51:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@105 -- # waitforcondition '[[ "$(get_subsystem_names)" == "nvme0" ]]' 00:21:17.175 11:51:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # local 'cond=[[ "$(get_subsystem_names)" == "nvme0" ]]' 00:21:17.175 11:51:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # local max=10 00:21:17.175 11:51:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- )) 00:21:17.175 11:51:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval '[[' '"$(get_subsystem_names)"' == '"nvme0"' ']]' 00:21:17.175 11:51:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_subsystem_names 00:21:17.175 11:51:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:21:17.175 11:51:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:21:17.175 11:51:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:17.175 11:51:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:21:17.175 11:51:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:21:17.175 11:51:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:21:17.175 11:51:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:17.175 11:51:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # [[ '' == \n\v\m\e\0 ]] 00:21:17.175 11:51:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@924 -- # sleep 1 00:21:17.433 [2024-11-28 11:51:47.521258] bdev_nvme.c:7484:discovery_attach_cb: *INFO*: Discovery[10.0.0.3:8009] discovery ctrlr attached 00:21:17.433 [2024-11-28 11:51:47.521278] bdev_nvme.c:7570:discovery_poller: *INFO*: Discovery[10.0.0.3:8009] discovery ctrlr connected 00:21:17.433 [2024-11-28 11:51:47.521504] bdev_nvme.c:7447:get_discovery_log_page: *INFO*: Discovery[10.0.0.3:8009] sent discovery log page command 00:21:17.433 [2024-11-28 11:51:47.527299] bdev_nvme.c:7413:discovery_log_page_cb: *INFO*: Discovery[10.0.0.3:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.3:4420 new subsystem nvme0 00:21:17.691 [2024-11-28 11:51:47.581854] 
bdev_nvme.c:5636:nvme_ctrlr_create_done: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] ctrlr was created to 10.0.0.3:4420 00:21:17.691 [2024-11-28 11:51:47.582966] bdev_nvme.c:1985:bdev_nvme_create_qpair: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Connecting qpair 0x24ea7b0:1 started. 00:21:17.691 [2024-11-28 11:51:47.584554] bdev_nvme.c:7303:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.3:8009] attach nvme0 done 00:21:17.691 [2024-11-28 11:51:47.584584] bdev_nvme.c:7262:discovery_remove_controllers: *INFO*: Discovery[10.0.0.3:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.3:4420 found again 00:21:17.691 [2024-11-28 11:51:47.589448] bdev_nvme.c:1791:bdev_nvme_disconnected_qpair_cb: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] qpair 0x24ea7b0 was disconnected and freed. delete nvme_qpair. 00:21:18.260 11:51:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- )) 00:21:18.260 11:51:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval '[[' '"$(get_subsystem_names)"' == '"nvme0"' ']]' 00:21:18.260 11:51:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_subsystem_names 00:21:18.260 11:51:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:21:18.260 11:51:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:21:18.260 11:51:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:21:18.260 11:51:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:21:18.260 11:51:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:18.260 11:51:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:21:18.260 11:51:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:18.260 11:51:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:18.260 11:51:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # return 0 00:21:18.260 11:51:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@106 -- # waitforcondition '[[ "$(get_bdev_list)" == "nvme0n1" ]]' 00:21:18.260 11:51:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # local 'cond=[[ "$(get_bdev_list)" == "nvme0n1" ]]' 00:21:18.260 11:51:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # local max=10 00:21:18.260 11:51:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- )) 00:21:18.260 11:51:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval '[[' '"$(get_bdev_list)"' == '"nvme0n1"' ']]' 00:21:18.260 11:51:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_bdev_list 00:21:18.260 11:51:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:21:18.260 11:51:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:21:18.260 11:51:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:18.260 11:51:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:21:18.260 11:51:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:21:18.260 11:51:48 
nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:21:18.260 11:51:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:18.261 11:51:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # [[ nvme0n1 == \n\v\m\e\0\n\1 ]] 00:21:18.261 11:51:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # return 0 00:21:18.261 11:51:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@107 -- # waitforcondition '[[ "$(get_subsystem_paths nvme0)" == "$NVMF_PORT" ]]' 00:21:18.261 11:51:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # local 'cond=[[ "$(get_subsystem_paths nvme0)" == "$NVMF_PORT" ]]' 00:21:18.261 11:51:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # local max=10 00:21:18.261 11:51:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- )) 00:21:18.261 11:51:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval '[[' '"$(get_subsystem_paths' 'nvme0)"' == '"$NVMF_PORT"' ']]' 00:21:18.261 11:51:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_subsystem_paths nvme0 00:21:18.261 11:51:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers -n nvme0 00:21:18.261 11:51:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:18.261 11:51:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:21:18.261 11:51:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # sort -n 00:21:18.261 11:51:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # jq -r '.[].ctrlrs[].trid.trsvcid' 00:21:18.261 11:51:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # xargs 00:21:18.261 11:51:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:18.261 11:51:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # [[ 4420 == \4\4\2\0 ]] 00:21:18.261 11:51:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # return 0 00:21:18.261 11:51:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@108 -- # is_notification_count_eq 1 00:21:18.261 11:51:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@79 -- # expected_count=1 00:21:18.261 11:51:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@80 -- # waitforcondition 'get_notification_count && ((notification_count == expected_count))' 00:21:18.261 11:51:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # local 'cond=get_notification_count && ((notification_count == expected_count))' 00:21:18.261 11:51:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # local max=10 00:21:18.261 11:51:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- )) 00:21:18.261 11:51:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval get_notification_count '&&' '((notification_count' == 'expected_count))' 00:21:18.261 11:51:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_notification_count 00:21:18.261 11:51:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 0 
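The get_subsystem_names, get_bdev_list and get_subsystem_paths helpers that the checks above keep evaluating are plain RPC-plus-jq pipelines; their shape can be read off the host/discovery.sh@59, @55 and @63 trace lines. A sketch reconstructed from those lines (the function bodies are pieced together from the trace rather than copied from discovery.sh, and the host socket is assumed to be /tmp/host.sock as above):
get_subsystem_names() {
    rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers | jq -r '.[].name' | sort | xargs
}
get_bdev_list() {
    rpc_cmd -s /tmp/host.sock bdev_get_bdevs | jq -r '.[].name' | sort | xargs
}
get_subsystem_paths() {
    # $1 is the controller name, e.g. nvme0; prints the listening ports (trsvcid) of its paths
    rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers -n "$1" | jq -r '.[].ctrlrs[].trid.trsvcid' | sort -n | xargs
}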
00:21:18.261 11:51:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:18.261 11:51:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:21:18.261 11:51:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # jq '. | length' 00:21:18.261 11:51:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:18.261 11:51:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # notification_count=1 00:21:18.261 11:51:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@75 -- # notify_id=1 00:21:18.261 11:51:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # (( notification_count == expected_count )) 00:21:18.261 11:51:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # return 0 00:21:18.261 11:51:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@111 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 null1 00:21:18.261 11:51:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:18.261 11:51:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:21:18.261 [2024-11-28 11:51:48.333227] bdev_nvme.c:1985:bdev_nvme_create_qpair: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Connecting qpair 0x24f8170:1 started. 00:21:18.261 11:51:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:18.261 11:51:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@113 -- # waitforcondition '[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]' 00:21:18.261 11:51:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # local 'cond=[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]' 00:21:18.261 11:51:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # local max=10 00:21:18.261 11:51:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- )) 00:21:18.261 11:51:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval '[[' '"$(get_bdev_list)"' == '"nvme0n1' 'nvme0n2"' ']]' 00:21:18.261 [2024-11-28 11:51:48.339346] bdev_nvme.c:1791:bdev_nvme_disconnected_qpair_cb: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] qpair 0x24f8170 was disconnected and freed. delete nvme_qpair. 
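The polling seen at common/autotest_common.sh@918-@924 and the notification bookkeeping at host/discovery.sh@74-@75 follow a simple pattern: retry the condition a bounded number of times, and count new RPC notifications starting from the last consumed notify_id (0, then 1, then 2 in this run). A sketch assembled from those trace lines; the retry limit and 1-second sleep are taken from the trace, and the exact function bodies in the repo may differ:
waitforcondition() {
    local cond=$1
    local max=10
    while (( max-- )); do
        eval "$cond" && return 0
        sleep 1
    done
    return 1
}
get_notification_count() {
    # count notifications newer than the last seen id, then advance the id
    notification_count=$(rpc_cmd -s /tmp/host.sock notify_get_notifications -i "$notify_id" | jq '. | length')
    notify_id=$((notify_id + notification_count))
}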
00:21:18.261 11:51:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_bdev_list 00:21:18.261 11:51:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:21:18.261 11:51:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:21:18.261 11:51:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:18.261 11:51:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:21:18.261 11:51:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:21:18.261 11:51:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:21:18.261 11:51:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:18.520 11:51:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # [[ nvme0n1 nvme0n2 == \n\v\m\e\0\n\1\ \n\v\m\e\0\n\2 ]] 00:21:18.520 11:51:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # return 0 00:21:18.520 11:51:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@114 -- # is_notification_count_eq 1 00:21:18.520 11:51:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@79 -- # expected_count=1 00:21:18.521 11:51:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@80 -- # waitforcondition 'get_notification_count && ((notification_count == expected_count))' 00:21:18.521 11:51:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # local 'cond=get_notification_count && ((notification_count == expected_count))' 00:21:18.521 11:51:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # local max=10 00:21:18.521 11:51:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- )) 00:21:18.521 11:51:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval get_notification_count '&&' '((notification_count' == 'expected_count))' 00:21:18.521 11:51:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_notification_count 00:21:18.521 11:51:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 1 00:21:18.521 11:51:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # jq '. 
| length' 00:21:18.521 11:51:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:18.521 11:51:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:21:18.521 11:51:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:18.521 11:51:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # notification_count=1 00:21:18.521 11:51:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@75 -- # notify_id=2 00:21:18.521 11:51:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # (( notification_count == expected_count )) 00:21:18.521 11:51:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # return 0 00:21:18.521 11:51:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@118 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.3 -s 4421 00:21:18.521 11:51:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:18.521 11:51:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:21:18.521 [2024-11-28 11:51:48.439799] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4421 *** 00:21:18.521 [2024-11-28 11:51:48.440179] bdev_nvme.c:7466:discovery_aer_cb: *INFO*: Discovery[10.0.0.3:8009] got aer 00:21:18.521 [2024-11-28 11:51:48.440202] bdev_nvme.c:7447:get_discovery_log_page: *INFO*: Discovery[10.0.0.3:8009] sent discovery log page command 00:21:18.521 11:51:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:18.521 11:51:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@120 -- # waitforcondition '[[ "$(get_subsystem_names)" == "nvme0" ]]' 00:21:18.521 11:51:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # local 'cond=[[ "$(get_subsystem_names)" == "nvme0" ]]' 00:21:18.521 11:51:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # local max=10 00:21:18.521 11:51:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- )) 00:21:18.521 11:51:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval '[[' '"$(get_subsystem_names)"' == '"nvme0"' ']]' 00:21:18.521 [2024-11-28 11:51:48.446200] bdev_nvme.c:7408:discovery_log_page_cb: *INFO*: Discovery[10.0.0.3:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.3:4421 new path for nvme0 00:21:18.521 11:51:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_subsystem_names 00:21:18.521 11:51:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:21:18.521 11:51:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:21:18.521 11:51:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:18.521 11:51:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:21:18.521 11:51:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:21:18.521 11:51:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:21:18.521 11:51:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:18.521 11:51:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- 
common/autotest_common.sh@921 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:18.521 11:51:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # return 0 00:21:18.521 11:51:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@121 -- # waitforcondition '[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]' 00:21:18.521 11:51:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # local 'cond=[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]' 00:21:18.521 11:51:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # local max=10 00:21:18.521 11:51:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- )) 00:21:18.521 11:51:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval '[[' '"$(get_bdev_list)"' == '"nvme0n1' 'nvme0n2"' ']]' 00:21:18.521 11:51:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_bdev_list 00:21:18.521 11:51:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:21:18.521 [2024-11-28 11:51:48.504600] bdev_nvme.c:5636:nvme_ctrlr_create_done: *INFO*: [nqn.2016-06.io.spdk:cnode0, 2] ctrlr was created to 10.0.0.3:4421 00:21:18.521 [2024-11-28 11:51:48.504643] bdev_nvme.c:7303:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.3:8009] attach nvme0 done 00:21:18.521 [2024-11-28 11:51:48.504654] bdev_nvme.c:7262:discovery_remove_controllers: *INFO*: Discovery[10.0.0.3:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.3:4420 found again 00:21:18.521 [2024-11-28 11:51:48.504660] bdev_nvme.c:7262:discovery_remove_controllers: *INFO*: Discovery[10.0.0.3:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.3:4421 found again 00:21:18.521 11:51:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:21:18.521 11:51:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:21:18.521 11:51:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:18.521 11:51:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:21:18.521 11:51:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:21:18.521 11:51:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:18.521 11:51:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # [[ nvme0n1 nvme0n2 == \n\v\m\e\0\n\1\ \n\v\m\e\0\n\2 ]] 00:21:18.521 11:51:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # return 0 00:21:18.521 11:51:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@122 -- # waitforcondition '[[ "$(get_subsystem_paths nvme0)" == "$NVMF_PORT $NVMF_SECOND_PORT" ]]' 00:21:18.521 11:51:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # local 'cond=[[ "$(get_subsystem_paths nvme0)" == "$NVMF_PORT $NVMF_SECOND_PORT" ]]' 00:21:18.521 11:51:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # local max=10 00:21:18.521 11:51:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- )) 00:21:18.521 11:51:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval '[[' '"$(get_subsystem_paths' 'nvme0)"' == '"$NVMF_PORT' '$NVMF_SECOND_PORT"' ']]' 00:21:18.521 11:51:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_subsystem_paths 
nvme0 00:21:18.521 11:51:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers -n nvme0 00:21:18.521 11:51:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:18.521 11:51:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:21:18.521 11:51:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # jq -r '.[].ctrlrs[].trid.trsvcid' 00:21:18.521 11:51:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # sort -n 00:21:18.521 11:51:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # xargs 00:21:18.521 11:51:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:18.521 11:51:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # [[ 4420 4421 == \4\4\2\0\ \4\4\2\1 ]] 00:21:18.521 11:51:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # return 0 00:21:18.521 11:51:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@123 -- # is_notification_count_eq 0 00:21:18.521 11:51:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@79 -- # expected_count=0 00:21:18.521 11:51:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@80 -- # waitforcondition 'get_notification_count && ((notification_count == expected_count))' 00:21:18.522 11:51:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # local 'cond=get_notification_count && ((notification_count == expected_count))' 00:21:18.522 11:51:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # local max=10 00:21:18.522 11:51:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- )) 00:21:18.522 11:51:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval get_notification_count '&&' '((notification_count' == 'expected_count))' 00:21:18.522 11:51:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_notification_count 00:21:18.522 11:51:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 2 00:21:18.522 11:51:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:18.522 11:51:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:21:18.522 11:51:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # jq '. 
| length' 00:21:18.522 11:51:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:18.781 11:51:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # notification_count=0 00:21:18.781 11:51:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@75 -- # notify_id=2 00:21:18.781 11:51:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # (( notification_count == expected_count )) 00:21:18.781 11:51:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # return 0 00:21:18.781 11:51:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@127 -- # rpc_cmd nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.3 -s 4420 00:21:18.781 11:51:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:18.781 11:51:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:21:18.781 [2024-11-28 11:51:48.672509] bdev_nvme.c:7466:discovery_aer_cb: *INFO*: Discovery[10.0.0.3:8009] got aer 00:21:18.781 [2024-11-28 11:51:48.672537] bdev_nvme.c:7447:get_discovery_log_page: *INFO*: Discovery[10.0.0.3:8009] sent discovery log page command 00:21:18.781 11:51:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:18.781 11:51:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@129 -- # waitforcondition '[[ "$(get_subsystem_names)" == "nvme0" ]]' 00:21:18.781 11:51:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # local 'cond=[[ "$(get_subsystem_names)" == "nvme0" ]]' 00:21:18.781 11:51:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # local max=10 00:21:18.781 11:51:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- )) 00:21:18.781 11:51:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval '[[' '"$(get_subsystem_names)"' == '"nvme0"' ']]' 00:21:18.781 [2024-11-28 11:51:48.678526] bdev_nvme.c:7271:discovery_remove_controllers: *INFO*: Discovery[10.0.0.3:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.3:4420 not found 00:21:18.781 [2024-11-28 11:51:48.678676] bdev_nvme.c:7262:discovery_remove_controllers: *INFO*: Discovery[10.0.0.3:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.3:4421 found again 00:21:18.781 11:51:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_subsystem_names 00:21:18.781 [2024-11-28 11:51:48.678852] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:21:18.781 [2024-11-28 11:51:48.678882] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:18.781 [2024-11-28 11:51:48.678910] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:21:18.781 [2024-11-28 11:51:48.678919] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:18.781 [2024-11-28 11:51:48.678943] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:21:18.781 [2024-11-28 11:51:48.678965] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 
m:0 dnr:0 00:21:18.781 [2024-11-28 11:51:48.678974] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:21:18.781 [2024-11-28 11:51:48.678982] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:18.781 [2024-11-28 11:51:48.678990] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24babc0 is same with the state(6) to be set 00:21:18.781 11:51:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:21:18.781 11:51:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:21:18.781 11:51:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:18.781 11:51:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:21:18.781 11:51:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:21:18.781 11:51:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:21:18.781 11:51:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:18.781 11:51:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:18.781 11:51:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # return 0 00:21:18.781 11:51:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@130 -- # waitforcondition '[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]' 00:21:18.781 11:51:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # local 'cond=[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]' 00:21:18.781 11:51:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # local max=10 00:21:18.781 11:51:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- )) 00:21:18.781 11:51:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval '[[' '"$(get_bdev_list)"' == '"nvme0n1' 'nvme0n2"' ']]' 00:21:18.782 11:51:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_bdev_list 00:21:18.782 11:51:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:21:18.782 11:51:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:21:18.782 11:51:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:18.782 11:51:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:21:18.782 11:51:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:21:18.782 11:51:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:21:18.782 11:51:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:18.782 11:51:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # [[ nvme0n1 nvme0n2 == \n\v\m\e\0\n\1\ \n\v\m\e\0\n\2 ]] 00:21:18.782 11:51:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # return 0 00:21:18.782 11:51:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@131 -- # waitforcondition '[[ "$(get_subsystem_paths nvme0)" == "$NVMF_SECOND_PORT" ]]' 00:21:18.782 11:51:48 
nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # local 'cond=[[ "$(get_subsystem_paths nvme0)" == "$NVMF_SECOND_PORT" ]]' 00:21:18.782 11:51:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # local max=10 00:21:18.782 11:51:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- )) 00:21:18.782 11:51:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval '[[' '"$(get_subsystem_paths' 'nvme0)"' == '"$NVMF_SECOND_PORT"' ']]' 00:21:18.782 11:51:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_subsystem_paths nvme0 00:21:18.782 11:51:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # jq -r '.[].ctrlrs[].trid.trsvcid' 00:21:18.782 11:51:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers -n nvme0 00:21:18.782 11:51:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # sort -n 00:21:18.782 11:51:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:18.782 11:51:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # xargs 00:21:18.782 11:51:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:21:18.782 11:51:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:18.782 11:51:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # [[ 4421 == \4\4\2\1 ]] 00:21:18.782 11:51:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # return 0 00:21:18.782 11:51:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@132 -- # is_notification_count_eq 0 00:21:18.782 11:51:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@79 -- # expected_count=0 00:21:18.782 11:51:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@80 -- # waitforcondition 'get_notification_count && ((notification_count == expected_count))' 00:21:18.782 11:51:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # local 'cond=get_notification_count && ((notification_count == expected_count))' 00:21:18.782 11:51:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # local max=10 00:21:18.782 11:51:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- )) 00:21:18.782 11:51:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval get_notification_count '&&' '((notification_count' == 'expected_count))' 00:21:18.782 11:51:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_notification_count 00:21:18.782 11:51:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 2 00:21:18.782 11:51:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # jq '. 
| length' 00:21:18.782 11:51:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:18.782 11:51:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:21:18.782 11:51:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:18.782 11:51:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # notification_count=0 00:21:18.782 11:51:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@75 -- # notify_id=2 00:21:18.782 11:51:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # (( notification_count == expected_count )) 00:21:18.782 11:51:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # return 0 00:21:18.782 11:51:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@134 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_stop_discovery -b nvme 00:21:18.782 11:51:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:18.782 11:51:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:21:18.782 11:51:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:18.782 11:51:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@136 -- # waitforcondition '[[ "$(get_subsystem_names)" == "" ]]' 00:21:18.782 11:51:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # local 'cond=[[ "$(get_subsystem_names)" == "" ]]' 00:21:18.782 11:51:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # local max=10 00:21:18.782 11:51:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- )) 00:21:18.782 11:51:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval '[[' '"$(get_subsystem_names)"' == '""' ']]' 00:21:18.782 11:51:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_subsystem_names 00:21:18.782 11:51:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:21:18.782 11:51:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:21:18.782 11:51:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:18.782 11:51:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:21:18.782 11:51:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:21:18.782 11:51:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:21:19.041 11:51:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:19.041 11:51:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # [[ '' == '' ]] 00:21:19.041 11:51:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # return 0 00:21:19.041 11:51:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@137 -- # waitforcondition '[[ "$(get_bdev_list)" == "" ]]' 00:21:19.041 11:51:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # local 'cond=[[ "$(get_bdev_list)" == "" ]]' 00:21:19.041 11:51:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # local max=10 00:21:19.041 11:51:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 
-- # (( max-- )) 00:21:19.041 11:51:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval '[[' '"$(get_bdev_list)"' == '""' ']]' 00:21:19.041 11:51:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_bdev_list 00:21:19.041 11:51:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:21:19.041 11:51:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:19.041 11:51:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:21:19.041 11:51:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:21:19.041 11:51:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:21:19.041 11:51:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:21:19.041 11:51:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:19.041 11:51:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # [[ '' == '' ]] 00:21:19.041 11:51:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # return 0 00:21:19.041 11:51:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@138 -- # is_notification_count_eq 2 00:21:19.041 11:51:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@79 -- # expected_count=2 00:21:19.041 11:51:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@80 -- # waitforcondition 'get_notification_count && ((notification_count == expected_count))' 00:21:19.041 11:51:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # local 'cond=get_notification_count && ((notification_count == expected_count))' 00:21:19.041 11:51:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # local max=10 00:21:19.041 11:51:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- )) 00:21:19.041 11:51:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval get_notification_count '&&' '((notification_count' == 'expected_count))' 00:21:19.041 11:51:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_notification_count 00:21:19.041 11:51:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 2 00:21:19.041 11:51:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # jq '. 
| length' 00:21:19.041 11:51:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:19.041 11:51:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:21:19.041 11:51:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:19.041 11:51:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # notification_count=2 00:21:19.041 11:51:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@75 -- # notify_id=4 00:21:19.041 11:51:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # (( notification_count == expected_count )) 00:21:19.041 11:51:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # return 0 00:21:19.041 11:51:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@141 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.3 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:21:19.041 11:51:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:19.041 11:51:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:21:19.976 [2024-11-28 11:51:50.061117] bdev_nvme.c:7484:discovery_attach_cb: *INFO*: Discovery[10.0.0.3:8009] discovery ctrlr attached 00:21:19.976 [2024-11-28 11:51:50.061137] bdev_nvme.c:7570:discovery_poller: *INFO*: Discovery[10.0.0.3:8009] discovery ctrlr connected 00:21:19.976 [2024-11-28 11:51:50.061152] bdev_nvme.c:7447:get_discovery_log_page: *INFO*: Discovery[10.0.0.3:8009] sent discovery log page command 00:21:19.976 [2024-11-28 11:51:50.067155] bdev_nvme.c:7413:discovery_log_page_cb: *INFO*: Discovery[10.0.0.3:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.3:4421 new subsystem nvme0 00:21:20.235 [2024-11-28 11:51:50.125527] bdev_nvme.c:5636:nvme_ctrlr_create_done: *INFO*: [nqn.2016-06.io.spdk:cnode0, 3] ctrlr was created to 10.0.0.3:4421 00:21:20.235 [2024-11-28 11:51:50.126327] bdev_nvme.c:1985:bdev_nvme_create_qpair: *INFO*: [nqn.2016-06.io.spdk:cnode0, 3] Connecting qpair 0x2625650:1 started. 00:21:20.235 [2024-11-28 11:51:50.128405] bdev_nvme.c:7303:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.3:8009] attach nvme0 done 00:21:20.235 [2024-11-28 11:51:50.128578] bdev_nvme.c:7262:discovery_remove_controllers: *INFO*: Discovery[10.0.0.3:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.3:4421 found again 00:21:20.235 11:51:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:20.235 [2024-11-28 11:51:50.130057] bdev_nvme.c:1791:bdev_nvme_disconnected_qpair_cb: *INFO*: [nqn.2016-06.io.spdk:cnode0, 3] qpair 0x2625650 was disconnected and freed. delete nvme_qpair. 
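From host/discovery.sh@141 onward the test drives bdev_nvme_start_discovery through the rpc_cmd wrapper against the host app's RPC socket. A rough stand-alone equivalent of the calls exercised in this stretch is sketched below, reusing the exact arguments from the trace; the $rpc_py path is an assumption taken from the rpc_py assignment that appears later in this log.

    rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py   # assumed; set this way for the next test below

    # Re-issuing the start while a discovery service is already attached on
    # 10.0.0.3:8009 is rejected with JSON-RPC error -17 ("File exists"), as the
    # request/response dumps below show for both the "nvme" and "nvme_second" names:
    $rpc_py -s /tmp/host.sock bdev_nvme_start_discovery -b nvme \
        -t tcp -a 10.0.0.3 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w
    $rpc_py -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second \
        -t tcp -a 10.0.0.3 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w

    # With a 3000 ms attach timeout and nothing listening on port 8010, the connect()
    # attempts fail (errno 111) and the RPC returns -110 ("Connection timed out"):
    $rpc_py -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second \
        -t tcp -a 10.0.0.3 -s 8010 -f ipv4 -q nqn.2021-12.io.spdk:test -T 3000
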
00:21:20.235 11:51:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@143 -- # NOT rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.3 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:21:20.235 11:51:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@652 -- # local es=0 00:21:20.235 11:51:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.3 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:21:20.235 11:51:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:21:20.235 11:51:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:21:20.235 11:51:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:21:20.235 11:51:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:21:20.235 11:51:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@655 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.3 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:21:20.235 11:51:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:20.235 11:51:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:21:20.235 request: 00:21:20.235 { 00:21:20.235 "name": "nvme", 00:21:20.235 "trtype": "tcp", 00:21:20.235 "traddr": "10.0.0.3", 00:21:20.235 "adrfam": "ipv4", 00:21:20.235 "trsvcid": "8009", 00:21:20.235 "hostnqn": "nqn.2021-12.io.spdk:test", 00:21:20.235 "wait_for_attach": true, 00:21:20.235 "method": "bdev_nvme_start_discovery", 00:21:20.235 "req_id": 1 00:21:20.235 } 00:21:20.235 Got JSON-RPC error response 00:21:20.235 response: 00:21:20.235 { 00:21:20.235 "code": -17, 00:21:20.235 "message": "File exists" 00:21:20.235 } 00:21:20.235 11:51:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:21:20.235 11:51:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@655 -- # es=1 00:21:20.235 11:51:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:21:20.235 11:51:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:21:20.235 11:51:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:21:20.235 11:51:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@145 -- # get_discovery_ctrlrs 00:21:20.235 11:51:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_discovery_info 00:21:20.235 11:51:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # jq -r '.[].name' 00:21:20.235 11:51:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # xargs 00:21:20.235 11:51:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:20.235 11:51:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:21:20.235 11:51:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # sort 00:21:20.235 11:51:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:20.235 11:51:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- 
host/discovery.sh@145 -- # [[ nvme == \n\v\m\e ]] 00:21:20.235 11:51:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@146 -- # get_bdev_list 00:21:20.235 11:51:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:21:20.235 11:51:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:21:20.235 11:51:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:21:20.235 11:51:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:21:20.235 11:51:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:20.235 11:51:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:21:20.235 11:51:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:20.235 11:51:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@146 -- # [[ nvme0n1 nvme0n2 == \n\v\m\e\0\n\1\ \n\v\m\e\0\n\2 ]] 00:21:20.235 11:51:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@149 -- # NOT rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.3 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:21:20.235 11:51:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@652 -- # local es=0 00:21:20.235 11:51:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.3 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:21:20.235 11:51:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:21:20.235 11:51:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:21:20.235 11:51:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:21:20.235 11:51:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:21:20.235 11:51:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@655 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.3 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:21:20.235 11:51:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:20.235 11:51:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:21:20.235 request: 00:21:20.235 { 00:21:20.235 "name": "nvme_second", 00:21:20.235 "trtype": "tcp", 00:21:20.235 "traddr": "10.0.0.3", 00:21:20.236 "adrfam": "ipv4", 00:21:20.236 "trsvcid": "8009", 00:21:20.236 "hostnqn": "nqn.2021-12.io.spdk:test", 00:21:20.236 "wait_for_attach": true, 00:21:20.236 "method": "bdev_nvme_start_discovery", 00:21:20.236 "req_id": 1 00:21:20.236 } 00:21:20.236 Got JSON-RPC error response 00:21:20.236 response: 00:21:20.236 { 00:21:20.236 "code": -17, 00:21:20.236 "message": "File exists" 00:21:20.236 } 00:21:20.236 11:51:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:21:20.236 11:51:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@655 -- # es=1 00:21:20.236 11:51:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:21:20.236 11:51:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@674 -- # 
[[ -n '' ]] 00:21:20.236 11:51:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:21:20.236 11:51:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@151 -- # get_discovery_ctrlrs 00:21:20.236 11:51:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_discovery_info 00:21:20.236 11:51:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:20.236 11:51:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:21:20.236 11:51:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # sort 00:21:20.236 11:51:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # xargs 00:21:20.236 11:51:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # jq -r '.[].name' 00:21:20.236 11:51:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:20.236 11:51:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@151 -- # [[ nvme == \n\v\m\e ]] 00:21:20.236 11:51:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@152 -- # get_bdev_list 00:21:20.236 11:51:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:21:20.236 11:51:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:20.236 11:51:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:21:20.236 11:51:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:21:20.236 11:51:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:21:20.236 11:51:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:21:20.495 11:51:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:20.495 11:51:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@152 -- # [[ nvme0n1 nvme0n2 == \n\v\m\e\0\n\1\ \n\v\m\e\0\n\2 ]] 00:21:20.495 11:51:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@155 -- # NOT rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.3 -s 8010 -f ipv4 -q nqn.2021-12.io.spdk:test -T 3000 00:21:20.495 11:51:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@652 -- # local es=0 00:21:20.495 11:51:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.3 -s 8010 -f ipv4 -q nqn.2021-12.io.spdk:test -T 3000 00:21:20.495 11:51:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:21:20.495 11:51:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:21:20.495 11:51:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:21:20.495 11:51:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:21:20.495 11:51:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@655 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.3 -s 8010 -f ipv4 -q nqn.2021-12.io.spdk:test -T 3000 00:21:20.495 11:51:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # 
xtrace_disable 00:21:20.495 11:51:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:21:21.430 [2024-11-28 11:51:51.404944] uring.c: 664:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:21:21.430 [2024-11-28 11:51:51.405003] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24f9610 with addr=10.0.0.3, port=8010 00:21:21.430 [2024-11-28 11:51:51.405021] nvme_tcp.c:2612:nvme_tcp_ctrlr_construct: *ERROR*: failed to create admin qpair 00:21:21.430 [2024-11-28 11:51:51.405030] nvme.c: 842:nvme_probe_internal: *ERROR*: NVMe ctrlr scan failed 00:21:21.430 [2024-11-28 11:51:51.405037] bdev_nvme.c:7552:discovery_poller: *ERROR*: Discovery[10.0.0.3:8010] could not start discovery connect 00:21:22.368 [2024-11-28 11:51:52.404937] uring.c: 664:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:21:22.368 [2024-11-28 11:51:52.405123] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24f9610 with addr=10.0.0.3, port=8010 00:21:22.368 [2024-11-28 11:51:52.405149] nvme_tcp.c:2612:nvme_tcp_ctrlr_construct: *ERROR*: failed to create admin qpair 00:21:22.368 [2024-11-28 11:51:52.405158] nvme.c: 842:nvme_probe_internal: *ERROR*: NVMe ctrlr scan failed 00:21:22.368 [2024-11-28 11:51:52.405166] bdev_nvme.c:7552:discovery_poller: *ERROR*: Discovery[10.0.0.3:8010] could not start discovery connect 00:21:23.340 [2024-11-28 11:51:53.404871] bdev_nvme.c:7527:discovery_poller: *ERROR*: Discovery[10.0.0.3:8010] timed out while attaching discovery ctrlr 00:21:23.340 request: 00:21:23.340 { 00:21:23.340 "name": "nvme_second", 00:21:23.340 "trtype": "tcp", 00:21:23.340 "traddr": "10.0.0.3", 00:21:23.340 "adrfam": "ipv4", 00:21:23.340 "trsvcid": "8010", 00:21:23.340 "hostnqn": "nqn.2021-12.io.spdk:test", 00:21:23.340 "wait_for_attach": false, 00:21:23.340 "attach_timeout_ms": 3000, 00:21:23.340 "method": "bdev_nvme_start_discovery", 00:21:23.340 "req_id": 1 00:21:23.340 } 00:21:23.340 Got JSON-RPC error response 00:21:23.340 response: 00:21:23.340 { 00:21:23.340 "code": -110, 00:21:23.340 "message": "Connection timed out" 00:21:23.340 } 00:21:23.340 11:51:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:21:23.340 11:51:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@655 -- # es=1 00:21:23.340 11:51:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:21:23.340 11:51:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:21:23.340 11:51:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:21:23.340 11:51:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@157 -- # get_discovery_ctrlrs 00:21:23.340 11:51:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_discovery_info 00:21:23.340 11:51:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # jq -r '.[].name' 00:21:23.340 11:51:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:23.340 11:51:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:21:23.340 11:51:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # xargs 00:21:23.340 11:51:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # sort 00:21:23.340 11:51:53 
nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:23.340 11:51:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@157 -- # [[ nvme == \n\v\m\e ]] 00:21:23.340 11:51:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@159 -- # trap - SIGINT SIGTERM EXIT 00:21:23.340 11:51:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@161 -- # kill 92992 00:21:23.340 11:51:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@162 -- # nvmftestfini 00:21:23.340 11:51:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@516 -- # nvmfcleanup 00:21:23.340 11:51:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@121 -- # sync 00:21:23.599 11:51:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:21:23.599 11:51:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@124 -- # set +e 00:21:23.599 11:51:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@125 -- # for i in {1..20} 00:21:23.599 11:51:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:21:23.599 rmmod nvme_tcp 00:21:23.599 rmmod nvme_fabrics 00:21:23.599 rmmod nvme_keyring 00:21:23.599 11:51:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:21:23.600 11:51:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@128 -- # set -e 00:21:23.600 11:51:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@129 -- # return 0 00:21:23.600 11:51:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@517 -- # '[' -n 92972 ']' 00:21:23.600 11:51:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@518 -- # killprocess 92972 00:21:23.600 11:51:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@954 -- # '[' -z 92972 ']' 00:21:23.600 11:51:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@958 -- # kill -0 92972 00:21:23.600 11:51:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@959 -- # uname 00:21:23.600 11:51:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:21:23.600 11:51:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 92972 00:21:23.600 killing process with pid 92972 00:21:23.600 11:51:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:21:23.600 11:51:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:21:23.600 11:51:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@972 -- # echo 'killing process with pid 92972' 00:21:23.600 11:51:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@973 -- # kill 92972 00:21:23.600 11:51:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@978 -- # wait 92972 00:21:23.860 11:51:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:21:23.860 11:51:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:21:23.860 11:51:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:21:23.860 11:51:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@297 -- # iptr 00:21:23.860 11:51:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@791 -- # iptables-save 00:21:23.860 11:51:53 nvmf_tcp.nvmf_host.nvmf_host_discovery 
-- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:21:23.860 11:51:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@791 -- # iptables-restore 00:21:23.860 11:51:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@298 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:21:23.860 11:51:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@299 -- # nvmf_veth_fini 00:21:23.860 11:51:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@233 -- # ip link set nvmf_init_br nomaster 00:21:23.860 11:51:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@234 -- # ip link set nvmf_init_br2 nomaster 00:21:23.860 11:51:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@235 -- # ip link set nvmf_tgt_br nomaster 00:21:23.860 11:51:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@236 -- # ip link set nvmf_tgt_br2 nomaster 00:21:23.860 11:51:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@237 -- # ip link set nvmf_init_br down 00:21:23.860 11:51:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@238 -- # ip link set nvmf_init_br2 down 00:21:23.860 11:51:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@239 -- # ip link set nvmf_tgt_br down 00:21:23.860 11:51:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@240 -- # ip link set nvmf_tgt_br2 down 00:21:23.860 11:51:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@241 -- # ip link delete nvmf_br type bridge 00:21:24.119 11:51:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@242 -- # ip link delete nvmf_init_if 00:21:24.119 11:51:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@243 -- # ip link delete nvmf_init_if2 00:21:24.119 11:51:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@244 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:21:24.119 11:51:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@245 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:21:24.119 11:51:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@246 -- # remove_spdk_ns 00:21:24.119 11:51:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:21:24.119 11:51:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:21:24.119 11:51:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:21:24.119 11:51:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@300 -- # return 0 00:21:24.119 00:21:24.119 real 0m9.034s 00:21:24.119 user 0m16.933s 00:21:24.119 sys 0m1.990s 00:21:24.119 ************************************ 00:21:24.119 END TEST nvmf_host_discovery 00:21:24.119 ************************************ 00:21:24.119 11:51:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1130 -- # xtrace_disable 00:21:24.120 11:51:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:21:24.120 11:51:54 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@27 -- # run_test nvmf_host_multipath_status /home/vagrant/spdk_repo/spdk/test/nvmf/host/multipath_status.sh --transport=tcp 00:21:24.120 11:51:54 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:21:24.120 11:51:54 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1111 -- # xtrace_disable 00:21:24.120 11:51:54 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:21:24.120 ************************************ 
00:21:24.120 START TEST nvmf_host_multipath_status 00:21:24.120 ************************************ 00:21:24.120 11:51:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/host/multipath_status.sh --transport=tcp 00:21:24.120 * Looking for test storage... 00:21:24.380 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/host 00:21:24.380 11:51:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:21:24.380 11:51:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1693 -- # lcov --version 00:21:24.380 11:51:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:21:24.380 11:51:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:21:24.380 11:51:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:21:24.380 11:51:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@333 -- # local ver1 ver1_l 00:21:24.380 11:51:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@334 -- # local ver2 ver2_l 00:21:24.380 11:51:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@336 -- # IFS=.-: 00:21:24.380 11:51:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@336 -- # read -ra ver1 00:21:24.380 11:51:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@337 -- # IFS=.-: 00:21:24.380 11:51:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@337 -- # read -ra ver2 00:21:24.380 11:51:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@338 -- # local 'op=<' 00:21:24.380 11:51:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@340 -- # ver1_l=2 00:21:24.380 11:51:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@341 -- # ver2_l=1 00:21:24.380 11:51:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:21:24.380 11:51:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@344 -- # case "$op" in 00:21:24.380 11:51:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@345 -- # : 1 00:21:24.380 11:51:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@364 -- # (( v = 0 )) 00:21:24.380 11:51:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:21:24.380 11:51:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@365 -- # decimal 1 00:21:24.380 11:51:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@353 -- # local d=1 00:21:24.380 11:51:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:21:24.380 11:51:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@355 -- # echo 1 00:21:24.380 11:51:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@365 -- # ver1[v]=1 00:21:24.380 11:51:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@366 -- # decimal 2 00:21:24.380 11:51:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@353 -- # local d=2 00:21:24.381 11:51:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:21:24.381 11:51:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@355 -- # echo 2 00:21:24.381 11:51:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@366 -- # ver2[v]=2 00:21:24.381 11:51:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:21:24.381 11:51:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:21:24.381 11:51:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@368 -- # return 0 00:21:24.381 11:51:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:21:24.381 11:51:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:21:24.381 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:21:24.381 --rc genhtml_branch_coverage=1 00:21:24.381 --rc genhtml_function_coverage=1 00:21:24.381 --rc genhtml_legend=1 00:21:24.381 --rc geninfo_all_blocks=1 00:21:24.381 --rc geninfo_unexecuted_blocks=1 00:21:24.381 00:21:24.381 ' 00:21:24.381 11:51:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:21:24.381 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:21:24.381 --rc genhtml_branch_coverage=1 00:21:24.381 --rc genhtml_function_coverage=1 00:21:24.381 --rc genhtml_legend=1 00:21:24.381 --rc geninfo_all_blocks=1 00:21:24.381 --rc geninfo_unexecuted_blocks=1 00:21:24.381 00:21:24.381 ' 00:21:24.381 11:51:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:21:24.381 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:21:24.381 --rc genhtml_branch_coverage=1 00:21:24.381 --rc genhtml_function_coverage=1 00:21:24.381 --rc genhtml_legend=1 00:21:24.381 --rc geninfo_all_blocks=1 00:21:24.381 --rc geninfo_unexecuted_blocks=1 00:21:24.381 00:21:24.381 ' 00:21:24.381 11:51:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:21:24.381 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:21:24.381 --rc genhtml_branch_coverage=1 00:21:24.381 --rc genhtml_function_coverage=1 00:21:24.381 --rc genhtml_legend=1 00:21:24.381 --rc geninfo_all_blocks=1 00:21:24.381 --rc geninfo_unexecuted_blocks=1 00:21:24.381 00:21:24.381 ' 00:21:24.381 11:51:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:21:24.381 11:51:54 
nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@7 -- # uname -s 00:21:24.381 11:51:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:21:24.381 11:51:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:21:24.381 11:51:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:21:24.381 11:51:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:21:24.381 11:51:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:21:24.381 11:51:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:21:24.381 11:51:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:21:24.381 11:51:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:21:24.381 11:51:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:21:24.381 11:51:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:21:24.381 11:51:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:f820f793-c892-4aa4-a8a4-5ed3fda41d6c 00:21:24.381 11:51:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@18 -- # NVME_HOSTID=f820f793-c892-4aa4-a8a4-5ed3fda41d6c 00:21:24.381 11:51:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:21:24.381 11:51:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:21:24.381 11:51:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:21:24.381 11:51:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:21:24.381 11:51:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:21:24.381 11:51:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@15 -- # shopt -s extglob 00:21:24.381 11:51:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:21:24.381 11:51:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:21:24.381 11:51:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:21:24.381 11:51:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:24.381 11:51:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:24.381 11:51:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:24.381 11:51:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- paths/export.sh@5 -- # export PATH 00:21:24.381 11:51:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:24.381 11:51:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@51 -- # : 0 00:21:24.381 11:51:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:21:24.381 11:51:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:21:24.381 11:51:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:21:24.381 11:51:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:21:24.381 11:51:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:21:24.381 11:51:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:21:24.381 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:21:24.381 11:51:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:21:24.381 11:51:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:21:24.381 11:51:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@55 -- # have_pci_nics=0 00:21:24.381 11:51:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@12 -- # MALLOC_BDEV_SIZE=64 00:21:24.381 11:51:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- 
host/multipath_status.sh@13 -- # MALLOC_BLOCK_SIZE=512 00:21:24.381 11:51:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@15 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:21:24.381 11:51:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@16 -- # bpf_sh=/home/vagrant/spdk_repo/spdk/scripts/bpftrace.sh 00:21:24.381 11:51:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@18 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:21:24.381 11:51:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@21 -- # NQN=nqn.2016-06.io.spdk:cnode1 00:21:24.381 11:51:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@31 -- # nvmftestinit 00:21:24.381 11:51:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:21:24.381 11:51:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:21:24.381 11:51:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@476 -- # prepare_net_devs 00:21:24.381 11:51:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@438 -- # local -g is_hw=no 00:21:24.381 11:51:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@440 -- # remove_spdk_ns 00:21:24.381 11:51:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:21:24.381 11:51:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:21:24.381 11:51:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:21:24.381 11:51:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@442 -- # [[ virt != virt ]] 00:21:24.381 11:51:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@444 -- # [[ no == yes ]] 00:21:24.381 11:51:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@451 -- # [[ virt == phy ]] 00:21:24.381 11:51:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@454 -- # [[ virt == phy-fallback ]] 00:21:24.381 11:51:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@459 -- # [[ tcp == tcp ]] 00:21:24.382 11:51:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@460 -- # nvmf_veth_init 00:21:24.382 11:51:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@145 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:21:24.382 11:51:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@146 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.2 00:21:24.382 11:51:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@147 -- # NVMF_FIRST_TARGET_IP=10.0.0.3 00:21:24.382 11:51:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@148 -- # NVMF_SECOND_TARGET_IP=10.0.0.4 00:21:24.382 11:51:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@149 -- # NVMF_INITIATOR_IP=10.0.0.1 00:21:24.382 11:51:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@150 -- # NVMF_BRIDGE=nvmf_br 00:21:24.382 11:51:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@151 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:21:24.382 11:51:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@152 -- # NVMF_INITIATOR_INTERFACE2=nvmf_init_if2 00:21:24.382 11:51:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@153 -- # 
NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:21:24.382 11:51:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@154 -- # NVMF_INITIATOR_BRIDGE2=nvmf_init_br2 00:21:24.382 11:51:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@155 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:21:24.382 11:51:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@156 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:21:24.382 11:51:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@157 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:21:24.382 11:51:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@158 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:21:24.382 11:51:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@159 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:21:24.382 11:51:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@160 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:21:24.382 11:51:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@162 -- # ip link set nvmf_init_br nomaster 00:21:24.382 Cannot find device "nvmf_init_br" 00:21:24.382 11:51:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@162 -- # true 00:21:24.382 11:51:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@163 -- # ip link set nvmf_init_br2 nomaster 00:21:24.382 Cannot find device "nvmf_init_br2" 00:21:24.382 11:51:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@163 -- # true 00:21:24.382 11:51:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@164 -- # ip link set nvmf_tgt_br nomaster 00:21:24.382 Cannot find device "nvmf_tgt_br" 00:21:24.382 11:51:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@164 -- # true 00:21:24.382 11:51:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@165 -- # ip link set nvmf_tgt_br2 nomaster 00:21:24.382 Cannot find device "nvmf_tgt_br2" 00:21:24.382 11:51:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@165 -- # true 00:21:24.382 11:51:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@166 -- # ip link set nvmf_init_br down 00:21:24.382 Cannot find device "nvmf_init_br" 00:21:24.382 11:51:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@166 -- # true 00:21:24.382 11:51:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@167 -- # ip link set nvmf_init_br2 down 00:21:24.382 Cannot find device "nvmf_init_br2" 00:21:24.382 11:51:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@167 -- # true 00:21:24.382 11:51:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@168 -- # ip link set nvmf_tgt_br down 00:21:24.382 Cannot find device "nvmf_tgt_br" 00:21:24.382 11:51:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@168 -- # true 00:21:24.382 11:51:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@169 -- # ip link set nvmf_tgt_br2 down 00:21:24.382 Cannot find device "nvmf_tgt_br2" 00:21:24.641 11:51:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@169 -- # true 00:21:24.641 11:51:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@170 -- # ip link delete nvmf_br type bridge 00:21:24.641 Cannot find device "nvmf_br" 00:21:24.641 11:51:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@170 -- # true 00:21:24.641 11:51:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@171 -- # ip link delete 
nvmf_init_if 00:21:24.641 Cannot find device "nvmf_init_if" 00:21:24.641 11:51:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@171 -- # true 00:21:24.641 11:51:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@172 -- # ip link delete nvmf_init_if2 00:21:24.641 Cannot find device "nvmf_init_if2" 00:21:24.641 11:51:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@172 -- # true 00:21:24.641 11:51:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@173 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:21:24.641 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:21:24.641 11:51:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@173 -- # true 00:21:24.641 11:51:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@174 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:21:24.641 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:21:24.641 11:51:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@174 -- # true 00:21:24.641 11:51:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@177 -- # ip netns add nvmf_tgt_ns_spdk 00:21:24.641 11:51:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@180 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:21:24.641 11:51:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@181 -- # ip link add nvmf_init_if2 type veth peer name nvmf_init_br2 00:21:24.641 11:51:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@182 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:21:24.641 11:51:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@183 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:21:24.641 11:51:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:21:24.641 11:51:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@187 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:21:24.641 11:51:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@190 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:21:24.641 11:51:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@191 -- # ip addr add 10.0.0.2/24 dev nvmf_init_if2 00:21:24.641 11:51:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@192 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if 00:21:24.641 11:51:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@193 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2 00:21:24.641 11:51:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@196 -- # ip link set nvmf_init_if up 00:21:24.641 11:51:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@197 -- # ip link set nvmf_init_if2 up 00:21:24.641 11:51:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@198 -- # ip link set nvmf_init_br up 00:21:24.641 11:51:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@199 -- # ip link set nvmf_init_br2 up 00:21:24.641 11:51:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@200 -- # ip link set nvmf_tgt_br up 00:21:24.641 11:51:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@201 -- # ip link set nvmf_tgt_br2 up 00:21:24.641 11:51:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- 
nvmf/common.sh@202 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:21:24.641 11:51:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@203 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:21:24.641 11:51:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@204 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:21:24.641 11:51:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@207 -- # ip link add nvmf_br type bridge 00:21:24.641 11:51:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@208 -- # ip link set nvmf_br up 00:21:24.641 11:51:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@211 -- # ip link set nvmf_init_br master nvmf_br 00:21:24.641 11:51:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@212 -- # ip link set nvmf_init_br2 master nvmf_br 00:21:24.641 11:51:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@213 -- # ip link set nvmf_tgt_br master nvmf_br 00:21:24.641 11:51:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@214 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:21:24.641 11:51:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@217 -- # ipts -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:21:24.641 11:51:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT' 00:21:24.641 11:51:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@218 -- # ipts -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT 00:21:24.641 11:51:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT' 00:21:24.641 11:51:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@219 -- # ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:21:24.641 11:51:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@790 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT' 00:21:24.641 11:51:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@222 -- # ping -c 1 10.0.0.3 00:21:24.641 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:21:24.641 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.055 ms 00:21:24.641 00:21:24.641 --- 10.0.0.3 ping statistics --- 00:21:24.641 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:21:24.641 rtt min/avg/max/mdev = 0.055/0.055/0.055/0.000 ms 00:21:24.641 11:51:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@223 -- # ping -c 1 10.0.0.4 00:21:24.641 PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data. 00:21:24.641 64 bytes from 10.0.0.4: icmp_seq=1 ttl=64 time=0.043 ms 00:21:24.641 00:21:24.641 --- 10.0.0.4 ping statistics --- 00:21:24.641 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:21:24.641 rtt min/avg/max/mdev = 0.043/0.043/0.043/0.000 ms 00:21:24.641 11:51:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@224 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:21:24.901 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:21:24.902 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.022 ms 00:21:24.902 00:21:24.902 --- 10.0.0.1 ping statistics --- 00:21:24.902 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:21:24.902 rtt min/avg/max/mdev = 0.022/0.022/0.022/0.000 ms 00:21:24.902 11:51:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@225 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.2 00:21:24.902 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:21:24.902 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.058 ms 00:21:24.902 00:21:24.902 --- 10.0.0.2 ping statistics --- 00:21:24.902 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:21:24.902 rtt min/avg/max/mdev = 0.058/0.058/0.058/0.000 ms 00:21:24.902 11:51:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@227 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:21:24.902 11:51:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@461 -- # return 0 00:21:24.902 11:51:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:21:24.902 11:51:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:21:24.902 11:51:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:21:24.902 11:51:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:21:24.902 11:51:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:21:24.902 11:51:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:21:24.902 11:51:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:21:24.902 11:51:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@33 -- # nvmfappstart -m 0x3 00:21:24.902 11:51:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:21:24.902 11:51:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@726 -- # xtrace_disable 00:21:24.902 11:51:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@10 -- # set +x 00:21:24.902 11:51:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@509 -- # nvmfpid=93486 00:21:24.902 11:51:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@508 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x3 00:21:24.902 11:51:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@510 -- # waitforlisten 93486 00:21:24.902 11:51:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@835 -- # '[' -z 93486 ']' 00:21:24.902 11:51:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:21:24.902 11:51:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@840 -- # local max_retries=100 00:21:24.902 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:21:24.902 11:51:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
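The topology nvmf_veth_init builds above is: two initiator-side veth pairs and two target-side veth pairs, the target ends moved into the nvmf_tgt_ns_spdk network namespace, all bridge ends enslaved to nvmf_br, with 10.0.0.1/.2 on the initiator interfaces and 10.0.0.3/.4 inside the namespace, plus iptables ACCEPT rules for the NVMe/TCP port and bridge-local forwarding, verified by the four pings. A condensed sketch showing just the first interface of each pair (the real helper repeats this for nvmf_init_if2/nvmf_tgt_if2 with 10.0.0.2 and 10.0.0.4):

  ip netns add nvmf_tgt_ns_spdk
  ip link add nvmf_init_if type veth peer name nvmf_init_br     # initiator pair
  ip link add nvmf_tgt_if  type veth peer name nvmf_tgt_br      # target pair
  ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk                # target end lives in the namespace

  ip addr add 10.0.0.1/24 dev nvmf_init_if
  ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if

  ip link add nvmf_br type bridge
  for dev in nvmf_init_if nvmf_init_br nvmf_tgt_br nvmf_br; do ip link set "$dev" up; done
  ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
  ip link set nvmf_init_br master nvmf_br                       # bridge both pairs together
  ip link set nvmf_tgt_br  master nvmf_br

  iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT
  iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT
  ping -c 1 10.0.0.3                                            # initiator -> target reachability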
00:21:24.902 11:51:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@844 -- # xtrace_disable 00:21:24.902 11:51:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@10 -- # set +x 00:21:24.902 [2024-11-28 11:51:54.871556] Starting SPDK v25.01-pre git sha1 35cd3e84d / DPDK 24.11.0-rc4 initialization... 00:21:24.902 [2024-11-28 11:51:54.871654] [ DPDK EAL parameters: nvmf -c 0x3 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:21:24.902 [2024-11-28 11:51:54.993497] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.11.0-rc4 is used. There is no support for it in SPDK. Enabled only for validation. 00:21:24.902 [2024-11-28 11:51:55.010842] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:21:25.177 [2024-11-28 11:51:55.046751] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:21:25.177 [2024-11-28 11:51:55.047012] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:21:25.177 [2024-11-28 11:51:55.047081] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:21:25.177 [2024-11-28 11:51:55.047145] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:21:25.177 [2024-11-28 11:51:55.047205] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:21:25.178 [2024-11-28 11:51:55.048286] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:21:25.178 [2024-11-28 11:51:55.048568] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:21:25.178 [2024-11-28 11:51:55.108400] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:21:25.178 11:51:55 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:21:25.178 11:51:55 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@868 -- # return 0 00:21:25.178 11:51:55 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:21:25.178 11:51:55 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@732 -- # xtrace_disable 00:21:25.178 11:51:55 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@10 -- # set +x 00:21:25.178 11:51:55 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:21:25.178 11:51:55 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@34 -- # nvmfapp_pid=93486 00:21:25.178 11:51:55 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@36 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:21:25.443 [2024-11-28 11:51:55.513171] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:21:25.443 11:51:55 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@37 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0 00:21:26.010 Malloc0 00:21:26.010 11:51:55 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@39 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s 
SPDK00000000000001 -r -m 2 00:21:26.010 11:51:56 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@40 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:21:26.270 11:51:56 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@41 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 00:21:26.529 [2024-11-28 11:51:56.540944] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:21:26.529 11:51:56 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@42 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4421 00:21:26.788 [2024-11-28 11:51:56.749162] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4421 *** 00:21:26.788 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:21:26.788 11:51:56 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@45 -- # bdevperf_pid=93528 00:21:26.788 11:51:56 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@44 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 90 00:21:26.788 11:51:56 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@47 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:21:26.788 11:51:56 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@48 -- # waitforlisten 93528 /var/tmp/bdevperf.sock 00:21:26.788 11:51:56 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@835 -- # '[' -z 93528 ']' 00:21:26.788 11:51:56 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:21:26.788 11:51:56 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@840 -- # local max_retries=100 00:21:26.788 11:51:56 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 
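The target started above (nvmf_tgt running inside the namespace, RPC on the default /var/tmp/spdk.sock) is then configured over JSON-RPC: a TCP transport, a 64 MiB malloc bdev with 512-byte blocks, subsystem nqn.2016-06.io.spdk:cnode1 with ANA reporting enabled, that bdev as its namespace, and two TCP listeners on 10.0.0.3, ports 4420 and 4421, which the multipath test flips between. Condensed from the trace:

  rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py

  $rpc nvmf_create_transport -t tcp -o -u 8192      # transport options as used by the test
  $rpc bdev_malloc_create 64 512 -b Malloc0         # 64 MiB RAM bdev, 512 B block size
  $rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 \
       -a -s SPDK00000000000001 -r -m 2             # allow any host, serial, ANA reporting, max 2 namespaces
  $rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
  $rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420
  $rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4421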
00:21:26.788 11:51:56 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@844 -- # xtrace_disable 00:21:26.788 11:51:56 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@10 -- # set +x 00:21:27.726 11:51:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:21:27.726 11:51:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@868 -- # return 0 00:21:27.726 11:51:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@52 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_set_options -r -1 00:21:27.985 11:51:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@55 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -x multipath -l -1 -o 10 00:21:28.245 Nvme0n1 00:21:28.245 11:51:58 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@56 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.3 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -x multipath -l -1 -o 10 00:21:28.504 Nvme0n1 00:21:28.504 11:51:58 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@76 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -t 120 -s /var/tmp/bdevperf.sock perform_tests 00:21:28.504 11:51:58 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@78 -- # sleep 2 00:21:30.411 11:52:00 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@90 -- # set_ANA_state optimized optimized 00:21:30.411 11:52:00 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 -n optimized 00:21:30.670 11:52:00 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4421 -n optimized 00:21:30.928 11:52:00 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@91 -- # sleep 1 00:21:31.861 11:52:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@92 -- # check_status true false true true true true 00:21:31.861 11:52:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current true 00:21:31.861 11:52:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:21:31.861 11:52:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:21:32.121 11:52:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:21:32.121 11:52:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current false 00:21:32.121 11:52:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:21:32.121 11:52:02 
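bdevperf was launched with -z (start idle and wait for RPC) on /var/tmp/bdevperf.sock, and the two listeners are attached as two paths of one controller by calling bdev_nvme_attach_controller twice with the same name and -x multipath. A sketch of those host-side RPCs; the bpf_rpc wrapper is shorthand introduced here, and backgrounding perform_tests is an assumption about how the script keeps I/O running while it flips ANA states:

  bpf_rpc() { /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock "$@"; }

  bpf_rpc bdev_nvme_set_options -r -1               # retry option the test sets before attaching

  # Attach both listeners as paths of the same Nvme0 controller; -x multipath
  # makes the second attach register an extra path instead of failing as a duplicate.
  for port in 4420 4421; do
      bpf_rpc bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.3 -s "$port" \
          -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -x multipath -l -1 -o 10
  done

  # Start the 128-deep, 4 KiB verify workload bdevperf was configured with (-q 128 -o 4096 -w verify -t 90);
  # backgrounded (assumption) so the script can change ANA states while I/O runs.
  /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -t 120 -s /var/tmp/bdevperf.sock perform_tests &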
nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:21:32.689 11:52:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:21:32.689 11:52:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:21:32.689 11:52:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:21:32.689 11:52:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:21:32.689 11:52:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:21:32.689 11:52:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:21:32.689 11:52:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:21:32.689 11:52:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:21:32.948 11:52:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:21:32.948 11:52:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:21:32.948 11:52:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:21:32.948 11:52:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:21:33.205 11:52:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:21:33.205 11:52:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:21:33.205 11:52:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:21:33.205 11:52:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:21:33.464 11:52:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:21:33.464 11:52:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@94 -- # set_ANA_state non_optimized optimized 00:21:33.464 11:52:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 -n non_optimized 00:21:33.723 11:52:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4421 -n optimized 00:21:33.982 11:52:04 
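check_status takes six booleans, pairing each field with both ports in the order the trace shows: current for the 4420 and 4421 paths, then connected for both, then accessible for both. Each underlying port_status probe dumps bdev_nvme_get_io_paths from the bdevperf socket and filters one field with jq. A sketch of that probe, assuming a single matching io_path per listener (which is the case in this test):

  # port_status <trsvcid> <field> <expected> -- one probe of the loop traced above.
  port_status() {
      local port=$1 field=$2 expected=$3 actual
      actual=$(/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths \
          | jq -r ".poll_groups[].io_paths[] | select (.transport.trsvcid==\"$port\").$field")
      [[ $actual == "$expected" ]]
  }

  # check_status true false true true true true, as above, expands to:
  port_status 4420 current    true  && port_status 4421 current    false &&
  port_status 4420 connected  true  && port_status 4421 connected  true  &&
  port_status 4420 accessible true  && port_status 4421 accessible true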
nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@95 -- # sleep 1 00:21:34.920 11:52:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@96 -- # check_status false true true true true true 00:21:34.920 11:52:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current false 00:21:34.920 11:52:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:21:34.920 11:52:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:21:35.496 11:52:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:21:35.496 11:52:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current true 00:21:35.496 11:52:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:21:35.496 11:52:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:21:35.496 11:52:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:21:35.496 11:52:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:21:35.496 11:52:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:21:35.496 11:52:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:21:35.755 11:52:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:21:35.755 11:52:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:21:35.755 11:52:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:21:35.755 11:52:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:21:36.013 11:52:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:21:36.013 11:52:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:21:36.013 11:52:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:21:36.013 11:52:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:21:36.273 11:52:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:21:36.273 11:52:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- 
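From here the cycle repeats with different ANA combinations: set_ANA_state pushes a new state to each listener on the target, the script sleeps a second so the host can pick up the ANA change, and check_status asserts which path is now current and accessible. A sketch of the state-flip helper as traced:

  # set_ANA_state <state for 4420> <state for 4421>; states used in this test are
  # optimized, non_optimized and inaccessible.
  set_ANA_state() {
      local rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
      $rpc nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 \
          -t tcp -a 10.0.0.3 -s 4420 -n "$1"
      $rpc nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 \
          -t tcp -a 10.0.0.3 -s 4421 -n "$2"
  }

  set_ANA_state non_optimized inaccessible   # e.g. force all I/O onto the 4420 path
  sleep 1                                    # let the host observe the new ANA state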
host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:21:36.273 11:52:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:21:36.273 11:52:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:21:36.532 11:52:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:21:36.532 11:52:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@100 -- # set_ANA_state non_optimized non_optimized 00:21:36.532 11:52:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 -n non_optimized 00:21:36.791 11:52:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4421 -n non_optimized 00:21:37.051 11:52:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@101 -- # sleep 1 00:21:37.989 11:52:08 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@102 -- # check_status true false true true true true 00:21:37.989 11:52:08 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current true 00:21:37.989 11:52:08 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:21:37.989 11:52:08 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:21:38.249 11:52:08 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:21:38.249 11:52:08 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current false 00:21:38.249 11:52:08 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:21:38.249 11:52:08 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:21:38.818 11:52:08 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:21:38.818 11:52:08 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:21:38.818 11:52:08 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:21:38.818 11:52:08 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:21:38.818 11:52:08 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:21:38.818 11:52:08 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 
connected true 00:21:38.818 11:52:08 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:21:38.818 11:52:08 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:21:39.078 11:52:09 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:21:39.078 11:52:09 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:21:39.078 11:52:09 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:21:39.078 11:52:09 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:21:39.337 11:52:09 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:21:39.337 11:52:09 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:21:39.337 11:52:09 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:21:39.337 11:52:09 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:21:39.596 11:52:09 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:21:39.596 11:52:09 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@104 -- # set_ANA_state non_optimized inaccessible 00:21:39.596 11:52:09 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 -n non_optimized 00:21:39.855 11:52:09 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4421 -n inaccessible 00:21:40.114 11:52:10 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@105 -- # sleep 1 00:21:41.051 11:52:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@106 -- # check_status true false true true true false 00:21:41.051 11:52:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current true 00:21:41.051 11:52:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:21:41.052 11:52:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:21:41.619 11:52:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:21:41.619 11:52:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current false 00:21:41.619 11:52:11 
nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:21:41.619 11:52:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:21:41.619 11:52:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:21:41.619 11:52:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:21:41.619 11:52:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:21:41.619 11:52:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:21:41.879 11:52:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:21:41.879 11:52:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:21:41.879 11:52:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:21:41.879 11:52:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:21:42.138 11:52:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:21:42.138 11:52:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:21:42.138 11:52:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:21:42.138 11:52:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:21:42.401 11:52:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:21:42.401 11:52:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible false 00:21:42.401 11:52:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:21:42.401 11:52:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:21:42.660 11:52:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:21:42.660 11:52:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@108 -- # set_ANA_state inaccessible inaccessible 00:21:42.660 11:52:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 -n inaccessible 00:21:42.920 11:52:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- 
host/multipath_status.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4421 -n inaccessible 00:21:43.178 11:52:13 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@109 -- # sleep 1 00:21:44.115 11:52:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@110 -- # check_status false false true true false false 00:21:44.115 11:52:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current false 00:21:44.115 11:52:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:21:44.115 11:52:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:21:44.374 11:52:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:21:44.374 11:52:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current false 00:21:44.374 11:52:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:21:44.374 11:52:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:21:44.632 11:52:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:21:44.632 11:52:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:21:44.632 11:52:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:21:44.632 11:52:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:21:44.890 11:52:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:21:44.890 11:52:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:21:44.890 11:52:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:21:44.890 11:52:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:21:45.148 11:52:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:21:45.148 11:52:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible false 00:21:45.148 11:52:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:21:45.148 11:52:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select 
(.transport.trsvcid=="4420").accessible' 00:21:45.405 11:52:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:21:45.405 11:52:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible false 00:21:45.405 11:52:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:21:45.405 11:52:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:21:45.663 11:52:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:21:45.663 11:52:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@112 -- # set_ANA_state inaccessible optimized 00:21:45.663 11:52:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 -n inaccessible 00:21:45.922 11:52:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4421 -n optimized 00:21:46.181 11:52:16 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@113 -- # sleep 1 00:21:47.118 11:52:17 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@114 -- # check_status false true true true false true 00:21:47.118 11:52:17 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current false 00:21:47.118 11:52:17 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:21:47.118 11:52:17 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:21:47.377 11:52:17 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:21:47.377 11:52:17 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current true 00:21:47.377 11:52:17 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:21:47.377 11:52:17 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:21:47.637 11:52:17 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:21:47.637 11:52:17 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:21:47.637 11:52:17 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:21:47.637 11:52:17 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 
00:21:47.897 11:52:17 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:21:47.897 11:52:17 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:21:47.897 11:52:17 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:21:47.897 11:52:17 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:21:48.156 11:52:18 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:21:48.156 11:52:18 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible false 00:21:48.156 11:52:18 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:21:48.156 11:52:18 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:21:48.415 11:52:18 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:21:48.415 11:52:18 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:21:48.415 11:52:18 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:21:48.415 11:52:18 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:21:48.674 11:52:18 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:21:48.674 11:52:18 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@116 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_set_multipath_policy -b Nvme0n1 -p active_active 00:21:48.933 11:52:18 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@119 -- # set_ANA_state optimized optimized 00:21:48.933 11:52:18 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 -n optimized 00:21:49.193 11:52:19 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4421 -n optimized 00:21:49.453 11:52:19 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@120 -- # sleep 1 00:21:50.391 11:52:20 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@121 -- # check_status true true true true true true 00:21:50.391 11:52:20 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current true 00:21:50.391 11:52:20 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 
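After exercising the single-path ANA combinations, the test switches the bdev to an active/active multipath policy (active/passive being the default), so with both listeners optimized the subsequent check_status true true true true true true expects current to be true on both the 4420 and 4421 paths at once:

  /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock \
      bdev_nvme_set_multipath_policy -b Nvme0n1 -p active_active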
00:21:50.391 11:52:20 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:21:50.649 11:52:20 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:21:50.649 11:52:20 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current true 00:21:50.649 11:52:20 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:21:50.649 11:52:20 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:21:50.909 11:52:20 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:21:50.909 11:52:20 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:21:50.909 11:52:20 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:21:50.909 11:52:20 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:21:51.168 11:52:21 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:21:51.168 11:52:21 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:21:51.168 11:52:21 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:21:51.168 11:52:21 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:21:51.438 11:52:21 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:21:51.438 11:52:21 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:21:51.438 11:52:21 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:21:51.438 11:52:21 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:21:51.438 11:52:21 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:21:51.438 11:52:21 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:21:51.438 11:52:21 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:21:51.438 11:52:21 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:21:51.735 11:52:21 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:21:51.736 
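Note: the trace above is test/nvmf/host/multipath_status.sh cycling the two listener ports through ANA-state combinations and asserting the per-path flags that the bdevperf-side NVMe bdev reports. The rpc.py invocations and jq filters in the sketch below are copied from the trace; the surrounding shell structure is a reconstruction for readability, not the verbatim script source.

    rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
    bdevperf_rpc_sock=/var/tmp/bdevperf.sock

    # Read one flag (current/connected/accessible) of the I/O path that uses
    # the given listener port, as reported over the bdevperf RPC socket, and
    # compare it with the expected value.
    port_status() {
        local port=$1 attr=$2 expected=$3
        local actual
        actual=$("$rpc_py" -s "$bdevperf_rpc_sock" bdev_nvme_get_io_paths |
            jq -r ".poll_groups[].io_paths[] | select(.transport.trsvcid==\"$port\").$attr")
        [[ "$actual" == "$expected" ]]
    }

    # Set the ANA state of the 4420 and 4421 listeners on the target side.
    set_ANA_state() {
        "$rpc_py" nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 \
            -t tcp -a 10.0.0.3 -s 4420 -n "$1"
        "$rpc_py" nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 \
            -t tcp -a 10.0.0.3 -s 4421 -n "$2"
    }

    # Example mirroring the @123/@125 step that follows: with the multipath
    # policy already switched to active_active (@116 above), making 4420
    # non_optimized and 4421 optimized leaves only 4421 "current" while both
    # ports remain connected and accessible.
    set_ANA_state non_optimized optimized
    sleep 1
    port_status 4420 current false
    port_status 4421 current true
    port_status 4420 connected true && port_status 4421 connected true
    port_status 4420 accessible true && port_status 4421 accessible true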
11:52:21 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@123 -- # set_ANA_state non_optimized optimized 00:21:51.736 11:52:21 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 -n non_optimized 00:21:51.997 11:52:22 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4421 -n optimized 00:21:52.256 11:52:22 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@124 -- # sleep 1 00:21:53.191 11:52:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@125 -- # check_status false true true true true true 00:21:53.191 11:52:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current false 00:21:53.191 11:52:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:21:53.191 11:52:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:21:53.450 11:52:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:21:53.450 11:52:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current true 00:21:53.450 11:52:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:21:53.450 11:52:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:21:53.709 11:52:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:21:53.709 11:52:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:21:53.709 11:52:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:21:53.709 11:52:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:21:53.968 11:52:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:21:53.968 11:52:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:21:53.968 11:52:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:21:53.968 11:52:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:21:54.227 11:52:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:21:54.227 11:52:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- 
host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:21:54.227 11:52:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:21:54.227 11:52:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:21:54.485 11:52:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:21:54.485 11:52:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:21:54.485 11:52:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:21:54.485 11:52:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:21:54.744 11:52:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:21:54.744 11:52:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@129 -- # set_ANA_state non_optimized non_optimized 00:21:54.744 11:52:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 -n non_optimized 00:21:55.003 11:52:25 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4421 -n non_optimized 00:21:55.261 11:52:25 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@130 -- # sleep 1 00:21:56.196 11:52:26 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@131 -- # check_status true true true true true true 00:21:56.196 11:52:26 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current true 00:21:56.196 11:52:26 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:21:56.196 11:52:26 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:21:56.456 11:52:26 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:21:56.456 11:52:26 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current true 00:21:56.456 11:52:26 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:21:56.456 11:52:26 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:21:56.715 11:52:26 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:21:56.715 11:52:26 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 
connected true 00:21:56.715 11:52:26 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:21:56.715 11:52:26 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:21:56.973 11:52:27 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:21:56.973 11:52:27 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:21:56.973 11:52:27 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:21:56.973 11:52:27 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:21:57.231 11:52:27 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:21:57.231 11:52:27 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:21:57.231 11:52:27 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:21:57.231 11:52:27 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:21:57.490 11:52:27 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:21:57.490 11:52:27 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:21:57.490 11:52:27 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:21:57.490 11:52:27 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:21:57.748 11:52:27 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:21:57.748 11:52:27 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@133 -- # set_ANA_state non_optimized inaccessible 00:21:57.748 11:52:27 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 -n non_optimized 00:21:58.007 11:52:27 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4421 -n inaccessible 00:21:58.265 11:52:28 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@134 -- # sleep 1 00:21:59.201 11:52:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@135 -- # check_status true false true true true false 00:21:59.201 11:52:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current true 00:21:59.201 11:52:29 
nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:21:59.201 11:52:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:21:59.460 11:52:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:21:59.460 11:52:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current false 00:21:59.460 11:52:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:21:59.460 11:52:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:21:59.719 11:52:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:21:59.719 11:52:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:21:59.719 11:52:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:21:59.719 11:52:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:22:00.286 11:52:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:22:00.286 11:52:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:22:00.286 11:52:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:22:00.286 11:52:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:22:00.286 11:52:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:22:00.286 11:52:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:22:00.286 11:52:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:22:00.286 11:52:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:22:00.545 11:52:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:22:00.545 11:52:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible false 00:22:00.545 11:52:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:22:00.545 11:52:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select 
(.transport.trsvcid=="4421").accessible' 00:22:00.804 11:52:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:22:00.804 11:52:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@137 -- # killprocess 93528 00:22:00.804 11:52:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@954 -- # '[' -z 93528 ']' 00:22:00.804 11:52:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@958 -- # kill -0 93528 00:22:00.804 11:52:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@959 -- # uname 00:22:00.804 11:52:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:22:00.804 11:52:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 93528 00:22:00.804 11:52:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@960 -- # process_name=reactor_2 00:22:00.804 killing process with pid 93528 00:22:00.804 11:52:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@964 -- # '[' reactor_2 = sudo ']' 00:22:00.804 11:52:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@972 -- # echo 'killing process with pid 93528' 00:22:00.804 11:52:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@973 -- # kill 93528 00:22:00.804 11:52:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@978 -- # wait 93528 00:22:00.804 { 00:22:00.804 "results": [ 00:22:00.804 { 00:22:00.804 "job": "Nvme0n1", 00:22:00.804 "core_mask": "0x4", 00:22:00.804 "workload": "verify", 00:22:00.805 "status": "terminated", 00:22:00.805 "verify_range": { 00:22:00.805 "start": 0, 00:22:00.805 "length": 16384 00:22:00.805 }, 00:22:00.805 "queue_depth": 128, 00:22:00.805 "io_size": 4096, 00:22:00.805 "runtime": 32.212913, 00:22:00.805 "iops": 8318.713678579767, 00:22:00.805 "mibps": 32.49497530695221, 00:22:00.805 "io_failed": 0, 00:22:00.805 "io_timeout": 0, 00:22:00.805 "avg_latency_us": 15361.18217846638, 00:22:00.805 "min_latency_us": 841.5418181818181, 00:22:00.805 "max_latency_us": 4057035.869090909 00:22:00.805 } 00:22:00.805 ], 00:22:00.805 "core_count": 1 00:22:00.805 } 00:22:01.070 11:52:31 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@139 -- # wait 93528 00:22:01.070 11:52:31 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@141 -- # cat /home/vagrant/spdk_repo/spdk/test/nvmf/host/try.txt 00:22:01.070 [2024-11-28 11:51:56.828108] Starting SPDK v25.01-pre git sha1 35cd3e84d / DPDK 24.11.0-rc4 initialization... 00:22:01.070 [2024-11-28 11:51:56.828207] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid93528 ] 00:22:01.070 [2024-11-28 11:51:56.954479] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.11.0-rc4 is used. There is no support for it in SPDK. Enabled only for validation. 
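Note: the JSON block above is bdevperf's final summary for the Nvme0n1 verify job (queue depth 128, 4 KiB I/Os) after the @137 killprocess step terminated it, and the lines that follow replay the bdevperf output captured in try.txt. Cross-checking the summary, 8318.713678579767 IOPS * 4096 B per I/O / 2^20 ≈ 32.49 MiB/s, which matches the reported "mibps" field. If that summary were saved to a file (results.json is a hypothetical name here; the test itself does not write one), the headline figures could be pulled out with jq, which this test already relies on:

    # results.json: hypothetical copy of the summary JSON printed above.
    jq -r '.results[0]
           | "\(.job): \(.iops) IOPS, \(.mibps) MiB/s, avg latency \(.avg_latency_us) us over \(.runtime) s"' \
        results.json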
00:22:01.070 [2024-11-28 11:51:56.972638] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:22:01.070 [2024-11-28 11:51:57.012896] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:22:01.070 [2024-11-28 11:51:57.064216] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:22:01.070 Running I/O for 90 seconds... 00:22:01.070 7573.00 IOPS, 29.58 MiB/s [2024-11-28T11:52:31.196Z] 7626.50 IOPS, 29.79 MiB/s [2024-11-28T11:52:31.196Z] 7644.00 IOPS, 29.86 MiB/s [2024-11-28T11:52:31.196Z] 7589.25 IOPS, 29.65 MiB/s [2024-11-28T11:52:31.196Z] 7804.60 IOPS, 30.49 MiB/s [2024-11-28T11:52:31.196Z] 8090.83 IOPS, 31.60 MiB/s [2024-11-28T11:52:31.196Z] 8280.14 IOPS, 32.34 MiB/s [2024-11-28T11:52:31.196Z] 8420.12 IOPS, 32.89 MiB/s [2024-11-28T11:52:31.196Z] 8550.22 IOPS, 33.40 MiB/s [2024-11-28T11:52:31.196Z] 8686.40 IOPS, 33.93 MiB/s [2024-11-28T11:52:31.196Z] 8780.27 IOPS, 34.30 MiB/s [2024-11-28T11:52:31.196Z] 8884.58 IOPS, 34.71 MiB/s [2024-11-28T11:52:31.196Z] 8979.00 IOPS, 35.07 MiB/s [2024-11-28T11:52:31.196Z] 9043.29 IOPS, 35.33 MiB/s [2024-11-28T11:52:31.196Z] [2024-11-28 11:52:12.900956] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:115 nsid:1 lba:120624 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:01.070 [2024-11-28 11:52:12.901014] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:115 cdw0:0 sqhd:0072 p:0 m:0 dnr:0 00:22:01.070 [2024-11-28 11:52:12.901047] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:120632 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:01.070 [2024-11-28 11:52:12.901064] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:20 cdw0:0 sqhd:0073 p:0 m:0 dnr:0 00:22:01.070 [2024-11-28 11:52:12.901085] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:96 nsid:1 lba:120640 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:01.070 [2024-11-28 11:52:12.901100] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:96 cdw0:0 sqhd:0074 p:0 m:0 dnr:0 00:22:01.070 [2024-11-28 11:52:12.901120] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:87 nsid:1 lba:120648 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:01.070 [2024-11-28 11:52:12.901135] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:87 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:22:01.070 [2024-11-28 11:52:12.901155] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:77 nsid:1 lba:120656 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:01.070 [2024-11-28 11:52:12.901169] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:77 cdw0:0 sqhd:0076 p:0 m:0 dnr:0 00:22:01.070 [2024-11-28 11:52:12.901190] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:120664 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:01.070 [2024-11-28 11:52:12.901205] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:29 cdw0:0 sqhd:0077 p:0 m:0 dnr:0 00:22:01.070 [2024-11-28 11:52:12.901224] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:114 nsid:1 lba:120672 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:01.070 [2024-11-28 11:52:12.901239] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:114 cdw0:0 sqhd:0078 p:0 m:0 dnr:0 00:22:01.070 [2024-11-28 11:52:12.901258] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:89 nsid:1 lba:120680 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:01.070 [2024-11-28 11:52:12.901272] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:89 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 00:22:01.070 [2024-11-28 11:52:12.901291] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:103 nsid:1 lba:120176 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:01.071 [2024-11-28 11:52:12.901358] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:103 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:22:01.071 [2024-11-28 11:52:12.901392] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:112 nsid:1 lba:120184 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:01.071 [2024-11-28 11:52:12.901424] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:112 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:22:01.071 [2024-11-28 11:52:12.901445] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:120192 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:01.071 [2024-11-28 11:52:12.901459] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:40 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:22:01.071 [2024-11-28 11:52:12.901479] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:120200 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:01.071 [2024-11-28 11:52:12.901494] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:19 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:22:01.071 [2024-11-28 11:52:12.901514] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:120208 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:01.071 [2024-11-28 11:52:12.901529] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:46 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:22:01.071 [2024-11-28 11:52:12.901550] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:120216 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:01.071 [2024-11-28 11:52:12.901567] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:43 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:22:01.071 [2024-11-28 11:52:12.901588] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:120224 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:01.071 [2024-11-28 11:52:12.901618] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:15 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:01.071 [2024-11-28 11:52:12.901638] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:105 nsid:1 lba:120232 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:01.071 [2024-11-28 11:52:12.901652] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:105 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:01.071 [2024-11-28 11:52:12.901670] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:120240 len:8 SGL TRANSPORT DATA 
BLOCK TRANSPORT 0x0 00:22:01.071 [2024-11-28 11:52:12.901689] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:36 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:22:01.071 [2024-11-28 11:52:12.901736] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:118 nsid:1 lba:120248 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:01.071 [2024-11-28 11:52:12.901749] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:118 cdw0:0 sqhd:0003 p:0 m:0 dnr:0 00:22:01.071 [2024-11-28 11:52:12.901768] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:72 nsid:1 lba:120256 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:01.071 [2024-11-28 11:52:12.901782] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:72 cdw0:0 sqhd:0004 p:0 m:0 dnr:0 00:22:01.071 [2024-11-28 11:52:12.901800] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:81 nsid:1 lba:120264 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:01.071 [2024-11-28 11:52:12.901813] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:81 cdw0:0 sqhd:0005 p:0 m:0 dnr:0 00:22:01.071 [2024-11-28 11:52:12.901833] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:116 nsid:1 lba:120272 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:01.071 [2024-11-28 11:52:12.901855] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:116 cdw0:0 sqhd:0006 p:0 m:0 dnr:0 00:22:01.071 [2024-11-28 11:52:12.901876] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:120280 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:01.071 [2024-11-28 11:52:12.901890] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:38 cdw0:0 sqhd:0007 p:0 m:0 dnr:0 00:22:01.071 [2024-11-28 11:52:12.901909] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:120288 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:01.071 [2024-11-28 11:52:12.901925] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:42 cdw0:0 sqhd:0008 p:0 m:0 dnr:0 00:22:01.071 [2024-11-28 11:52:12.901944] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:120296 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:01.071 [2024-11-28 11:52:12.901958] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:16 cdw0:0 sqhd:0009 p:0 m:0 dnr:0 00:22:01.071 [2024-11-28 11:52:12.904380] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:79 nsid:1 lba:120688 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:01.071 [2024-11-28 11:52:12.904413] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:79 cdw0:0 sqhd:000a p:0 m:0 dnr:0 00:22:01.071 [2024-11-28 11:52:12.904455] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:120696 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:01.071 [2024-11-28 11:52:12.904471] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:52 cdw0:0 sqhd:000b p:0 m:0 dnr:0 00:22:01.071 [2024-11-28 11:52:12.904491] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:120704 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:01.071 [2024-11-28 11:52:12.904505] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:63 cdw0:0 sqhd:000c p:0 m:0 dnr:0 00:22:01.071 [2024-11-28 11:52:12.904524] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:120712 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:01.071 [2024-11-28 11:52:12.904539] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:44 cdw0:0 sqhd:000d p:0 m:0 dnr:0 00:22:01.071 [2024-11-28 11:52:12.904558] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:120720 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:01.071 [2024-11-28 11:52:12.904572] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:48 cdw0:0 sqhd:000e p:0 m:0 dnr:0 00:22:01.071 [2024-11-28 11:52:12.904595] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:120728 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:01.071 [2024-11-28 11:52:12.904610] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:18 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:22:01.071 [2024-11-28 11:52:12.904646] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:120736 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:01.071 [2024-11-28 11:52:12.904659] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:34 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:22:01.071 [2024-11-28 11:52:12.904679] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:120744 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:01.071 [2024-11-28 11:52:12.904698] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:35 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:22:01.071 [2024-11-28 11:52:12.904716] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:120752 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:01.071 [2024-11-28 11:52:12.904742] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:41 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:22:01.071 [2024-11-28 11:52:12.904763] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:88 nsid:1 lba:120760 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:01.071 [2024-11-28 11:52:12.904777] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:88 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:22:01.071 [2024-11-28 11:52:12.904795] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:98 nsid:1 lba:120768 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:01.071 [2024-11-28 11:52:12.904809] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:98 cdw0:0 sqhd:0014 p:0 m:0 dnr:0 00:22:01.071 [2024-11-28 11:52:12.904827] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:120776 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:01.071 [2024-11-28 11:52:12.904840] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:124 cdw0:0 sqhd:0015 p:0 m:0 dnr:0 00:22:01.071 [2024-11-28 
11:52:12.904859] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:90 nsid:1 lba:120784 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:01.071 [2024-11-28 11:52:12.904872] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:90 cdw0:0 sqhd:0016 p:0 m:0 dnr:0 00:22:01.071 [2024-11-28 11:52:12.904890] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:120792 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:01.071 [2024-11-28 11:52:12.904904] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:26 cdw0:0 sqhd:0017 p:0 m:0 dnr:0 00:22:01.071 [2024-11-28 11:52:12.904922] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:120800 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:01.071 [2024-11-28 11:52:12.904935] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:55 cdw0:0 sqhd:0018 p:0 m:0 dnr:0 00:22:01.071 [2024-11-28 11:52:12.904953] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:120808 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:01.071 [2024-11-28 11:52:12.904967] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:24 cdw0:0 sqhd:0019 p:0 m:0 dnr:0 00:22:01.071 [2024-11-28 11:52:12.904985] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:120304 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:01.071 [2024-11-28 11:52:12.905001] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:23 cdw0:0 sqhd:001a p:0 m:0 dnr:0 00:22:01.071 [2024-11-28 11:52:12.905020] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:120312 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:01.071 [2024-11-28 11:52:12.905034] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:51 cdw0:0 sqhd:001b p:0 m:0 dnr:0 00:22:01.071 [2024-11-28 11:52:12.905052] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:126 nsid:1 lba:120320 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:01.071 [2024-11-28 11:52:12.905076] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:126 cdw0:0 sqhd:001c p:0 m:0 dnr:0 00:22:01.071 [2024-11-28 11:52:12.905094] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:101 nsid:1 lba:120328 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:01.071 [2024-11-28 11:52:12.905108] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:101 cdw0:0 sqhd:001d p:0 m:0 dnr:0 00:22:01.071 [2024-11-28 11:52:12.905125] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:120336 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:01.071 [2024-11-28 11:52:12.905139] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:6 cdw0:0 sqhd:001e p:0 m:0 dnr:0 00:22:01.071 [2024-11-28 11:52:12.905164] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:73 nsid:1 lba:120344 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:01.071 [2024-11-28 11:52:12.905178] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 
cid:73 cdw0:0 sqhd:001f p:0 m:0 dnr:0 00:22:01.071 [2024-11-28 11:52:12.905197] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:107 nsid:1 lba:120352 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:01.071 [2024-11-28 11:52:12.905210] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:107 cdw0:0 sqhd:0020 p:0 m:0 dnr:0 00:22:01.071 [2024-11-28 11:52:12.905228] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:121 nsid:1 lba:120360 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:01.072 [2024-11-28 11:52:12.905242] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:121 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:01.072 [2024-11-28 11:52:12.905264] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:91 nsid:1 lba:120816 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:01.072 [2024-11-28 11:52:12.905279] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:91 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:22:01.072 [2024-11-28 11:52:12.905297] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:120824 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:01.072 [2024-11-28 11:52:12.905334] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:53 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:22:01.072 [2024-11-28 11:52:12.905371] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:120832 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:01.072 [2024-11-28 11:52:12.905387] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:47 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:22:01.072 [2024-11-28 11:52:12.905407] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:104 nsid:1 lba:120840 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:01.072 [2024-11-28 11:52:12.905422] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:104 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:22:01.072 [2024-11-28 11:52:12.905441] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:120848 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:01.072 [2024-11-28 11:52:12.905456] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:59 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:22:01.072 [2024-11-28 11:52:12.905476] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:120856 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:01.072 [2024-11-28 11:52:12.905490] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:21 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:22:01.072 [2024-11-28 11:52:12.905510] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:100 nsid:1 lba:120864 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:01.072 [2024-11-28 11:52:12.905524] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:100 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:22:01.072 [2024-11-28 11:52:12.905544] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:120872 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:01.072 [2024-11-28 11:52:12.905558] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:14 cdw0:0 sqhd:0029 p:0 m:0 dnr:0 00:22:01.072 [2024-11-28 11:52:12.907858] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:120880 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:01.072 [2024-11-28 11:52:12.907888] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:54 cdw0:0 sqhd:002a p:0 m:0 dnr:0 00:22:01.072 [2024-11-28 11:52:12.907924] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:122 nsid:1 lba:120888 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:01.072 [2024-11-28 11:52:12.907940] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:122 cdw0:0 sqhd:002b p:0 m:0 dnr:0 00:22:01.072 [2024-11-28 11:52:12.907960] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:113 nsid:1 lba:120896 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:01.072 [2024-11-28 11:52:12.907975] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:113 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:22:01.072 [2024-11-28 11:52:12.907994] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:120904 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:01.072 [2024-11-28 11:52:12.908008] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:49 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:22:01.072 [2024-11-28 11:52:12.908026] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:109 nsid:1 lba:120912 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:01.072 [2024-11-28 11:52:12.908040] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:109 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:22:01.072 [2024-11-28 11:52:12.908059] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:65 nsid:1 lba:120920 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:01.072 [2024-11-28 11:52:12.908073] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:65 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:22:01.072 [2024-11-28 11:52:12.908092] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:120928 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:01.072 [2024-11-28 11:52:12.908106] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:7 cdw0:0 sqhd:0030 p:0 m:0 dnr:0 00:22:01.072 [2024-11-28 11:52:12.908124] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:78 nsid:1 lba:120936 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:01.072 [2024-11-28 11:52:12.908139] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:78 cdw0:0 sqhd:0031 p:0 m:0 dnr:0 00:22:01.072 [2024-11-28 11:52:12.908157] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:120368 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:01.072 [2024-11-28 11:52:12.908171] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:22 cdw0:0 sqhd:0032 p:0 m:0 dnr:0 00:22:01.072 [2024-11-28 11:52:12.908190] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:120376 len:8 SGL TRANSPORT DATA BLOCK 
TRANSPORT 0x0 00:22:01.072 [2024-11-28 11:52:12.908204] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:60 cdw0:0 sqhd:0033 p:0 m:0 dnr:0 00:22:01.072 [2024-11-28 11:52:12.908223] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:106 nsid:1 lba:120384 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:01.072 [2024-11-28 11:52:12.908238] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:106 cdw0:0 sqhd:0034 p:0 m:0 dnr:0 00:22:01.072 [2024-11-28 11:52:12.908257] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:97 nsid:1 lba:120392 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:01.072 [2024-11-28 11:52:12.908271] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:97 cdw0:0 sqhd:0035 p:0 m:0 dnr:0 00:22:01.072 [2024-11-28 11:52:12.908290] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:123 nsid:1 lba:120400 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:01.072 [2024-11-28 11:52:12.908343] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:123 cdw0:0 sqhd:0036 p:0 m:0 dnr:0 00:22:01.072 [2024-11-28 11:52:12.908364] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:120408 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:01.072 [2024-11-28 11:52:12.908388] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:12 cdw0:0 sqhd:0037 p:0 m:0 dnr:0 00:22:01.072 [2024-11-28 11:52:12.908409] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:120416 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:01.072 [2024-11-28 11:52:12.908425] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:50 cdw0:0 sqhd:0038 p:0 m:0 dnr:0 00:22:01.072 [2024-11-28 11:52:12.908445] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:117 nsid:1 lba:120424 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:01.072 [2024-11-28 11:52:12.908460] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:117 cdw0:0 sqhd:0039 p:0 m:0 dnr:0 00:22:01.072 [2024-11-28 11:52:12.910391] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:102 nsid:1 lba:120944 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:01.072 [2024-11-28 11:52:12.910423] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:102 cdw0:0 sqhd:003a p:0 m:0 dnr:0 00:22:01.072 [2024-11-28 11:52:12.910474] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:120952 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:01.072 [2024-11-28 11:52:12.910493] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:28 cdw0:0 sqhd:003b p:0 m:0 dnr:0 00:22:01.072 [2024-11-28 11:52:12.910514] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:120960 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:01.072 [2024-11-28 11:52:12.910530] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:10 cdw0:0 sqhd:003c p:0 m:0 dnr:0 00:22:01.072 [2024-11-28 11:52:12.910550] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:120968 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:01.072 [2024-11-28 11:52:12.910565] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:45 cdw0:0 sqhd:003d p:0 m:0 dnr:0 00:22:01.072 [2024-11-28 11:52:12.910585] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:120976 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:01.072 [2024-11-28 11:52:12.910600] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:9 cdw0:0 sqhd:003e p:0 m:0 dnr:0 00:22:01.072 [2024-11-28 11:52:12.910620] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:120984 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:01.072 [2024-11-28 11:52:12.910635] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:25 cdw0:0 sqhd:003f p:0 m:0 dnr:0 00:22:01.072 [2024-11-28 11:52:12.910659] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:69 nsid:1 lba:120992 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:01.072 [2024-11-28 11:52:12.910674] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:69 cdw0:0 sqhd:0040 p:0 m:0 dnr:0 00:22:01.072 [2024-11-28 11:52:12.910694] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:121000 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:01.072 [2024-11-28 11:52:12.910724] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:30 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:22:01.072 [2024-11-28 11:52:12.910744] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:121008 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:01.072 [2024-11-28 11:52:12.910758] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:33 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:22:01.072 [2024-11-28 11:52:12.910779] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:111 nsid:1 lba:121016 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:01.072 [2024-11-28 11:52:12.910829] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:111 cdw0:0 sqhd:0043 p:0 m:0 dnr:0 00:22:01.072 [2024-11-28 11:52:12.910851] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:121024 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:01.072 [2024-11-28 11:52:12.910865] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:56 cdw0:0 sqhd:0044 p:0 m:0 dnr:0 00:22:01.072 [2024-11-28 11:52:12.910884] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:121032 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:01.072 [2024-11-28 11:52:12.910899] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:125 cdw0:0 sqhd:0045 p:0 m:0 dnr:0 00:22:01.072 [2024-11-28 11:52:12.910918] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:85 nsid:1 lba:121040 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:01.072 [2024-11-28 11:52:12.910932] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:85 cdw0:0 sqhd:0046 p:0 m:0 dnr:0 00:22:01.072 [2024-11-28 
11:52:12.910951] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:71 nsid:1 lba:121048 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:01.072 [2024-11-28 11:52:12.910965] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:71 cdw0:0 sqhd:0047 p:0 m:0 dnr:0 00:22:01.072 [2024-11-28 11:52:12.910984] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:110 nsid:1 lba:121056 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:01.073 [2024-11-28 11:52:12.910999] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:110 cdw0:0 sqhd:0048 p:0 m:0 dnr:0 00:22:01.073 [2024-11-28 11:52:12.911018] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:95 nsid:1 lba:121064 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:01.073 [2024-11-28 11:52:12.911032] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:95 cdw0:0 sqhd:0049 p:0 m:0 dnr:0 00:22:01.073 [2024-11-28 11:52:12.911052] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 nsid:1 lba:121072 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:01.073 [2024-11-28 11:52:12.911067] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:108 cdw0:0 sqhd:004a p:0 m:0 dnr:0 00:22:01.073 [2024-11-28 11:52:12.911087] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:121080 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:01.073 [2024-11-28 11:52:12.911101] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:39 cdw0:0 sqhd:004b p:0 m:0 dnr:0 00:22:01.073 [2024-11-28 11:52:12.911121] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:121088 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:01.073 [2024-11-28 11:52:12.911135] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:17 cdw0:0 sqhd:004c p:0 m:0 dnr:0 00:22:01.073 [2024-11-28 11:52:12.911154] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:121096 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:01.073 [2024-11-28 11:52:12.911168] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:11 cdw0:0 sqhd:004d p:0 m:0 dnr:0 00:22:01.073 [2024-11-28 11:52:12.911187] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:74 nsid:1 lba:120432 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:01.073 [2024-11-28 11:52:12.911201] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:74 cdw0:0 sqhd:004e p:0 m:0 dnr:0 00:22:01.073 [2024-11-28 11:52:12.911220] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:66 nsid:1 lba:120440 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:01.073 [2024-11-28 11:52:12.911241] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:66 cdw0:0 sqhd:004f p:0 m:0 dnr:0 00:22:01.073 [2024-11-28 11:52:12.911278] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:75 nsid:1 lba:120448 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:01.073 [2024-11-28 11:52:12.911293] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 
cid:75 cdw0:0 sqhd:0050 p:0 m:0 dnr:0 00:22:01.073 [2024-11-28 11:52:12.911341] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:99 nsid:1 lba:120456 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:01.073 [2024-11-28 11:52:12.911355] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:99 cdw0:0 sqhd:0051 p:0 m:0 dnr:0 00:22:01.073 [2024-11-28 11:52:12.911388] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:120464 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:01.073 [2024-11-28 11:52:12.911407] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:4 cdw0:0 sqhd:0052 p:0 m:0 dnr:0 00:22:01.073 [2024-11-28 11:52:12.911428] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:83 nsid:1 lba:120472 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:01.073 [2024-11-28 11:52:12.911443] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:83 cdw0:0 sqhd:0053 p:0 m:0 dnr:0 00:22:01.073 [2024-11-28 11:52:12.911463] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:120480 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:01.073 [2024-11-28 11:52:12.911478] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:27 cdw0:0 sqhd:0054 p:0 m:0 dnr:0 00:22:01.073 [2024-11-28 11:52:12.911498] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:120488 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:01.073 [2024-11-28 11:52:12.911513] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:31 cdw0:0 sqhd:0055 p:0 m:0 dnr:0 00:22:01.073 [2024-11-28 11:52:12.911534] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:120496 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:01.073 [2024-11-28 11:52:12.911549] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:3 cdw0:0 sqhd:0056 p:0 m:0 dnr:0 00:22:01.073 [2024-11-28 11:52:12.911569] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:120 nsid:1 lba:120504 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:01.073 [2024-11-28 11:52:12.911599] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:120 cdw0:0 sqhd:0057 p:0 m:0 dnr:0 00:22:01.073 [2024-11-28 11:52:12.911619] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:120512 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:01.073 [2024-11-28 11:52:12.911648] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:58 cdw0:0 sqhd:0058 p:0 m:0 dnr:0 00:22:01.073 [2024-11-28 11:52:12.911675] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:120520 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:01.073 [2024-11-28 11:52:12.911695] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:1 cdw0:0 sqhd:0059 p:0 m:0 dnr:0 00:22:01.073 [2024-11-28 11:52:12.911715] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:120528 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:01.073 [2024-11-28 11:52:12.911742] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:13 cdw0:0 sqhd:005a p:0 m:0 dnr:0 00:22:01.073 [2024-11-28 11:52:12.911769] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:64 nsid:1 lba:120536 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:01.073 [2024-11-28 11:52:12.911784] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:64 cdw0:0 sqhd:005b p:0 m:0 dnr:0 00:22:01.073 [2024-11-28 11:52:12.911811] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:82 nsid:1 lba:120544 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:01.073 [2024-11-28 11:52:12.911827] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:82 cdw0:0 sqhd:005c p:0 m:0 dnr:0 00:22:01.073 [2024-11-28 11:52:12.911847] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:120552 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:01.073 [2024-11-28 11:52:12.911861] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:16 cdw0:0 sqhd:005d p:0 m:0 dnr:0 00:22:01.073 [2024-11-28 11:52:12.911880] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:121104 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:01.073 [2024-11-28 11:52:12.911895] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:42 cdw0:0 sqhd:005e p:0 m:0 dnr:0 00:22:01.073 [2024-11-28 11:52:12.911914] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:121112 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:01.073 [2024-11-28 11:52:12.911929] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:38 cdw0:0 sqhd:005f p:0 m:0 dnr:0 00:22:01.073 [2024-11-28 11:52:12.911949] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:116 nsid:1 lba:121120 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:01.073 [2024-11-28 11:52:12.911963] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:116 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:22:01.073 [2024-11-28 11:52:12.911983] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:81 nsid:1 lba:121128 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:01.073 [2024-11-28 11:52:12.911997] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:81 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:22:01.073 [2024-11-28 11:52:12.912021] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:72 nsid:1 lba:121136 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:01.073 [2024-11-28 11:52:12.912037] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:72 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:22:01.073 [2024-11-28 11:52:12.912057] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:118 nsid:1 lba:121144 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:01.073 [2024-11-28 11:52:12.912072] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:118 cdw0:0 sqhd:0063 p:0 m:0 dnr:0 00:22:01.073 [2024-11-28 11:52:12.912091] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:121152 len:8 SGL DATA BLOCK OFFSET 0x0 
len:0x1000 00:22:01.073 [2024-11-28 11:52:12.912106] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:36 cdw0:0 sqhd:0064 p:0 m:0 dnr:0 00:22:01.073 [2024-11-28 11:52:12.912125] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:105 nsid:1 lba:121160 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:01.073 [2024-11-28 11:52:12.912140] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:105 cdw0:0 sqhd:0065 p:0 m:0 dnr:0 00:22:01.073 [2024-11-28 11:52:12.912159] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:121168 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:01.073 [2024-11-28 11:52:12.912173] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:15 cdw0:0 sqhd:0066 p:0 m:0 dnr:0 00:22:01.073 [2024-11-28 11:52:12.912193] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:121176 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:01.073 [2024-11-28 11:52:12.912208] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:43 cdw0:0 sqhd:0067 p:0 m:0 dnr:0 00:22:01.073 [2024-11-28 11:52:12.912235] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:121184 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:01.073 [2024-11-28 11:52:12.912249] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:46 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:22:01.073 [2024-11-28 11:52:12.912269] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:121192 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:01.073 [2024-11-28 11:52:12.912283] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:19 cdw0:0 sqhd:0069 p:0 m:0 dnr:0 00:22:01.073 [2024-11-28 11:52:12.912319] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:120560 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:01.073 [2024-11-28 11:52:12.912359] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:40 cdw0:0 sqhd:006a p:0 m:0 dnr:0 00:22:01.073 [2024-11-28 11:52:12.912391] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:112 nsid:1 lba:120568 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:01.073 [2024-11-28 11:52:12.912407] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:112 cdw0:0 sqhd:006b p:0 m:0 dnr:0 00:22:01.073 [2024-11-28 11:52:12.912428] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:103 nsid:1 lba:120576 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:01.073 [2024-11-28 11:52:12.912444] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:103 cdw0:0 sqhd:006c p:0 m:0 dnr:0 00:22:01.073 [2024-11-28 11:52:12.912464] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:89 nsid:1 lba:120584 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:01.073 [2024-11-28 11:52:12.912479] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:89 cdw0:0 sqhd:006d p:0 m:0 dnr:0 00:22:01.073 [2024-11-28 11:52:12.912499] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ 
sqid:1 cid:114 nsid:1 lba:120592 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:01.073 [2024-11-28 11:52:12.912514] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:114 cdw0:0 sqhd:006e p:0 m:0 dnr:0 00:22:01.074 [2024-11-28 11:52:12.912534] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:120600 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:01.074 [2024-11-28 11:52:12.912549] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:29 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:22:01.074 [2024-11-28 11:52:12.912568] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:77 nsid:1 lba:120608 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:01.074 [2024-11-28 11:52:12.912583] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:77 cdw0:0 sqhd:0070 p:0 m:0 dnr:0 00:22:01.074 [2024-11-28 11:52:12.912603] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:87 nsid:1 lba:120616 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:01.074 [2024-11-28 11:52:12.912618] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:87 cdw0:0 sqhd:0071 p:0 m:0 dnr:0 00:22:01.074 [2024-11-28 11:52:12.912642] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:96 nsid:1 lba:120624 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:01.074 [2024-11-28 11:52:12.912682] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:96 cdw0:0 sqhd:0072 p:0 m:0 dnr:0 00:22:01.074 [2024-11-28 11:52:12.912703] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:120632 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:01.074 [2024-11-28 11:52:12.912718] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:20 cdw0:0 sqhd:0073 p:0 m:0 dnr:0 00:22:01.074 [2024-11-28 11:52:12.912738] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:115 nsid:1 lba:120640 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:01.074 [2024-11-28 11:52:12.912760] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:115 cdw0:0 sqhd:0074 p:0 m:0 dnr:0 00:22:01.074 [2024-11-28 11:52:12.912782] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:120648 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:01.074 [2024-11-28 11:52:12.912798] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:32 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:22:01.074 [2024-11-28 11:52:12.912818] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:120656 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:01.074 [2024-11-28 11:52:12.912832] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:57 cdw0:0 sqhd:0076 p:0 m:0 dnr:0 00:22:01.074 [2024-11-28 11:52:12.912852] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:120664 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:01.074 [2024-11-28 11:52:12.912867] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:0 cdw0:0 sqhd:0077 p:0 m:0 dnr:0 00:22:01.074 [2024-11-28 11:52:12.912887] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:67 nsid:1 lba:120672 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:01.074 [2024-11-28 11:52:12.912901] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:67 cdw0:0 sqhd:0078 p:0 m:0 dnr:0 00:22:01.074 [2024-11-28 11:52:12.912921] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:80 nsid:1 lba:120680 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:01.074 [2024-11-28 11:52:12.912936] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:80 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 00:22:01.074 [2024-11-28 11:52:12.912955] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:84 nsid:1 lba:120176 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:01.074 [2024-11-28 11:52:12.912975] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:84 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:22:01.074 [2024-11-28 11:52:12.912995] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:119 nsid:1 lba:120184 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:01.074 [2024-11-28 11:52:12.913009] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:119 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:22:01.074 [2024-11-28 11:52:12.913029] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:120192 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:01.074 [2024-11-28 11:52:12.913043] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:5 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:22:01.074 [2024-11-28 11:52:12.913063] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:120200 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:01.074 [2024-11-28 11:52:12.913077] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:2 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:22:01.074 [2024-11-28 11:52:12.913097] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:68 nsid:1 lba:120208 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:01.074 [2024-11-28 11:52:12.913111] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:68 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:22:01.074 [2024-11-28 11:52:12.913131] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:86 nsid:1 lba:120216 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:01.074 [2024-11-28 11:52:12.913145] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:86 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:22:01.074 [2024-11-28 11:52:12.913164] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:76 nsid:1 lba:120224 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:01.074 [2024-11-28 11:52:12.913186] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:76 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:01.074 [2024-11-28 11:52:12.913207] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:120232 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:01.074 [2024-11-28 11:52:12.913222] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:61 cdw0:0 
sqhd:0001 p:0 m:0 dnr:0 00:22:01.074 [2024-11-28 11:52:12.913242] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:120240 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:01.074 [2024-11-28 11:52:12.913256] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:8 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:22:01.074 [2024-11-28 11:52:12.913275] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:92 nsid:1 lba:120248 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:01.074 [2024-11-28 11:52:12.913290] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:92 cdw0:0 sqhd:0003 p:0 m:0 dnr:0 00:22:01.074 [2024-11-28 11:52:12.913346] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:120256 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:01.074 [2024-11-28 11:52:12.913364] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:62 cdw0:0 sqhd:0004 p:0 m:0 dnr:0 00:22:01.074 [2024-11-28 11:52:12.913386] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:70 nsid:1 lba:120264 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:01.074 [2024-11-28 11:52:12.913402] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:70 cdw0:0 sqhd:0005 p:0 m:0 dnr:0 00:22:01.074 [2024-11-28 11:52:12.913422] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:94 nsid:1 lba:120272 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:01.074 [2024-11-28 11:52:12.913437] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:94 cdw0:0 sqhd:0006 p:0 m:0 dnr:0 00:22:01.074 [2024-11-28 11:52:12.913457] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:93 nsid:1 lba:120280 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:01.074 [2024-11-28 11:52:12.913472] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:93 cdw0:0 sqhd:0007 p:0 m:0 dnr:0 00:22:01.074 [2024-11-28 11:52:12.913492] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:120288 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:01.074 [2024-11-28 11:52:12.913506] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:37 cdw0:0 sqhd:0008 p:0 m:0 dnr:0 00:22:01.074 [2024-11-28 11:52:12.913526] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:120296 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:01.074 [2024-11-28 11:52:12.913541] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:14 cdw0:0 sqhd:0009 p:0 m:0 dnr:0 00:22:01.074 [2024-11-28 11:52:12.913565] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:100 nsid:1 lba:120688 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:01.074 [2024-11-28 11:52:12.913587] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:100 cdw0:0 sqhd:000a p:0 m:0 dnr:0 00:22:01.074 [2024-11-28 11:52:12.913608] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:120696 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:01.074 [2024-11-28 11:52:12.913623] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:21 cdw0:0 sqhd:000b p:0 m:0 dnr:0 00:22:01.074 [2024-11-28 11:52:12.913644] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:120704 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:01.074 [2024-11-28 11:52:12.913681] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:59 cdw0:0 sqhd:000c p:0 m:0 dnr:0 00:22:01.074 [2024-11-28 11:52:12.913702] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:104 nsid:1 lba:120712 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:01.074 [2024-11-28 11:52:12.913719] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:104 cdw0:0 sqhd:000d p:0 m:0 dnr:0 00:22:01.074 [2024-11-28 11:52:12.913738] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:120720 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:01.074 [2024-11-28 11:52:12.913759] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:47 cdw0:0 sqhd:000e p:0 m:0 dnr:0 00:22:01.074 [2024-11-28 11:52:12.913779] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:120728 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:01.074 [2024-11-28 11:52:12.913793] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:53 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:22:01.074 [2024-11-28 11:52:12.913813] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:91 nsid:1 lba:120736 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:01.074 [2024-11-28 11:52:12.913828] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:91 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:22:01.074 [2024-11-28 11:52:12.913847] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:121 nsid:1 lba:120744 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:01.074 [2024-11-28 11:52:12.913862] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:121 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:22:01.074 [2024-11-28 11:52:12.913881] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:107 nsid:1 lba:120752 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:01.074 [2024-11-28 11:52:12.913896] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:107 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:22:01.074 [2024-11-28 11:52:12.913915] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:73 nsid:1 lba:120760 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:01.074 [2024-11-28 11:52:12.913930] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:73 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:22:01.074 [2024-11-28 11:52:12.913949] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:120768 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:01.075 [2024-11-28 11:52:12.913964] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:6 cdw0:0 sqhd:0014 p:0 m:0 dnr:0 00:22:01.075 [2024-11-28 11:52:12.913984] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:101 nsid:1 lba:120776 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:01.075 [2024-11-28 
11:52:12.913998] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:101 cdw0:0 sqhd:0015 p:0 m:0 dnr:0 00:22:01.075 [2024-11-28 11:52:12.914017] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:120784 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:01.075 [2024-11-28 11:52:12.914032] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:126 cdw0:0 sqhd:0016 p:0 m:0 dnr:0 00:22:01.075 [2024-11-28 11:52:12.914052] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:120792 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:01.075 [2024-11-28 11:52:12.914066] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:51 cdw0:0 sqhd:0017 p:0 m:0 dnr:0 00:22:01.075 [2024-11-28 11:52:12.914086] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:120800 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:01.075 [2024-11-28 11:52:12.914100] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:23 cdw0:0 sqhd:0018 p:0 m:0 dnr:0 00:22:01.075 [2024-11-28 11:52:12.914127] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:120808 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:01.075 [2024-11-28 11:52:12.914143] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:24 cdw0:0 sqhd:0019 p:0 m:0 dnr:0 00:22:01.075 [2024-11-28 11:52:12.914163] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:120304 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:01.075 [2024-11-28 11:52:12.914183] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:55 cdw0:0 sqhd:001a p:0 m:0 dnr:0 00:22:01.075 [2024-11-28 11:52:12.914220] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:120312 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:01.075 [2024-11-28 11:52:12.914236] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:26 cdw0:0 sqhd:001b p:0 m:0 dnr:0 00:22:01.075 [2024-11-28 11:52:12.914256] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:90 nsid:1 lba:120320 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:01.075 [2024-11-28 11:52:12.914272] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:90 cdw0:0 sqhd:001c p:0 m:0 dnr:0 00:22:01.075 [2024-11-28 11:52:12.914291] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:124 nsid:1 lba:120328 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:01.075 [2024-11-28 11:52:12.914321] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:124 cdw0:0 sqhd:001d p:0 m:0 dnr:0 00:22:01.075 [2024-11-28 11:52:12.914362] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:98 nsid:1 lba:120336 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:01.075 [2024-11-28 11:52:12.914381] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:98 cdw0:0 sqhd:001e p:0 m:0 dnr:0 00:22:01.075 [2024-11-28 11:52:12.914402] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:88 nsid:1 lba:120344 
len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:01.075 [2024-11-28 11:52:12.914418] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:88 cdw0:0 sqhd:001f p:0 m:0 dnr:0 00:22:01.075 [2024-11-28 11:52:12.914438] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:120352 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:01.075 [2024-11-28 11:52:12.914473] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:41 cdw0:0 sqhd:0020 p:0 m:0 dnr:0 00:22:01.075 [2024-11-28 11:52:12.914496] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:120360 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:01.075 [2024-11-28 11:52:12.914512] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:35 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:01.075 [2024-11-28 11:52:12.914533] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:120816 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:01.075 [2024-11-28 11:52:12.914548] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:34 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:22:01.075 [2024-11-28 11:52:12.914568] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:120824 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:01.075 [2024-11-28 11:52:12.914584] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:18 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:22:01.075 [2024-11-28 11:52:12.914605] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:120832 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:01.075 [2024-11-28 11:52:12.914620] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:48 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:22:01.075 [2024-11-28 11:52:12.914652] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:120840 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:01.075 [2024-11-28 11:52:12.914668] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:44 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:22:01.075 [2024-11-28 11:52:12.914688] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:120848 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:01.075 [2024-11-28 11:52:12.914704] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:63 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:22:01.075 [2024-11-28 11:52:12.914747] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:120856 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:01.075 [2024-11-28 11:52:12.914762] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:52 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:22:01.075 [2024-11-28 11:52:12.914788] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:79 nsid:1 lba:120864 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:01.075 [2024-11-28 11:52:12.914803] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:79 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:22:01.075 [2024-11-28 11:52:12.914823] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:117 nsid:1 lba:120872 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:01.075 [2024-11-28 11:52:12.914838] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:117 cdw0:0 sqhd:0029 p:0 m:0 dnr:0 00:22:01.075 [2024-11-28 11:52:12.914891] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:120880 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:01.075 [2024-11-28 11:52:12.914912] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:50 cdw0:0 sqhd:002a p:0 m:0 dnr:0 00:22:01.075 [2024-11-28 11:52:12.914934] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:120888 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:01.075 [2024-11-28 11:52:12.914950] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:12 cdw0:0 sqhd:002b p:0 m:0 dnr:0 00:22:01.075 [2024-11-28 11:52:12.914970] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:123 nsid:1 lba:120896 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:01.075 [2024-11-28 11:52:12.914985] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:123 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:22:01.075 [2024-11-28 11:52:12.915005] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:97 nsid:1 lba:120904 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:01.075 [2024-11-28 11:52:12.915020] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:97 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:22:01.075 [2024-11-28 11:52:12.915040] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:106 nsid:1 lba:120912 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:01.075 [2024-11-28 11:52:12.915055] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:106 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:22:01.075 [2024-11-28 11:52:12.915075] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:120920 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:01.075 [2024-11-28 11:52:12.915090] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:60 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:22:01.075 [2024-11-28 11:52:12.915110] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:120928 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:01.075 [2024-11-28 11:52:12.915124] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:22 cdw0:0 sqhd:0030 p:0 m:0 dnr:0 00:22:01.075 [2024-11-28 11:52:12.915145] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:78 nsid:1 lba:120936 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:01.075 [2024-11-28 11:52:12.915171] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:78 cdw0:0 sqhd:0031 p:0 m:0 dnr:0 00:22:01.075 [2024-11-28 11:52:12.915192] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:120368 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:01.075 [2024-11-28 11:52:12.915209] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:7 cdw0:0 sqhd:0032 p:0 m:0 
dnr:0 00:22:01.075 [2024-11-28 11:52:12.915229] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:65 nsid:1 lba:120376 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:01.075 [2024-11-28 11:52:12.915245] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:65 cdw0:0 sqhd:0033 p:0 m:0 dnr:0 00:22:01.075 [2024-11-28 11:52:12.915265] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:109 nsid:1 lba:120384 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:01.075 [2024-11-28 11:52:12.915280] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:109 cdw0:0 sqhd:0034 p:0 m:0 dnr:0 00:22:01.075 [2024-11-28 11:52:12.915300] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:120392 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:01.076 [2024-11-28 11:52:12.915356] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:49 cdw0:0 sqhd:0035 p:0 m:0 dnr:0 00:22:01.076 [2024-11-28 11:52:12.915380] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:113 nsid:1 lba:120400 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:01.076 [2024-11-28 11:52:12.915395] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:113 cdw0:0 sqhd:0036 p:0 m:0 dnr:0 00:22:01.076 [2024-11-28 11:52:12.915418] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:122 nsid:1 lba:120408 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:01.076 [2024-11-28 11:52:12.915433] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:122 cdw0:0 sqhd:0037 p:0 m:0 dnr:0 00:22:01.076 [2024-11-28 11:52:12.915454] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:120416 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:01.076 [2024-11-28 11:52:12.915470] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:54 cdw0:0 sqhd:0038 p:0 m:0 dnr:0 00:22:01.076 [2024-11-28 11:52:12.916705] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:120424 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:01.076 [2024-11-28 11:52:12.916732] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:54 cdw0:0 sqhd:0039 p:0 m:0 dnr:0 00:22:01.076 [2024-11-28 11:52:12.916758] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:122 nsid:1 lba:120944 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:01.076 [2024-11-28 11:52:12.916775] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:122 cdw0:0 sqhd:003a p:0 m:0 dnr:0 00:22:01.076 [2024-11-28 11:52:12.916796] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:113 nsid:1 lba:120952 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:01.076 [2024-11-28 11:52:12.916811] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:113 cdw0:0 sqhd:003b p:0 m:0 dnr:0 00:22:01.076 [2024-11-28 11:52:12.916832] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:120960 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:01.076 [2024-11-28 11:52:12.916846] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:49 cdw0:0 sqhd:003c p:0 m:0 dnr:0 00:22:01.076 [2024-11-28 11:52:12.916867] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:109 nsid:1 lba:120968 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:01.076 [2024-11-28 11:52:12.916893] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:109 cdw0:0 sqhd:003d p:0 m:0 dnr:0 00:22:01.076 [2024-11-28 11:52:12.916915] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:65 nsid:1 lba:120976 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:01.076 [2024-11-28 11:52:12.916930] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:65 cdw0:0 sqhd:003e p:0 m:0 dnr:0 00:22:01.076 [2024-11-28 11:52:12.916950] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:120984 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:01.076 [2024-11-28 11:52:12.916965] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:7 cdw0:0 sqhd:003f p:0 m:0 dnr:0 00:22:01.076 [2024-11-28 11:52:12.916986] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:78 nsid:1 lba:120992 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:01.076 [2024-11-28 11:52:12.917001] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:78 cdw0:0 sqhd:0040 p:0 m:0 dnr:0 00:22:01.076 [2024-11-28 11:52:12.917034] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:121000 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:01.076 [2024-11-28 11:52:12.917054] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:22 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:22:01.076 [2024-11-28 11:52:12.917076] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:121008 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:01.076 [2024-11-28 11:52:12.917092] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:60 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:22:01.076 [2024-11-28 11:52:12.917112] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:106 nsid:1 lba:121016 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:01.076 [2024-11-28 11:52:12.917128] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:106 cdw0:0 sqhd:0043 p:0 m:0 dnr:0 00:22:01.076 [2024-11-28 11:52:12.917148] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:97 nsid:1 lba:121024 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:01.076 [2024-11-28 11:52:12.917163] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:97 cdw0:0 sqhd:0044 p:0 m:0 dnr:0 00:22:01.076 [2024-11-28 11:52:12.917184] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:123 nsid:1 lba:121032 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:01.076 [2024-11-28 11:52:12.917199] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:123 cdw0:0 sqhd:0045 p:0 m:0 dnr:0 00:22:01.076 [2024-11-28 11:52:12.917219] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:121040 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:01.076 [2024-11-28 
11:52:12.917234] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:12 cdw0:0 sqhd:0046 p:0 m:0 dnr:0 00:22:01.076 [2024-11-28 11:52:12.917255] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:121048 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:01.076 [2024-11-28 11:52:12.917271] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:50 cdw0:0 sqhd:0047 p:0 m:0 dnr:0 00:22:01.076 [2024-11-28 11:52:12.917291] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:117 nsid:1 lba:121056 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:01.076 [2024-11-28 11:52:12.917335] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:117 cdw0:0 sqhd:0048 p:0 m:0 dnr:0 00:22:01.076 [2024-11-28 11:52:12.917778] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:79 nsid:1 lba:121064 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:01.076 [2024-11-28 11:52:12.917812] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:79 cdw0:0 sqhd:0049 p:0 m:0 dnr:0 00:22:01.076 [2024-11-28 11:52:12.917839] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:121072 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:01.076 [2024-11-28 11:52:12.917856] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:52 cdw0:0 sqhd:004a p:0 m:0 dnr:0 00:22:01.076 [2024-11-28 11:52:12.917877] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:121080 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:01.076 [2024-11-28 11:52:12.917893] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:63 cdw0:0 sqhd:004b p:0 m:0 dnr:0 00:22:01.076 [2024-11-28 11:52:12.917914] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:121088 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:01.076 [2024-11-28 11:52:12.917929] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:44 cdw0:0 sqhd:004c p:0 m:0 dnr:0 00:22:01.076 [2024-11-28 11:52:12.917951] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:121096 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:01.076 [2024-11-28 11:52:12.917967] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:48 cdw0:0 sqhd:004d p:0 m:0 dnr:0 00:22:01.076 [2024-11-28 11:52:12.917988] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:120432 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:01.076 [2024-11-28 11:52:12.918003] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:18 cdw0:0 sqhd:004e p:0 m:0 dnr:0 00:22:01.076 [2024-11-28 11:52:12.918024] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:120440 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:01.076 [2024-11-28 11:52:12.918039] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:34 cdw0:0 sqhd:004f p:0 m:0 dnr:0 00:22:01.076 [2024-11-28 11:52:12.918073] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:120448 len:8 SGL 
TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:01.076 [2024-11-28 11:52:12.918088] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:35 cdw0:0 sqhd:0050 p:0 m:0 dnr:0 00:22:01.076 [2024-11-28 11:52:12.918108] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:120456 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:01.076 [2024-11-28 11:52:12.918123] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:41 cdw0:0 sqhd:0051 p:0 m:0 dnr:0 00:22:01.076 [2024-11-28 11:52:12.918143] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:88 nsid:1 lba:120464 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:01.076 [2024-11-28 11:52:12.918158] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:88 cdw0:0 sqhd:0052 p:0 m:0 dnr:0 00:22:01.076 [2024-11-28 11:52:12.918187] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:98 nsid:1 lba:120472 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:01.076 [2024-11-28 11:52:12.918203] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:98 cdw0:0 sqhd:0053 p:0 m:0 dnr:0 00:22:01.076 [2024-11-28 11:52:12.918222] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:124 nsid:1 lba:120480 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:01.076 [2024-11-28 11:52:12.918237] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:124 cdw0:0 sqhd:0054 p:0 m:0 dnr:0 00:22:01.076 [2024-11-28 11:52:12.918258] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:90 nsid:1 lba:120488 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:01.076 [2024-11-28 11:52:12.918272] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:90 cdw0:0 sqhd:0055 p:0 m:0 dnr:0 00:22:01.076 [2024-11-28 11:52:12.918300] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:120496 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:01.076 [2024-11-28 11:52:12.918332] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:26 cdw0:0 sqhd:0056 p:0 m:0 dnr:0 00:22:01.076 [2024-11-28 11:52:12.918369] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:120504 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:01.076 [2024-11-28 11:52:12.918389] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:55 cdw0:0 sqhd:0057 p:0 m:0 dnr:0 00:22:01.076 [2024-11-28 11:52:12.918410] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:120512 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:01.076 [2024-11-28 11:52:12.918426] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:24 cdw0:0 sqhd:0058 p:0 m:0 dnr:0 00:22:01.076 [2024-11-28 11:52:12.918471] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:120520 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:01.076 [2024-11-28 11:52:12.918489] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:23 cdw0:0 sqhd:0059 p:0 m:0 dnr:0 00:22:01.076 [2024-11-28 11:52:12.918510] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:120528 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:01.076 [2024-11-28 11:52:12.918537] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:51 cdw0:0 sqhd:005a p:0 m:0 dnr:0 00:22:01.076 [2024-11-28 11:52:12.918559] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:126 nsid:1 lba:120536 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:01.076 [2024-11-28 11:52:12.918574] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:126 cdw0:0 sqhd:005b p:0 m:0 dnr:0 00:22:01.076 [2024-11-28 11:52:12.918596] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:101 nsid:1 lba:120544 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:01.077 [2024-11-28 11:52:12.918612] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:101 cdw0:0 sqhd:005c p:0 m:0 dnr:0 00:22:01.077 [2024-11-28 11:52:12.918633] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:120552 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:01.077 [2024-11-28 11:52:12.918649] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:6 cdw0:0 sqhd:005d p:0 m:0 dnr:0 00:22:01.077 [2024-11-28 11:52:12.918670] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:73 nsid:1 lba:121104 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:01.077 [2024-11-28 11:52:12.918686] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:73 cdw0:0 sqhd:005e p:0 m:0 dnr:0 00:22:01.077 [2024-11-28 11:52:12.918707] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:107 nsid:1 lba:121112 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:01.077 [2024-11-28 11:52:12.918738] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:107 cdw0:0 sqhd:005f p:0 m:0 dnr:0 00:22:01.077 [2024-11-28 11:52:12.918758] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:121 nsid:1 lba:121120 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:01.077 [2024-11-28 11:52:12.918774] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:121 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:22:01.077 [2024-11-28 11:52:12.918812] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:91 nsid:1 lba:121128 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:01.077 [2024-11-28 11:52:12.918831] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:91 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:22:01.077 [2024-11-28 11:52:12.918861] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:121136 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:01.077 [2024-11-28 11:52:12.918878] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:53 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:22:01.077 [2024-11-28 11:52:12.918904] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:121144 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:01.077 [2024-11-28 11:52:12.918919] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:47 cdw0:0 sqhd:0063 p:0 
m:0 dnr:0 00:22:01.077 [2024-11-28 11:52:12.918940] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:104 nsid:1 lba:121152 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:01.077 [2024-11-28 11:52:12.918954] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:104 cdw0:0 sqhd:0064 p:0 m:0 dnr:0 00:22:01.077 [2024-11-28 11:52:12.918975] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:121160 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:01.077 [2024-11-28 11:52:12.918990] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:59 cdw0:0 sqhd:0065 p:0 m:0 dnr:0 00:22:01.077 [2024-11-28 11:52:12.919010] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:121168 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:01.077 [2024-11-28 11:52:12.919025] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:21 cdw0:0 sqhd:0066 p:0 m:0 dnr:0 00:22:01.077 [2024-11-28 11:52:12.919045] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:100 nsid:1 lba:121176 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:01.077 [2024-11-28 11:52:12.919059] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:100 cdw0:0 sqhd:0067 p:0 m:0 dnr:0 00:22:01.077 [2024-11-28 11:52:12.919080] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:121184 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:01.077 [2024-11-28 11:52:12.919095] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:14 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:22:01.077 [2024-11-28 11:52:12.919115] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:121192 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:01.077 [2024-11-28 11:52:12.919129] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:37 cdw0:0 sqhd:0069 p:0 m:0 dnr:0 00:22:01.077 [2024-11-28 11:52:12.919149] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:93 nsid:1 lba:120560 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:01.077 [2024-11-28 11:52:12.919164] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:93 cdw0:0 sqhd:006a p:0 m:0 dnr:0 00:22:01.077 [2024-11-28 11:52:12.919185] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:94 nsid:1 lba:120568 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:01.077 [2024-11-28 11:52:12.919199] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:94 cdw0:0 sqhd:006b p:0 m:0 dnr:0 00:22:01.077 [2024-11-28 11:52:12.919219] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:70 nsid:1 lba:120576 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:01.077 [2024-11-28 11:52:12.919234] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:70 cdw0:0 sqhd:006c p:0 m:0 dnr:0 00:22:01.077 [2024-11-28 11:52:12.919254] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:120584 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:01.077 [2024-11-28 11:52:12.919275] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:62 cdw0:0 sqhd:006d p:0 m:0 dnr:0 00:22:01.077 [2024-11-28 11:52:12.919295] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:92 nsid:1 lba:120592 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:01.077 [2024-11-28 11:52:12.919374] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:92 cdw0:0 sqhd:006e p:0 m:0 dnr:0 00:22:01.077 [2024-11-28 11:52:12.919399] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:120600 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:01.077 [2024-11-28 11:52:12.919415] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:8 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:22:01.077 [2024-11-28 11:52:12.919436] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:120608 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:01.077 [2024-11-28 11:52:12.919451] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:61 cdw0:0 sqhd:0070 p:0 m:0 dnr:0 00:22:01.077 [2024-11-28 11:52:12.919472] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:76 nsid:1 lba:120616 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:01.077 [2024-11-28 11:52:12.919487] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:76 cdw0:0 sqhd:0071 p:0 m:0 dnr:0 00:22:01.077 [2024-11-28 11:52:12.919507] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:86 nsid:1 lba:120624 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:01.077 [2024-11-28 11:52:12.919522] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:86 cdw0:0 sqhd:0072 p:0 m:0 dnr:0 00:22:01.077 [2024-11-28 11:52:12.919549] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:68 nsid:1 lba:120632 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:01.077 [2024-11-28 11:52:12.919565] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:68 cdw0:0 sqhd:0073 p:0 m:0 dnr:0 00:22:01.077 [2024-11-28 11:52:12.931596] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:120640 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:01.077 [2024-11-28 11:52:12.931630] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:2 cdw0:0 sqhd:0074 p:0 m:0 dnr:0 00:22:01.077 [2024-11-28 11:52:12.931654] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:120648 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:01.077 [2024-11-28 11:52:12.931670] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:5 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:22:01.077 [2024-11-28 11:52:12.931691] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:119 nsid:1 lba:120656 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:01.077 [2024-11-28 11:52:12.931706] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:119 cdw0:0 sqhd:0076 p:0 m:0 dnr:0 00:22:01.077 [2024-11-28 11:52:12.931739] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:84 nsid:1 lba:120664 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:01.077 [2024-11-28 11:52:12.931760] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:84 cdw0:0 sqhd:0077 p:0 m:0 dnr:0 00:22:01.077 [2024-11-28 11:52:12.931789] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:80 nsid:1 lba:120672 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:01.077 [2024-11-28 11:52:12.931811] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:80 cdw0:0 sqhd:0078 p:0 m:0 dnr:0 00:22:01.077 [2024-11-28 11:52:12.931849] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:67 nsid:1 lba:120680 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:01.077 [2024-11-28 11:52:12.931872] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:67 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 00:22:01.077 [2024-11-28 11:52:12.931901] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:120176 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:01.077 [2024-11-28 11:52:12.931940] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:0 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:22:01.077 [2024-11-28 11:52:12.931971] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:120184 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:01.077 [2024-11-28 11:52:12.931992] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:57 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:22:01.077 [2024-11-28 11:52:12.932021] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:120192 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:01.077 [2024-11-28 11:52:12.932041] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:32 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:22:01.077 [2024-11-28 11:52:12.932069] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:115 nsid:1 lba:120200 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:01.077 [2024-11-28 11:52:12.932090] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:115 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:22:01.077 [2024-11-28 11:52:12.932120] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:120208 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:01.077 [2024-11-28 11:52:12.932140] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:20 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:22:01.077 [2024-11-28 11:52:12.932169] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:96 nsid:1 lba:120216 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:01.077 [2024-11-28 11:52:12.932189] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:96 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:22:01.077 [2024-11-28 11:52:12.932217] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:87 nsid:1 lba:120224 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:01.077 [2024-11-28 11:52:12.932238] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:87 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:01.077 [2024-11-28 11:52:12.932266] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:77 nsid:1 lba:120232 len:8 SGL TRANSPORT 
DATA BLOCK TRANSPORT 0x0 00:22:01.077 [2024-11-28 11:52:12.932286] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:77 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:01.077 [2024-11-28 11:52:12.932332] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:120240 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:01.077 [2024-11-28 11:52:12.932356] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:29 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:22:01.077 [2024-11-28 11:52:12.932387] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:114 nsid:1 lba:120248 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:01.078 [2024-11-28 11:52:12.932408] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:114 cdw0:0 sqhd:0003 p:0 m:0 dnr:0 00:22:01.078 [2024-11-28 11:52:12.932437] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:89 nsid:1 lba:120256 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:01.078 [2024-11-28 11:52:12.932457] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:89 cdw0:0 sqhd:0004 p:0 m:0 dnr:0 00:22:01.078 [2024-11-28 11:52:12.932486] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:103 nsid:1 lba:120264 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:01.078 [2024-11-28 11:52:12.932507] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:103 cdw0:0 sqhd:0005 p:0 m:0 dnr:0 00:22:01.078 [2024-11-28 11:52:12.932535] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:112 nsid:1 lba:120272 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:01.078 [2024-11-28 11:52:12.932565] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:112 cdw0:0 sqhd:0006 p:0 m:0 dnr:0 00:22:01.078 [2024-11-28 11:52:12.932596] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:120280 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:01.078 [2024-11-28 11:52:12.932616] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:40 cdw0:0 sqhd:0007 p:0 m:0 dnr:0 00:22:01.078 [2024-11-28 11:52:12.932645] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:120288 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:01.078 [2024-11-28 11:52:12.932665] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:19 cdw0:0 sqhd:0008 p:0 m:0 dnr:0 00:22:01.078 [2024-11-28 11:52:12.932693] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:120296 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:01.078 [2024-11-28 11:52:12.932713] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:46 cdw0:0 sqhd:0009 p:0 m:0 dnr:0 00:22:01.078 [2024-11-28 11:52:12.932741] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:120688 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:01.078 [2024-11-28 11:52:12.932761] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:43 cdw0:0 sqhd:000a p:0 m:0 dnr:0 00:22:01.078 [2024-11-28 11:52:12.932790] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:120696 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:01.078 [2024-11-28 11:52:12.932810] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:15 cdw0:0 sqhd:000b p:0 m:0 dnr:0 00:22:01.078 [2024-11-28 11:52:12.932839] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:105 nsid:1 lba:120704 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:01.078 [2024-11-28 11:52:12.932860] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:105 cdw0:0 sqhd:000c p:0 m:0 dnr:0 00:22:01.078 [2024-11-28 11:52:12.932889] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:120712 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:01.078 [2024-11-28 11:52:12.932909] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:36 cdw0:0 sqhd:000d p:0 m:0 dnr:0 00:22:01.078 [2024-11-28 11:52:12.932938] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:118 nsid:1 lba:120720 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:01.078 [2024-11-28 11:52:12.932958] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:118 cdw0:0 sqhd:000e p:0 m:0 dnr:0 00:22:01.078 [2024-11-28 11:52:12.932987] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:72 nsid:1 lba:120728 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:01.078 [2024-11-28 11:52:12.933008] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:72 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:22:01.078 [2024-11-28 11:52:12.933037] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:81 nsid:1 lba:120736 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:01.078 [2024-11-28 11:52:12.933057] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:81 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:22:01.078 [2024-11-28 11:52:12.933086] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:116 nsid:1 lba:120744 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:01.078 [2024-11-28 11:52:12.933106] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:116 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:22:01.078 [2024-11-28 11:52:12.933135] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:120752 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:01.078 [2024-11-28 11:52:12.933155] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:38 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 11:52:31 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@143 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:22:01.078 [2024-11-28 11:52:12.933194] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:120760 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:01.078 [2024-11-28 11:52:12.933215] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:42 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:22:01.078 [2024-11-28 11:52:12.933244] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:120768 len:8 SGL DATA
BLOCK OFFSET 0x0 len:0x1000 00:22:01.078 [2024-11-28 11:52:12.933264] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:16 cdw0:0 sqhd:0014 p:0 m:0 dnr:0 00:22:01.078 [2024-11-28 11:52:12.933308] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:82 nsid:1 lba:120776 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:01.078 [2024-11-28 11:52:12.933333] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:82 cdw0:0 sqhd:0015 p:0 m:0 dnr:0 00:22:01.078 [2024-11-28 11:52:12.933362] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:64 nsid:1 lba:120784 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:01.078 [2024-11-28 11:52:12.933383] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:64 cdw0:0 sqhd:0016 p:0 m:0 dnr:0 00:22:01.078 [2024-11-28 11:52:12.933411] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:120792 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:01.078 [2024-11-28 11:52:12.933431] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:13 cdw0:0 sqhd:0017 p:0 m:0 dnr:0 00:22:01.078 [2024-11-28 11:52:12.933460] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:120800 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:01.078 [2024-11-28 11:52:12.933480] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:1 cdw0:0 sqhd:0018 p:0 m:0 dnr:0 00:22:01.078 [2024-11-28 11:52:12.933509] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:120808 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:01.078 [2024-11-28 11:52:12.933529] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:58 cdw0:0 sqhd:0019 p:0 m:0 dnr:0 00:22:01.078 [2024-11-28 11:52:12.933558] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:120 nsid:1 lba:120304 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:01.078 [2024-11-28 11:52:12.933578] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:120 cdw0:0 sqhd:001a p:0 m:0 dnr:0 00:22:01.078 [2024-11-28 11:52:12.933607] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:120312 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:01.078 [2024-11-28 11:52:12.933627] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:3 cdw0:0 sqhd:001b p:0 m:0 dnr:0 00:22:01.078 [2024-11-28 11:52:12.933656] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:120320 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:01.078 [2024-11-28 11:52:12.933676] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:31 cdw0:0 sqhd:001c p:0 m:0 dnr:0 00:22:01.078 [2024-11-28 11:52:12.933705] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:120328 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:01.078 [2024-11-28 11:52:12.933725] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:27 cdw0:0 sqhd:001d p:0 m:0 dnr:0 00:22:01.078 [2024-11-28 11:52:12.933754] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: READ sqid:1 cid:83 nsid:1 lba:120336 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:01.078 [2024-11-28 11:52:12.933774] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:83 cdw0:0 sqhd:001e p:0 m:0 dnr:0 00:22:01.078 [2024-11-28 11:52:12.933812] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:120344 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:01.078 [2024-11-28 11:52:12.933833] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:4 cdw0:0 sqhd:001f p:0 m:0 dnr:0 00:22:01.078 [2024-11-28 11:52:12.933862] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:99 nsid:1 lba:120352 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:01.078 [2024-11-28 11:52:12.933882] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:99 cdw0:0 sqhd:0020 p:0 m:0 dnr:0 00:22:01.078 [2024-11-28 11:52:12.933910] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:75 nsid:1 lba:120360 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:01.078 [2024-11-28 11:52:12.933930] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:75 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:01.078 [2024-11-28 11:52:12.933958] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:66 nsid:1 lba:120816 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:01.078 [2024-11-28 11:52:12.933979] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:66 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:22:01.078 [2024-11-28 11:52:12.934007] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:74 nsid:1 lba:120824 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:01.078 [2024-11-28 11:52:12.934028] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:74 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:22:01.078 [2024-11-28 11:52:12.934057] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:120832 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:01.078 [2024-11-28 11:52:12.934077] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:11 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:22:01.078 [2024-11-28 11:52:12.934106] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:120840 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:01.078 [2024-11-28 11:52:12.934126] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:17 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:22:01.078 [2024-11-28 11:52:12.934154] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:120848 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:01.078 [2024-11-28 11:52:12.934174] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:39 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:22:01.078 [2024-11-28 11:52:12.934203] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 nsid:1 lba:120856 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:01.078 [2024-11-28 11:52:12.934223] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:108 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:22:01.078 [2024-11-28 
11:52:12.934251] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:95 nsid:1 lba:120864 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:01.078 [2024-11-28 11:52:12.934272] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:95 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:22:01.078 [2024-11-28 11:52:12.934313] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:110 nsid:1 lba:120872 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:01.078 [2024-11-28 11:52:12.934336] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:110 cdw0:0 sqhd:0029 p:0 m:0 dnr:0 00:22:01.079 [2024-11-28 11:52:12.934366] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:71 nsid:1 lba:120880 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:01.079 [2024-11-28 11:52:12.934386] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:71 cdw0:0 sqhd:002a p:0 m:0 dnr:0 00:22:01.079 [2024-11-28 11:52:12.934415] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:85 nsid:1 lba:120888 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:01.079 [2024-11-28 11:52:12.934459] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:85 cdw0:0 sqhd:002b p:0 m:0 dnr:0 00:22:01.079 [2024-11-28 11:52:12.934495] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:120896 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:01.079 [2024-11-28 11:52:12.934516] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:125 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:22:01.079 [2024-11-28 11:52:12.934545] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:120904 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:01.079 [2024-11-28 11:52:12.934567] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:56 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:22:01.079 [2024-11-28 11:52:12.934595] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:111 nsid:1 lba:120912 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:01.079 [2024-11-28 11:52:12.934615] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:111 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:22:01.079 [2024-11-28 11:52:12.934644] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:120920 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:01.079 [2024-11-28 11:52:12.934664] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:33 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:22:01.079 [2024-11-28 11:52:12.934692] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:120928 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:01.079 [2024-11-28 11:52:12.934713] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:30 cdw0:0 sqhd:0030 p:0 m:0 dnr:0 00:22:01.079 [2024-11-28 11:52:12.934742] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:69 nsid:1 lba:120936 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:01.079 [2024-11-28 11:52:12.934762] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 
cid:69 cdw0:0 sqhd:0031 p:0 m:0 dnr:0 00:22:01.079 [2024-11-28 11:52:12.934791] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:120368 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:01.079 [2024-11-28 11:52:12.934811] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:25 cdw0:0 sqhd:0032 p:0 m:0 dnr:0 00:22:01.079 [2024-11-28 11:52:12.934839] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:120376 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:01.079 [2024-11-28 11:52:12.934859] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:9 cdw0:0 sqhd:0033 p:0 m:0 dnr:0 00:22:01.079 [2024-11-28 11:52:12.934888] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:120384 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:01.079 [2024-11-28 11:52:12.934910] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:45 cdw0:0 sqhd:0034 p:0 m:0 dnr:0 00:22:01.079 [2024-11-28 11:52:12.934938] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:120392 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:01.079 [2024-11-28 11:52:12.934958] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:10 cdw0:0 sqhd:0035 p:0 m:0 dnr:0 00:22:01.079 [2024-11-28 11:52:12.934986] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:120400 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:01.079 [2024-11-28 11:52:12.935006] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:28 cdw0:0 sqhd:0036 p:0 m:0 dnr:0 00:22:01.079 [2024-11-28 11:52:12.935035] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:102 nsid:1 lba:120408 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:01.079 [2024-11-28 11:52:12.935064] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:102 cdw0:0 sqhd:0037 p:0 m:0 dnr:0 00:22:01.079 [2024-11-28 11:52:12.935095] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:117 nsid:1 lba:120416 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:01.079 [2024-11-28 11:52:12.935115] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:117 cdw0:0 sqhd:0038 p:0 m:0 dnr:0 00:22:01.079 [2024-11-28 11:52:12.935143] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:120424 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:01.079 [2024-11-28 11:52:12.935163] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:50 cdw0:0 sqhd:0039 p:0 m:0 dnr:0 00:22:01.079 [2024-11-28 11:52:12.935194] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:120944 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:01.079 [2024-11-28 11:52:12.935214] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:12 cdw0:0 sqhd:003a p:0 m:0 dnr:0 00:22:01.079 [2024-11-28 11:52:12.935243] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:123 nsid:1 lba:120952 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:01.079 [2024-11-28 11:52:12.935263] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:123 cdw0:0 sqhd:003b p:0 m:0 dnr:0 00:22:01.079 [2024-11-28 11:52:12.935293] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:97 nsid:1 lba:120960 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:01.079 [2024-11-28 11:52:12.935331] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:97 cdw0:0 sqhd:003c p:0 m:0 dnr:0 00:22:01.079 [2024-11-28 11:52:12.935361] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:106 nsid:1 lba:120968 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:01.079 [2024-11-28 11:52:12.935382] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:106 cdw0:0 sqhd:003d p:0 m:0 dnr:0 00:22:01.079 [2024-11-28 11:52:12.935410] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:120976 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:01.079 [2024-11-28 11:52:12.935430] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:60 cdw0:0 sqhd:003e p:0 m:0 dnr:0 00:22:01.079 [2024-11-28 11:52:12.935459] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:120984 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:01.079 [2024-11-28 11:52:12.935479] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:22 cdw0:0 sqhd:003f p:0 m:0 dnr:0 00:22:01.079 [2024-11-28 11:52:12.935507] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:78 nsid:1 lba:120992 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:01.079 [2024-11-28 11:52:12.935528] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:78 cdw0:0 sqhd:0040 p:0 m:0 dnr:0 00:22:01.079 [2024-11-28 11:52:12.935556] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:121000 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:01.079 [2024-11-28 11:52:12.935575] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:7 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:22:01.079 [2024-11-28 11:52:12.935604] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:65 nsid:1 lba:121008 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:01.079 [2024-11-28 11:52:12.935624] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:65 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:22:01.079 [2024-11-28 11:52:12.935653] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:109 nsid:1 lba:121016 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:01.079 [2024-11-28 11:52:12.935683] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:109 cdw0:0 sqhd:0043 p:0 m:0 dnr:0 00:22:01.079 [2024-11-28 11:52:12.935713] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:121024 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:01.079 [2024-11-28 11:52:12.935734] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:49 cdw0:0 sqhd:0044 p:0 m:0 dnr:0 00:22:01.079 [2024-11-28 11:52:12.935762] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:113 nsid:1 lba:121032 len:8 SGL DATA BLOCK OFFSET 0x0 
len:0x1000 00:22:01.079 [2024-11-28 11:52:12.935783] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:113 cdw0:0 sqhd:0045 p:0 m:0 dnr:0 00:22:01.079 [2024-11-28 11:52:12.935811] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:122 nsid:1 lba:121040 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:01.079 [2024-11-28 11:52:12.935831] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:122 cdw0:0 sqhd:0046 p:0 m:0 dnr:0 00:22:01.079 [2024-11-28 11:52:12.935860] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:121048 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:01.079 [2024-11-28 11:52:12.935881] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:54 cdw0:0 sqhd:0047 p:0 m:0 dnr:0 00:22:01.079 [2024-11-28 11:52:12.936517] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:121056 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:01.079 [2024-11-28 11:52:12.936555] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:54 cdw0:0 sqhd:0048 p:0 m:0 dnr:0 00:22:01.079 8647.33 IOPS, 33.78 MiB/s [2024-11-28T11:52:31.205Z] 8106.88 IOPS, 31.67 MiB/s [2024-11-28T11:52:31.205Z] 7630.00 IOPS, 29.80 MiB/s [2024-11-28T11:52:31.205Z] 7206.11 IOPS, 28.15 MiB/s [2024-11-28T11:52:31.205Z] 7155.42 IOPS, 27.95 MiB/s [2024-11-28T11:52:31.205Z] 7272.05 IOPS, 28.41 MiB/s [2024-11-28T11:52:31.205Z] 7397.90 IOPS, 28.90 MiB/s [2024-11-28T11:52:31.205Z] 7536.68 IOPS, 29.44 MiB/s [2024-11-28T11:52:31.205Z] 7651.87 IOPS, 29.89 MiB/s [2024-11-28T11:52:31.205Z] 7748.46 IOPS, 30.27 MiB/s [2024-11-28T11:52:31.205Z] 7830.36 IOPS, 30.59 MiB/s [2024-11-28T11:52:31.205Z] 7899.35 IOPS, 30.86 MiB/s [2024-11-28T11:52:31.205Z] 7973.85 IOPS, 31.15 MiB/s [2024-11-28T11:52:31.205Z] 8060.89 IOPS, 31.49 MiB/s [2024-11-28T11:52:31.205Z] 8139.72 IOPS, 31.80 MiB/s [2024-11-28T11:52:31.205Z] [2024-11-28 11:52:28.258481] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:109216 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:01.079 [2024-11-28 11:52:28.258536] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:53 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:22:01.080 [2024-11-28 11:52:28.258587] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:111 nsid:1 lba:109232 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:01.080 [2024-11-28 11:52:28.258605] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:111 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:22:01.080 [2024-11-28 11:52:28.258626] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:109248 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:01.080 [2024-11-28 11:52:28.258640] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:30 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:22:01.080 [2024-11-28 11:52:28.258659] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:96 nsid:1 lba:108880 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:01.080 [2024-11-28 11:52:28.258673] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:96 cdw0:0 sqhd:0014 p:0 m:0 dnr:0 00:22:01.080 [2024-11-28 
11:52:28.258691] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:117 nsid:1 lba:108912 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:01.080 [2024-11-28 11:52:28.258704] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:117 cdw0:0 sqhd:0015 p:0 m:0 dnr:0 00:22:01.080 [2024-11-28 11:52:28.258723] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:108944 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:01.080 [2024-11-28 11:52:28.258767] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:4 cdw0:0 sqhd:0016 p:0 m:0 dnr:0 00:22:01.080 [2024-11-28 11:52:28.258788] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:121 nsid:1 lba:109272 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:01.080 [2024-11-28 11:52:28.258801] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:121 cdw0:0 sqhd:0017 p:0 m:0 dnr:0 00:22:01.080 [2024-11-28 11:52:28.258820] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:109288 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:01.080 [2024-11-28 11:52:28.258833] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:32 cdw0:0 sqhd:0018 p:0 m:0 dnr:0 00:22:01.080 [2024-11-28 11:52:28.258851] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:113 nsid:1 lba:109304 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:01.080 [2024-11-28 11:52:28.258864] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:113 cdw0:0 sqhd:0019 p:0 m:0 dnr:0 00:22:01.080 [2024-11-28 11:52:28.258883] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:65 nsid:1 lba:108552 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:01.080 [2024-11-28 11:52:28.258897] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:65 cdw0:0 sqhd:001a p:0 m:0 dnr:0 00:22:01.080 [2024-11-28 11:52:28.258926] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:109328 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:01.080 [2024-11-28 11:52:28.258940] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:55 cdw0:0 sqhd:001b p:0 m:0 dnr:0 00:22:01.080 [2024-11-28 11:52:28.258958] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:109344 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:01.080 [2024-11-28 11:52:28.258970] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:8 cdw0:0 sqhd:001c p:0 m:0 dnr:0 00:22:01.080 [2024-11-28 11:52:28.258988] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:108976 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:01.080 [2024-11-28 11:52:28.259000] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:54 cdw0:0 sqhd:001d p:0 m:0 dnr:0 00:22:01.080 [2024-11-28 11:52:28.259018] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:109008 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:01.080 [2024-11-28 11:52:28.259031] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 
cid:23 cdw0:0 sqhd:001e p:0 m:0 dnr:0 00:22:01.080 [2024-11-28 11:52:28.259048] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:108 nsid:1 lba:109040 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:01.080 [2024-11-28 11:52:28.259061] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:108 cdw0:0 sqhd:001f p:0 m:0 dnr:0 00:22:01.080 [2024-11-28 11:52:28.259079] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:80 nsid:1 lba:109072 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:01.080 [2024-11-28 11:52:28.259092] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:80 cdw0:0 sqhd:0020 p:0 m:0 dnr:0 00:22:01.080 [2024-11-28 11:52:28.259109] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:83 nsid:1 lba:109104 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:01.080 [2024-11-28 11:52:28.259123] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:83 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:01.080 [2024-11-28 11:52:28.259142] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:112 nsid:1 lba:108592 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:01.080 [2024-11-28 11:52:28.259170] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:112 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:22:01.080 [2024-11-28 11:52:28.259190] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:108624 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:01.080 [2024-11-28 11:52:28.259205] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:5 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:22:01.080 [2024-11-28 11:52:28.259224] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:78 nsid:1 lba:108656 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:01.080 [2024-11-28 11:52:28.259238] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:78 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:22:01.080 [2024-11-28 11:52:28.259372] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:109352 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:01.080 [2024-11-28 11:52:28.259394] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:58 cdw0:0 sqhd:002b p:0 m:0 dnr:0 00:22:01.080 [2024-11-28 11:52:28.259415] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:104 nsid:1 lba:109368 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:01.080 [2024-11-28 11:52:28.259430] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:104 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:22:01.080 [2024-11-28 11:52:28.259449] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:68 nsid:1 lba:109384 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:01.080 [2024-11-28 11:52:28.259462] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:68 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:22:01.080 [2024-11-28 11:52:28.259480] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:109400 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:01.080 [2024-11-28 11:52:28.259494] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:63 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:22:01.080 [2024-11-28 11:52:28.259512] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:109416 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:01.080 [2024-11-28 11:52:28.259526] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:48 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:22:01.080 [2024-11-28 11:52:28.259544] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:109432 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:01.080 [2024-11-28 11:52:28.259558] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:34 cdw0:0 sqhd:0030 p:0 m:0 dnr:0 00:22:01.080 [2024-11-28 11:52:28.259577] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:108688 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:01.080 [2024-11-28 11:52:28.259590] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:20 cdw0:0 sqhd:0031 p:0 m:0 dnr:0 00:22:01.080 [2024-11-28 11:52:28.259609] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:91 nsid:1 lba:108720 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:01.080 [2024-11-28 11:52:28.259622] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:91 cdw0:0 sqhd:0032 p:0 m:0 dnr:0 00:22:01.080 [2024-11-28 11:52:28.260684] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:74 nsid:1 lba:109448 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:01.080 [2024-11-28 11:52:28.260717] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:74 cdw0:0 sqhd:0033 p:0 m:0 dnr:0 00:22:01.080 [2024-11-28 11:52:28.260742] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:95 nsid:1 lba:109464 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:01.080 [2024-11-28 11:52:28.260757] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:95 cdw0:0 sqhd:0034 p:0 m:0 dnr:0 00:22:01.080 8205.40 IOPS, 32.05 MiB/s [2024-11-28T11:52:31.206Z] 8264.32 IOPS, 32.28 MiB/s [2024-11-28T11:52:31.206Z] 8310.56 IOPS, 32.46 MiB/s [2024-11-28T11:52:31.206Z] Received shutdown signal, test time was about 32.213553 seconds 00:22:01.080 00:22:01.080 Latency(us) 00:22:01.080 [2024-11-28T11:52:31.206Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:22:01.080 Job: Nvme0n1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096) 00:22:01.080 Verification LBA range: start 0x0 length 0x4000 00:22:01.080 Nvme0n1 : 32.21 8318.71 32.49 0.00 0.00 15361.18 841.54 4057035.87 00:22:01.080 [2024-11-28T11:52:31.206Z] =================================================================================================================== 00:22:01.080 [2024-11-28T11:52:31.206Z] Total : 8318.71 32.49 0.00 0.00 15361.18 841.54 4057035.87 00:22:01.340 11:52:31 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@145 -- # trap - SIGINT SIGTERM EXIT 00:22:01.340 11:52:31 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@147 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/host/try.txt 00:22:01.340 11:52:31 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- 
host/multipath_status.sh@148 -- # nvmftestfini 00:22:01.340 11:52:31 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@516 -- # nvmfcleanup 00:22:01.340 11:52:31 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@121 -- # sync 00:22:01.340 11:52:31 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:22:01.340 11:52:31 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@124 -- # set +e 00:22:01.340 11:52:31 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@125 -- # for i in {1..20} 00:22:01.340 11:52:31 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:22:01.340 rmmod nvme_tcp 00:22:01.340 rmmod nvme_fabrics 00:22:01.340 rmmod nvme_keyring 00:22:01.340 11:52:31 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:22:01.340 11:52:31 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@128 -- # set -e 00:22:01.340 11:52:31 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@129 -- # return 0 00:22:01.340 11:52:31 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@517 -- # '[' -n 93486 ']' 00:22:01.340 11:52:31 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@518 -- # killprocess 93486 00:22:01.340 11:52:31 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@954 -- # '[' -z 93486 ']' 00:22:01.340 11:52:31 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@958 -- # kill -0 93486 00:22:01.340 11:52:31 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@959 -- # uname 00:22:01.340 11:52:31 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:22:01.340 11:52:31 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 93486 00:22:01.340 11:52:31 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:22:01.340 11:52:31 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:22:01.340 killing process with pid 93486 00:22:01.340 11:52:31 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@972 -- # echo 'killing process with pid 93486' 00:22:01.340 11:52:31 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@973 -- # kill 93486 00:22:01.340 11:52:31 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@978 -- # wait 93486 00:22:01.599 11:52:31 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:22:01.599 11:52:31 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:22:01.600 11:52:31 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:22:01.600 11:52:31 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@297 -- # iptr 00:22:01.600 11:52:31 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@791 -- # iptables-save 00:22:01.600 11:52:31 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:22:01.600 11:52:31 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@791 -- # iptables-restore 00:22:01.600 11:52:31 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@298 -- # [[ nvmf_tgt_ns_spdk == 
\n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:22:01.600 11:52:31 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@299 -- # nvmf_veth_fini 00:22:01.600 11:52:31 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@233 -- # ip link set nvmf_init_br nomaster 00:22:01.600 11:52:31 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@234 -- # ip link set nvmf_init_br2 nomaster 00:22:01.600 11:52:31 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@235 -- # ip link set nvmf_tgt_br nomaster 00:22:01.600 11:52:31 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@236 -- # ip link set nvmf_tgt_br2 nomaster 00:22:01.600 11:52:31 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@237 -- # ip link set nvmf_init_br down 00:22:01.600 11:52:31 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@238 -- # ip link set nvmf_init_br2 down 00:22:01.600 11:52:31 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@239 -- # ip link set nvmf_tgt_br down 00:22:01.600 11:52:31 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@240 -- # ip link set nvmf_tgt_br2 down 00:22:01.600 11:52:31 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@241 -- # ip link delete nvmf_br type bridge 00:22:01.860 11:52:31 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@242 -- # ip link delete nvmf_init_if 00:22:01.860 11:52:31 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@243 -- # ip link delete nvmf_init_if2 00:22:01.860 11:52:31 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@244 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:22:01.860 11:52:31 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@245 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:22:01.860 11:52:31 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@246 -- # remove_spdk_ns 00:22:01.860 11:52:31 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:22:01.860 11:52:31 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:22:01.860 11:52:31 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:22:01.860 11:52:31 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@300 -- # return 0 00:22:01.860 00:22:01.860 real 0m37.685s 00:22:01.860 user 2m0.600s 00:22:01.860 sys 0m11.494s 00:22:01.860 11:52:31 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1130 -- # xtrace_disable 00:22:01.860 ************************************ 00:22:01.860 END TEST nvmf_host_multipath_status 00:22:01.860 11:52:31 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@10 -- # set +x 00:22:01.860 ************************************ 00:22:01.860 11:52:31 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@28 -- # run_test nvmf_discovery_remove_ifc /home/vagrant/spdk_repo/spdk/test/nvmf/host/discovery_remove_ifc.sh --transport=tcp 00:22:01.860 11:52:31 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:22:01.860 11:52:31 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1111 -- # xtrace_disable 00:22:01.860 11:52:31 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:22:01.860 ************************************ 00:22:01.860 START TEST nvmf_discovery_remove_ifc 00:22:01.860 ************************************ 
00:22:01.860 11:52:31 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/host/discovery_remove_ifc.sh --transport=tcp 00:22:02.120 * Looking for test storage... 00:22:02.120 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/host 00:22:02.120 11:52:31 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:22:02.120 11:52:31 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:22:02.120 11:52:31 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1693 -- # lcov --version 00:22:02.120 11:52:32 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:22:02.120 11:52:32 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:22:02.120 11:52:32 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@333 -- # local ver1 ver1_l 00:22:02.120 11:52:32 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@334 -- # local ver2 ver2_l 00:22:02.120 11:52:32 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@336 -- # IFS=.-: 00:22:02.120 11:52:32 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@336 -- # read -ra ver1 00:22:02.120 11:52:32 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@337 -- # IFS=.-: 00:22:02.120 11:52:32 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@337 -- # read -ra ver2 00:22:02.120 11:52:32 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@338 -- # local 'op=<' 00:22:02.120 11:52:32 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@340 -- # ver1_l=2 00:22:02.120 11:52:32 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@341 -- # ver2_l=1 00:22:02.120 11:52:32 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:22:02.120 11:52:32 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@344 -- # case "$op" in 00:22:02.120 11:52:32 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@345 -- # : 1 00:22:02.120 11:52:32 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@364 -- # (( v = 0 )) 00:22:02.120 11:52:32 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:22:02.120 11:52:32 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@365 -- # decimal 1 00:22:02.120 11:52:32 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@353 -- # local d=1 00:22:02.120 11:52:32 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:22:02.120 11:52:32 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@355 -- # echo 1 00:22:02.120 11:52:32 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@365 -- # ver1[v]=1 00:22:02.120 11:52:32 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@366 -- # decimal 2 00:22:02.120 11:52:32 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@353 -- # local d=2 00:22:02.120 11:52:32 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:22:02.120 11:52:32 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@355 -- # echo 2 00:22:02.120 11:52:32 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@366 -- # ver2[v]=2 00:22:02.120 11:52:32 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:22:02.120 11:52:32 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:22:02.120 11:52:32 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@368 -- # return 0 00:22:02.120 11:52:32 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:22:02.120 11:52:32 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:22:02.120 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:22:02.120 --rc genhtml_branch_coverage=1 00:22:02.120 --rc genhtml_function_coverage=1 00:22:02.120 --rc genhtml_legend=1 00:22:02.120 --rc geninfo_all_blocks=1 00:22:02.120 --rc geninfo_unexecuted_blocks=1 00:22:02.120 00:22:02.120 ' 00:22:02.120 11:52:32 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:22:02.120 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:22:02.120 --rc genhtml_branch_coverage=1 00:22:02.120 --rc genhtml_function_coverage=1 00:22:02.120 --rc genhtml_legend=1 00:22:02.120 --rc geninfo_all_blocks=1 00:22:02.120 --rc geninfo_unexecuted_blocks=1 00:22:02.120 00:22:02.120 ' 00:22:02.120 11:52:32 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:22:02.120 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:22:02.120 --rc genhtml_branch_coverage=1 00:22:02.120 --rc genhtml_function_coverage=1 00:22:02.120 --rc genhtml_legend=1 00:22:02.120 --rc geninfo_all_blocks=1 00:22:02.120 --rc geninfo_unexecuted_blocks=1 00:22:02.120 00:22:02.120 ' 00:22:02.120 11:52:32 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:22:02.120 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:22:02.120 --rc genhtml_branch_coverage=1 00:22:02.120 --rc genhtml_function_coverage=1 00:22:02.120 --rc genhtml_legend=1 00:22:02.120 --rc geninfo_all_blocks=1 00:22:02.120 --rc geninfo_unexecuted_blocks=1 00:22:02.120 00:22:02.120 ' 00:22:02.120 11:52:32 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@12 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:22:02.120 11:52:32 
nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@7 -- # uname -s 00:22:02.120 11:52:32 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:22:02.120 11:52:32 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:22:02.120 11:52:32 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:22:02.120 11:52:32 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:22:02.120 11:52:32 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:22:02.120 11:52:32 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:22:02.120 11:52:32 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:22:02.120 11:52:32 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:22:02.120 11:52:32 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:22:02.120 11:52:32 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:22:02.120 11:52:32 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:f820f793-c892-4aa4-a8a4-5ed3fda41d6c 00:22:02.120 11:52:32 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@18 -- # NVME_HOSTID=f820f793-c892-4aa4-a8a4-5ed3fda41d6c 00:22:02.120 11:52:32 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:22:02.120 11:52:32 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:22:02.120 11:52:32 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:22:02.120 11:52:32 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:22:02.120 11:52:32 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:22:02.120 11:52:32 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@15 -- # shopt -s extglob 00:22:02.120 11:52:32 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:22:02.120 11:52:32 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:22:02.120 11:52:32 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:22:02.120 11:52:32 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:02.120 11:52:32 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:02.121 11:52:32 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:02.121 11:52:32 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- paths/export.sh@5 -- # export PATH 00:22:02.121 11:52:32 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:02.121 11:52:32 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@51 -- # : 0 00:22:02.121 11:52:32 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:22:02.121 11:52:32 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:22:02.121 11:52:32 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:22:02.121 11:52:32 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:22:02.121 11:52:32 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:22:02.121 11:52:32 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:22:02.121 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:22:02.121 11:52:32 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:22:02.121 11:52:32 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:22:02.121 11:52:32 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@55 -- # have_pci_nics=0 00:22:02.121 11:52:32 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@14 -- # '[' tcp == rdma ']' 00:22:02.121 11:52:32 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@19 
-- # discovery_port=8009 00:22:02.121 11:52:32 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@20 -- # discovery_nqn=nqn.2014-08.org.nvmexpress.discovery 00:22:02.121 11:52:32 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@23 -- # nqn=nqn.2016-06.io.spdk:cnode 00:22:02.121 11:52:32 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@25 -- # host_nqn=nqn.2021-12.io.spdk:test 00:22:02.121 11:52:32 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@26 -- # host_sock=/tmp/host.sock 00:22:02.121 11:52:32 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@39 -- # nvmftestinit 00:22:02.121 11:52:32 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:22:02.121 11:52:32 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:22:02.121 11:52:32 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@476 -- # prepare_net_devs 00:22:02.121 11:52:32 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@438 -- # local -g is_hw=no 00:22:02.121 11:52:32 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@440 -- # remove_spdk_ns 00:22:02.121 11:52:32 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:22:02.121 11:52:32 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:22:02.121 11:52:32 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:22:02.121 11:52:32 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@442 -- # [[ virt != virt ]] 00:22:02.121 11:52:32 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@444 -- # [[ no == yes ]] 00:22:02.121 11:52:32 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@451 -- # [[ virt == phy ]] 00:22:02.121 11:52:32 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@454 -- # [[ virt == phy-fallback ]] 00:22:02.121 11:52:32 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@459 -- # [[ tcp == tcp ]] 00:22:02.121 11:52:32 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@460 -- # nvmf_veth_init 00:22:02.121 11:52:32 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@145 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:22:02.121 11:52:32 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@146 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.2 00:22:02.121 11:52:32 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@147 -- # NVMF_FIRST_TARGET_IP=10.0.0.3 00:22:02.121 11:52:32 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@148 -- # NVMF_SECOND_TARGET_IP=10.0.0.4 00:22:02.121 11:52:32 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@149 -- # NVMF_INITIATOR_IP=10.0.0.1 00:22:02.121 11:52:32 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@150 -- # NVMF_BRIDGE=nvmf_br 00:22:02.121 11:52:32 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@151 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:22:02.121 11:52:32 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@152 -- # NVMF_INITIATOR_INTERFACE2=nvmf_init_if2 00:22:02.121 11:52:32 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@153 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:22:02.121 11:52:32 
nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@154 -- # NVMF_INITIATOR_BRIDGE2=nvmf_init_br2 00:22:02.121 11:52:32 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@155 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:22:02.121 11:52:32 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@156 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:22:02.121 11:52:32 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@157 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:22:02.121 11:52:32 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@158 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:22:02.121 11:52:32 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@159 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:22:02.121 11:52:32 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@160 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:22:02.121 11:52:32 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@162 -- # ip link set nvmf_init_br nomaster 00:22:02.121 Cannot find device "nvmf_init_br" 00:22:02.121 11:52:32 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@162 -- # true 00:22:02.121 11:52:32 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@163 -- # ip link set nvmf_init_br2 nomaster 00:22:02.121 Cannot find device "nvmf_init_br2" 00:22:02.121 11:52:32 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@163 -- # true 00:22:02.121 11:52:32 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@164 -- # ip link set nvmf_tgt_br nomaster 00:22:02.121 Cannot find device "nvmf_tgt_br" 00:22:02.121 11:52:32 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@164 -- # true 00:22:02.121 11:52:32 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@165 -- # ip link set nvmf_tgt_br2 nomaster 00:22:02.121 Cannot find device "nvmf_tgt_br2" 00:22:02.121 11:52:32 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@165 -- # true 00:22:02.121 11:52:32 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@166 -- # ip link set nvmf_init_br down 00:22:02.121 Cannot find device "nvmf_init_br" 00:22:02.121 11:52:32 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@166 -- # true 00:22:02.121 11:52:32 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@167 -- # ip link set nvmf_init_br2 down 00:22:02.121 Cannot find device "nvmf_init_br2" 00:22:02.121 11:52:32 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@167 -- # true 00:22:02.121 11:52:32 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@168 -- # ip link set nvmf_tgt_br down 00:22:02.121 Cannot find device "nvmf_tgt_br" 00:22:02.121 11:52:32 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@168 -- # true 00:22:02.121 11:52:32 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@169 -- # ip link set nvmf_tgt_br2 down 00:22:02.121 Cannot find device "nvmf_tgt_br2" 00:22:02.121 11:52:32 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@169 -- # true 00:22:02.121 11:52:32 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@170 -- # ip link delete nvmf_br type bridge 00:22:02.380 Cannot find device "nvmf_br" 00:22:02.380 11:52:32 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@170 -- # true 00:22:02.380 11:52:32 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@171 -- # ip link delete nvmf_init_if 00:22:02.381 Cannot find device "nvmf_init_if" 00:22:02.381 11:52:32 
nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@171 -- # true 00:22:02.381 11:52:32 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@172 -- # ip link delete nvmf_init_if2 00:22:02.381 Cannot find device "nvmf_init_if2" 00:22:02.381 11:52:32 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@172 -- # true 00:22:02.381 11:52:32 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@173 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:22:02.381 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:22:02.381 11:52:32 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@173 -- # true 00:22:02.381 11:52:32 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@174 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:22:02.381 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:22:02.381 11:52:32 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@174 -- # true 00:22:02.381 11:52:32 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@177 -- # ip netns add nvmf_tgt_ns_spdk 00:22:02.381 11:52:32 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@180 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:22:02.381 11:52:32 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@181 -- # ip link add nvmf_init_if2 type veth peer name nvmf_init_br2 00:22:02.381 11:52:32 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@182 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:22:02.381 11:52:32 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@183 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:22:02.381 11:52:32 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:22:02.381 11:52:32 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@187 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:22:02.381 11:52:32 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@190 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:22:02.381 11:52:32 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@191 -- # ip addr add 10.0.0.2/24 dev nvmf_init_if2 00:22:02.381 11:52:32 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@192 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if 00:22:02.381 11:52:32 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@193 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2 00:22:02.381 11:52:32 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@196 -- # ip link set nvmf_init_if up 00:22:02.381 11:52:32 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@197 -- # ip link set nvmf_init_if2 up 00:22:02.381 11:52:32 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@198 -- # ip link set nvmf_init_br up 00:22:02.381 11:52:32 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@199 -- # ip link set nvmf_init_br2 up 00:22:02.381 11:52:32 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@200 -- # ip link set nvmf_tgt_br up 00:22:02.381 11:52:32 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@201 -- # ip link set nvmf_tgt_br2 up 00:22:02.381 11:52:32 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@202 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:22:02.381 11:52:32 
nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@203 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:22:02.381 11:52:32 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@204 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:22:02.381 11:52:32 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@207 -- # ip link add nvmf_br type bridge 00:22:02.381 11:52:32 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@208 -- # ip link set nvmf_br up 00:22:02.381 11:52:32 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@211 -- # ip link set nvmf_init_br master nvmf_br 00:22:02.381 11:52:32 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@212 -- # ip link set nvmf_init_br2 master nvmf_br 00:22:02.381 11:52:32 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@213 -- # ip link set nvmf_tgt_br master nvmf_br 00:22:02.381 11:52:32 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@214 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:22:02.640 11:52:32 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@217 -- # ipts -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:22:02.640 11:52:32 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT' 00:22:02.640 11:52:32 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@218 -- # ipts -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT 00:22:02.640 11:52:32 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT' 00:22:02.640 11:52:32 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@219 -- # ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:22:02.640 11:52:32 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@790 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT' 00:22:02.640 11:52:32 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@222 -- # ping -c 1 10.0.0.3 00:22:02.640 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:22:02.640 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.080 ms 00:22:02.640 00:22:02.640 --- 10.0.0.3 ping statistics --- 00:22:02.640 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:22:02.640 rtt min/avg/max/mdev = 0.080/0.080/0.080/0.000 ms 00:22:02.640 11:52:32 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@223 -- # ping -c 1 10.0.0.4 00:22:02.640 PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data. 00:22:02.640 64 bytes from 10.0.0.4: icmp_seq=1 ttl=64 time=0.060 ms 00:22:02.640 00:22:02.640 --- 10.0.0.4 ping statistics --- 00:22:02.640 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:22:02.640 rtt min/avg/max/mdev = 0.060/0.060/0.060/0.000 ms 00:22:02.640 11:52:32 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@224 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:22:02.640 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:22:02.640 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.033 ms 00:22:02.640 00:22:02.640 --- 10.0.0.1 ping statistics --- 00:22:02.640 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:22:02.640 rtt min/avg/max/mdev = 0.033/0.033/0.033/0.000 ms 00:22:02.640 11:52:32 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@225 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.2 00:22:02.640 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:22:02.640 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.064 ms 00:22:02.640 00:22:02.640 --- 10.0.0.2 ping statistics --- 00:22:02.640 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:22:02.640 rtt min/avg/max/mdev = 0.064/0.064/0.064/0.000 ms 00:22:02.640 11:52:32 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@227 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:22:02.640 11:52:32 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@461 -- # return 0 00:22:02.640 11:52:32 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:22:02.640 11:52:32 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:22:02.640 11:52:32 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:22:02.640 11:52:32 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:22:02.640 11:52:32 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:22:02.640 11:52:32 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:22:02.640 11:52:32 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:22:02.640 11:52:32 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@40 -- # nvmfappstart -m 0x2 00:22:02.640 11:52:32 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:22:02.640 11:52:32 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@726 -- # xtrace_disable 00:22:02.640 11:52:32 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:22:02.640 11:52:32 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@509 -- # nvmfpid=94348 00:22:02.640 11:52:32 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@510 -- # waitforlisten 94348 00:22:02.640 11:52:32 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@835 -- # '[' -z 94348 ']' 00:22:02.641 11:52:32 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:22:02.641 11:52:32 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@840 -- # local max_retries=100 00:22:02.641 11:52:32 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@508 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:22:02.641 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:22:02.641 11:52:32 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
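The nvmf_veth_init phase traced above builds a small veth/bridge topology with the target side isolated in the nvmf_tgt_ns_spdk namespace, then verifies it with one ping per address before the target application is launched. A condensed sketch of the equivalent commands, reconstructed from the xtrace output rather than taken from common.sh itself (names and addresses as they appear in the trace):

```bash
#!/usr/bin/env bash
# Reconstruction of the nvmf_veth_init steps visible in the trace above.
set -e

ip netns add nvmf_tgt_ns_spdk

# Initiator-side and target-side veth pairs; one end of each stays in the root ns.
ip link add nvmf_init_if  type veth peer name nvmf_init_br
ip link add nvmf_init_if2 type veth peer name nvmf_init_br2
ip link add nvmf_tgt_if   type veth peer name nvmf_tgt_br
ip link add nvmf_tgt_if2  type veth peer name nvmf_tgt_br2

# The target interfaces move into the namespace where nvmf_tgt will run.
ip link set nvmf_tgt_if  netns nvmf_tgt_ns_spdk
ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk

# Addressing: initiators 10.0.0.1/.2 in the root ns, targets 10.0.0.3/.4 in the ns.
ip addr add 10.0.0.1/24 dev nvmf_init_if
ip addr add 10.0.0.2/24 dev nvmf_init_if2
ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if
ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2

ip link set nvmf_init_if up;  ip link set nvmf_init_if2 up
ip link set nvmf_init_br up;  ip link set nvmf_init_br2 up
ip link set nvmf_tgt_br up;   ip link set nvmf_tgt_br2 up
ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up
ip netns exec nvmf_tgt_ns_spdk ip link set lo up

# A Linux bridge ties the root-namespace ends of all four pairs together.
ip link add nvmf_br type bridge
ip link set nvmf_br up
ip link set nvmf_init_br  master nvmf_br
ip link set nvmf_init_br2 master nvmf_br
ip link set nvmf_tgt_br   master nvmf_br
ip link set nvmf_tgt_br2  master nvmf_br

# Tagged iptables rules (the ipts wrapper in the trace) so teardown can later
# strip them by filtering on the SPDK_NVMF comment.
iptables -I INPUT 1 -i nvmf_init_if  -p tcp --dport 4420 -j ACCEPT \
    -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT'
iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT \
    -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT'
iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT \
    -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT'

# Sanity checks mirroring the pings in the log: root ns -> targets, ns -> initiators.
ping -c 1 10.0.0.3 && ping -c 1 10.0.0.4
ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1
ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.2
```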
00:22:02.641 11:52:32 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@844 -- # xtrace_disable 00:22:02.641 11:52:32 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:22:02.641 [2024-11-28 11:52:32.649876] Starting SPDK v25.01-pre git sha1 35cd3e84d / DPDK 24.11.0-rc4 initialization... 00:22:02.641 [2024-11-28 11:52:32.650577] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:22:02.900 [2024-11-28 11:52:32.779231] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.11.0-rc4 is used. There is no support for it in SPDK. Enabled only for validation. 00:22:02.900 [2024-11-28 11:52:32.810816] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:22:02.900 [2024-11-28 11:52:32.851880] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:22:02.900 [2024-11-28 11:52:32.851951] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:22:02.900 [2024-11-28 11:52:32.851966] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:22:02.900 [2024-11-28 11:52:32.851977] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:22:02.900 [2024-11-28 11:52:32.851986] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:22:02.900 [2024-11-28 11:52:32.852459] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:22:02.900 [2024-11-28 11:52:32.916716] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:22:02.900 11:52:32 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:22:02.900 11:52:32 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@868 -- # return 0 00:22:02.900 11:52:32 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:22:02.900 11:52:32 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@732 -- # xtrace_disable 00:22:02.900 11:52:32 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:22:03.159 11:52:33 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:22:03.159 11:52:33 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@43 -- # rpc_cmd 00:22:03.159 11:52:33 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:03.159 11:52:33 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:22:03.159 [2024-11-28 11:52:33.048803] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:22:03.159 [2024-11-28 11:52:33.056976] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 8009 *** 00:22:03.159 null0 00:22:03.159 [2024-11-28 11:52:33.088870] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:22:03.159 11:52:33 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:03.160 11:52:33 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- 
host/discovery_remove_ifc.sh@59 -- # hostpid=94372 00:22:03.160 11:52:33 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@58 -- # /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -m 0x1 -r /tmp/host.sock --wait-for-rpc -L bdev_nvme 00:22:03.160 11:52:33 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@60 -- # waitforlisten 94372 /tmp/host.sock 00:22:03.160 11:52:33 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@835 -- # '[' -z 94372 ']' 00:22:03.160 11:52:33 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@839 -- # local rpc_addr=/tmp/host.sock 00:22:03.160 11:52:33 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@840 -- # local max_retries=100 00:22:03.160 11:52:33 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /tmp/host.sock...' 00:22:03.160 Waiting for process to start up and listen on UNIX domain socket /tmp/host.sock... 00:22:03.160 11:52:33 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@844 -- # xtrace_disable 00:22:03.160 11:52:33 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:22:03.160 [2024-11-28 11:52:33.174757] Starting SPDK v25.01-pre git sha1 35cd3e84d / DPDK 24.11.0-rc4 initialization... 00:22:03.160 [2024-11-28 11:52:33.174889] [ DPDK EAL parameters: nvmf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid94372 ] 00:22:03.419 [2024-11-28 11:52:33.301226] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.11.0-rc4 is used. There is no support for it in SPDK. Enabled only for validation. 
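At this point two SPDK applications are coming up: the target inside the namespace on the default /var/tmp/spdk.sock RPC socket (pid 94348), and a host-side app on /tmp/host.sock (pid 94372) started with --wait-for-rpc so bdev_nvme options can be applied before the framework initializes. A rough equivalent of the two launches is sketched below; the wait_rpc poll is a hypothetical stand-in for the test's waitforlisten helper, and the binary/script paths assume the SPDK repo root as working directory:

```bash
# Hypothetical stand-in for waitforlisten: poll until the RPC socket answers.
# The real helper also verifies that the given pid is still alive.
wait_rpc() {
    local sock=$1
    until scripts/rpc.py -s "$sock" rpc_get_methods &>/dev/null; do sleep 0.2; done
}

# Target side: runs inside the namespace, default RPC socket, core mask 0x2.
ip netns exec nvmf_tgt_ns_spdk ./build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 &
nvmfpid=$!
wait_rpc /var/tmp/spdk.sock

# Host side: same binary, private RPC socket, framework held back by --wait-for-rpc,
# with the bdev_nvme debug log flag enabled (-L bdev_nvme).
./build/bin/nvmf_tgt -m 0x1 -r /tmp/host.sock --wait-for-rpc -L bdev_nvme &
hostpid=$!
wait_rpc /tmp/host.sock
```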
00:22:03.419 [2024-11-28 11:52:33.334229] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:22:03.419 [2024-11-28 11:52:33.371595] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:22:03.419 11:52:33 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:22:03.419 11:52:33 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@868 -- # return 0 00:22:03.419 11:52:33 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@62 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; killprocess $hostpid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:22:03.419 11:52:33 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@65 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_set_options -e 1 00:22:03.419 11:52:33 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:03.419 11:52:33 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:22:03.419 11:52:33 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:03.419 11:52:33 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@66 -- # rpc_cmd -s /tmp/host.sock framework_start_init 00:22:03.419 11:52:33 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:03.419 11:52:33 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:22:03.419 [2024-11-28 11:52:33.504603] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:22:03.677 11:52:33 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:03.677 11:52:33 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@69 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.3 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test --ctrlr-loss-timeout-sec 2 --reconnect-delay-sec 1 --fast-io-fail-timeout-sec 1 --wait-for-attach 00:22:03.677 11:52:33 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:03.677 11:52:33 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:22:04.613 [2024-11-28 11:52:34.561939] bdev_nvme.c:7484:discovery_attach_cb: *INFO*: Discovery[10.0.0.3:8009] discovery ctrlr attached 00:22:04.613 [2024-11-28 11:52:34.561964] bdev_nvme.c:7570:discovery_poller: *INFO*: Discovery[10.0.0.3:8009] discovery ctrlr connected 00:22:04.613 [2024-11-28 11:52:34.561980] bdev_nvme.c:7447:get_discovery_log_page: *INFO*: Discovery[10.0.0.3:8009] sent discovery log page command 00:22:04.613 [2024-11-28 11:52:34.567976] bdev_nvme.c:7413:discovery_log_page_cb: *INFO*: Discovery[10.0.0.3:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.3:4420 new subsystem nvme0 00:22:04.613 [2024-11-28 11:52:34.622274] bdev_nvme.c:5636:nvme_ctrlr_create_done: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] ctrlr was created to 10.0.0.3:4420 00:22:04.613 [2024-11-28 11:52:34.623388] bdev_nvme.c:1985:bdev_nvme_create_qpair: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Connecting qpair 0x10ff320:1 started. 
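The sequence just logged — discovery ctrlr attached, log page fetched, nvme0 created against 10.0.0.3:4420 — is driven by a single RPC on the host socket. Issued through rpc.py directly it would look roughly as follows (every flag is copied from the trace); the short loss/reconnect timeouts are what make the later interface-removal phase converge in a few seconds:

```bash
# bdev_nvme options exactly as the trace sets them (-e 1), then release the
# framework that --wait-for-rpc held back.
scripts/rpc.py -s /tmp/host.sock bdev_nvme_set_options -e 1
scripts/rpc.py -s /tmp/host.sock framework_start_init

# Attach to the discovery service on the target and auto-attach every NVM
# subsystem it reports; --wait-for-attach makes the RPC return only once the
# nvme0 controller (and its nvme0n1 bdev) exists.
scripts/rpc.py -s /tmp/host.sock bdev_nvme_start_discovery \
    -b nvme -t tcp -a 10.0.0.3 -s 8009 -f ipv4 \
    -q nqn.2021-12.io.spdk:test \
    --ctrlr-loss-timeout-sec 2 \
    --reconnect-delay-sec 1 \
    --fast-io-fail-timeout-sec 1 \
    --wait-for-attach
```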
00:22:04.613 [2024-11-28 11:52:34.625254] bdev_nvme.c:8280:bdev_nvme_readv: *DEBUG*: read 8 blocks with offset 0 00:22:04.613 [2024-11-28 11:52:34.625349] bdev_nvme.c:8280:bdev_nvme_readv: *DEBUG*: read 1 blocks with offset 0 00:22:04.613 [2024-11-28 11:52:34.625376] bdev_nvme.c:8280:bdev_nvme_readv: *DEBUG*: read 64 blocks with offset 0 00:22:04.613 [2024-11-28 11:52:34.625393] bdev_nvme.c:7303:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.3:8009] attach nvme0 done 00:22:04.613 [2024-11-28 11:52:34.625427] bdev_nvme.c:7262:discovery_remove_controllers: *INFO*: Discovery[10.0.0.3:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.3:4420 found again 00:22:04.613 11:52:34 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:04.613 11:52:34 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@72 -- # wait_for_bdev nvme0n1 00:22:04.613 11:52:34 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:22:04.613 11:52:34 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:22:04.613 11:52:34 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:22:04.613 11:52:34 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:04.613 11:52:34 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:22:04.613 [2024-11-28 11:52:34.630514] bdev_nvme.c:1791:bdev_nvme_disconnected_qpair_cb: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] qpair 0x10ff320 was disconnected and freed. delete nvme_qpair. 00:22:04.613 11:52:34 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:22:04.613 11:52:34 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:22:04.613 11:52:34 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:04.613 11:52:34 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != \n\v\m\e\0\n\1 ]] 00:22:04.613 11:52:34 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@75 -- # ip netns exec nvmf_tgt_ns_spdk ip addr del 10.0.0.3/24 dev nvmf_tgt_if 00:22:04.613 11:52:34 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@76 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if down 00:22:04.613 11:52:34 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@79 -- # wait_for_bdev '' 00:22:04.613 11:52:34 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:22:04.613 11:52:34 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:22:04.613 11:52:34 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:22:04.613 11:52:34 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:22:04.613 11:52:34 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:04.613 11:52:34 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:22:04.613 11:52:34 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:22:04.613 11:52:34 
nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:04.872 11:52:34 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:22:04.872 11:52:34 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:22:05.805 11:52:35 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:22:05.805 11:52:35 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:22:05.805 11:52:35 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:05.805 11:52:35 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:22:05.805 11:52:35 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:22:05.805 11:52:35 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:22:05.805 11:52:35 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:22:05.805 11:52:35 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:05.805 11:52:35 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:22:05.805 11:52:35 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:22:06.736 11:52:36 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:22:06.736 11:52:36 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:22:06.736 11:52:36 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:22:06.736 11:52:36 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:22:06.736 11:52:36 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:06.736 11:52:36 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:22:06.736 11:52:36 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:22:06.736 11:52:36 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:06.994 11:52:36 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:22:06.994 11:52:36 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:22:07.926 11:52:37 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:22:07.926 11:52:37 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:22:07.926 11:52:37 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:22:07.926 11:52:37 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:07.926 11:52:37 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:22:07.926 11:52:37 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:22:07.926 11:52:37 
nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:22:07.926 11:52:37 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:07.926 11:52:37 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:22:07.926 11:52:37 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:22:08.860 11:52:38 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:22:08.860 11:52:38 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:22:08.860 11:52:38 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:22:08.860 11:52:38 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:08.860 11:52:38 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:22:08.860 11:52:38 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:22:08.860 11:52:38 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:22:08.860 11:52:38 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:09.118 11:52:38 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:22:09.118 11:52:38 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:22:10.056 11:52:39 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:22:10.056 11:52:40 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:22:10.056 11:52:40 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:22:10.056 11:52:40 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:10.056 11:52:40 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:22:10.056 11:52:40 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:22:10.056 11:52:40 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:22:10.056 11:52:40 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:10.056 11:52:40 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:22:10.056 11:52:40 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:22:10.056 [2024-11-28 11:52:40.052845] /home/vagrant/spdk_repo/spdk/include/spdk_internal/nvme_tcp.h: 421:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 110: Connection timed out 00:22:10.056 [2024-11-28 11:52:40.053052] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:22:10.056 [2024-11-28 11:52:40.053070] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:10.056 [2024-11-28 11:52:40.053082] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 
nsid:0 cdw10:00000000 cdw11:00000000 00:22:10.056 [2024-11-28 11:52:40.053090] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:10.056 [2024-11-28 11:52:40.053099] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:22:10.056 [2024-11-28 11:52:40.053107] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:10.056 [2024-11-28 11:52:40.053117] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:22:10.056 [2024-11-28 11:52:40.053125] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:10.056 [2024-11-28 11:52:40.053135] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: KEEP ALIVE (18) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 00:22:10.056 [2024-11-28 11:52:40.053143] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:10.056 [2024-11-28 11:52:40.053151] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10dae50 is same with the state(6) to be set 00:22:10.056 [2024-11-28 11:52:40.062844] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x10dae50 (9): Bad file descriptor 00:22:10.056 [2024-11-28 11:52:40.072863] bdev_nvme.c:2545:bdev_nvme_reset_destroy_qpairs: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Delete qpairs for reset. 00:22:10.056 [2024-11-28 11:52:40.072885] bdev_nvme.c:2533:bdev_nvme_reset_destroy_qpair_done: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] qpairs were deleted. 00:22:10.056 [2024-11-28 11:52:40.072891] bdev_nvme.c:2129:nvme_ctrlr_disconnect: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start disconnecting ctrlr. 00:22:10.056 [2024-11-28 11:52:40.072896] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 1] resetting controller 00:22:10.056 [2024-11-28 11:52:40.072929] bdev_nvme.c:2517:bdev_nvme_reconnect_ctrlr: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start reconnecting ctrlr. 
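The timeout and reconnect messages above are the direct result of the step taken earlier at host/discovery_remove_ifc.sh@75-76: the listener's address was removed and the target-side veth taken down inside the namespace. In isolation, and with the expected host-side consequences given the timeouts chosen at discovery start (a summary of what the log shows, not additional script logic):

```bash
# The trigger for the errors above: drop the listener address and take the
# target-side interface down inside the namespace.
ip netns exec nvmf_tgt_ns_spdk ip addr del 10.0.0.3/24 dev nvmf_tgt_if
ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if down

# Expected host-side sequence with reconnect-delay-sec=1, ctrlr-loss-timeout-sec=2:
#   - in-flight reads fail with "Connection timed out" (errno 110) on the qpair
#   - bdev_nvme deletes the qpairs and retries the connection once per second
#   - after the loss timeout the controller is failed and nvme0n1 is removed
```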
00:22:10.993 11:52:41 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:22:10.993 11:52:41 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:22:10.993 11:52:41 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:22:10.993 11:52:41 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:10.993 11:52:41 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:22:10.993 11:52:41 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:22:10.993 11:52:41 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:22:10.993 [2024-11-28 11:52:41.091402] uring.c: 664:uring_sock_create: *ERROR*: connect() failed, errno = 110 00:22:10.993 [2024-11-28 11:52:41.091730] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10dae50 with addr=10.0.0.3, port=4420 00:22:10.993 [2024-11-28 11:52:41.092074] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10dae50 is same with the state(6) to be set 00:22:10.993 [2024-11-28 11:52:41.092403] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x10dae50 (9): Bad file descriptor 00:22:10.993 [2024-11-28 11:52:41.093187] bdev_nvme.c:3168:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 1] Unable to perform failover, already in progress. 00:22:10.993 [2024-11-28 11:52:41.093268] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Ctrlr is in error state 00:22:10.993 [2024-11-28 11:52:41.093319] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] controller reinitialization failed 00:22:10.993 [2024-11-28 11:52:41.093343] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] in failed state. 00:22:10.993 [2024-11-28 11:52:41.093362] bdev_nvme.c:2507:bdev_nvme_reconnect_ctrlr_poll: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] ctrlr could not be connected. 00:22:10.993 [2024-11-28 11:52:41.093376] bdev_nvme.c:2274:bdev_nvme_reset_ctrlr_complete: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Clear pending resets. 00:22:10.993 [2024-11-28 11:52:41.093387] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Resetting controller failed. 00:22:10.993 [2024-11-28 11:52:41.093406] bdev_nvme.c:2129:nvme_ctrlr_disconnect: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start disconnecting ctrlr. 00:22:10.993 [2024-11-28 11:52:41.093418] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 1] resetting controller 00:22:10.993 11:52:41 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:11.252 11:52:41 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:22:11.252 11:52:41 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:22:12.190 [2024-11-28 11:52:42.093477] bdev_nvme.c:2517:bdev_nvme_reconnect_ctrlr: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start reconnecting ctrlr. 00:22:12.190 [2024-11-28 11:52:42.093518] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] in failed state. 
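The repeated bdev_get_bdevs / jq / sort / xargs runs interleaved through this phase are the test's wait_for_bdev polling: it re-reads the bdev list once per second until it matches the expected value (the empty string while waiting for nvme0n1 to disappear, nvme1n1 later). A minimal re-implementation of that helper pair, assuming scripts/rpc.py and jq are available; the function names are the test's own but the bodies are a reconstruction, not the script source:

```bash
# Flatten the current bdev list on the host socket into a single sorted line.
get_bdev_list() {
    scripts/rpc.py -s /tmp/host.sock bdev_get_bdevs | jq -r '.[].name' | sort | xargs
}

# Poll until the flattened list equals the expected string; an empty expected
# string means "wait until no bdevs are left".
wait_for_bdev() {
    local expected=$1
    while [[ "$(get_bdev_list)" != "$expected" ]]; do
        sleep 1
    done
}

wait_for_bdev ''    # block until nvme0n1 has been torn down
```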
00:22:12.190 [2024-11-28 11:52:42.093546] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Ctrlr is in error state 00:22:12.190 [2024-11-28 11:52:42.093558] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] controller reinitialization failed 00:22:12.190 [2024-11-28 11:52:42.093569] nvme_ctrlr.c:1098:nvme_ctrlr_fail: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 1] already in failed state 00:22:12.190 [2024-11-28 11:52:42.093578] bdev_nvme.c:2507:bdev_nvme_reconnect_ctrlr_poll: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] ctrlr could not be connected. 00:22:12.190 [2024-11-28 11:52:42.093586] bdev_nvme.c:2274:bdev_nvme_reset_ctrlr_complete: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Clear pending resets. 00:22:12.190 [2024-11-28 11:52:42.093591] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Resetting controller failed. 00:22:12.190 [2024-11-28 11:52:42.093627] bdev_nvme.c:7235:remove_discovery_entry: *INFO*: Discovery[10.0.0.3:8009] Remove discovery entry: nqn.2016-06.io.spdk:cnode0:10.0.0.3:4420 00:22:12.190 [2024-11-28 11:52:42.093670] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:22:12.190 [2024-11-28 11:52:42.093686] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:12.190 [2024-11-28 11:52:42.093701] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:22:12.190 [2024-11-28 11:52:42.093710] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:12.190 [2024-11-28 11:52:42.093720] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:22:12.190 [2024-11-28 11:52:42.093729] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:12.190 [2024-11-28 11:52:42.093739] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:22:12.190 [2024-11-28 11:52:42.093748] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:12.190 [2024-11-28 11:52:42.093758] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: KEEP ALIVE (18) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 00:22:12.190 [2024-11-28 11:52:42.093767] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:12.190 [2024-11-28 11:52:42.093776] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2014-08.org.nvmexpress.discovery, 1] in failed state. 
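While the controller is failing over, the state is also observable from the host RPC socket; the traced test only inspects the bdev list, but the controller and discovery state could be checked directly with standard bdev_nvme RPCs (an optional aside, not part of the test flow):

```bash
# Optional inspection commands, not issued by the traced script:
scripts/rpc.py -s /tmp/host.sock bdev_nvme_get_controllers    # nvme0 state while reconnecting
scripts/rpc.py -s /tmp/host.sock bdev_nvme_get_discovery_info # discovery poller status
scripts/rpc.py -s /tmp/host.sock bdev_get_bdevs               # remaining namespace bdevs
```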
00:22:12.190 [2024-11-28 11:52:42.094316] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x10c9390 (9): Bad file descriptor 00:22:12.190 [2024-11-28 11:52:42.095333] nvme_fabric.c: 214:nvme_fabric_prop_get_cmd_async: *ERROR*: Failed to send Property Get fabrics command 00:22:12.190 [2024-11-28 11:52:42.095358] nvme_ctrlr.c:1217:nvme_ctrlr_shutdown_async: *ERROR*: [nqn.2014-08.org.nvmexpress.discovery, 1] Failed to read the CC register 00:22:12.190 11:52:42 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:22:12.190 11:52:42 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:22:12.190 11:52:42 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:12.190 11:52:42 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:22:12.190 11:52:42 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:22:12.190 11:52:42 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:22:12.190 11:52:42 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:22:12.190 11:52:42 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:12.190 11:52:42 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ '' != '' ]] 00:22:12.190 11:52:42 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@82 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if 00:22:12.190 11:52:42 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@83 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:22:12.190 11:52:42 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@86 -- # wait_for_bdev nvme1n1 00:22:12.190 11:52:42 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:22:12.190 11:52:42 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:22:12.190 11:52:42 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:12.190 11:52:42 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:22:12.190 11:52:42 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:22:12.190 11:52:42 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:22:12.190 11:52:42 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:22:12.190 11:52:42 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:12.190 11:52:42 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ '' != \n\v\m\e\1\n\1 ]] 00:22:12.190 11:52:42 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:22:13.569 11:52:43 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:22:13.569 11:52:43 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:22:13.569 11:52:43 
nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:13.569 11:52:43 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:22:13.569 11:52:43 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:22:13.569 11:52:43 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:22:13.570 11:52:43 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:22:13.570 11:52:43 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:13.570 11:52:43 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ '' != \n\v\m\e\1\n\1 ]] 00:22:13.570 11:52:43 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:22:14.137 [2024-11-28 11:52:44.106585] bdev_nvme.c:7484:discovery_attach_cb: *INFO*: Discovery[10.0.0.3:8009] discovery ctrlr attached 00:22:14.137 [2024-11-28 11:52:44.106752] bdev_nvme.c:7570:discovery_poller: *INFO*: Discovery[10.0.0.3:8009] discovery ctrlr connected 00:22:14.137 [2024-11-28 11:52:44.106787] bdev_nvme.c:7447:get_discovery_log_page: *INFO*: Discovery[10.0.0.3:8009] sent discovery log page command 00:22:14.137 [2024-11-28 11:52:44.112627] bdev_nvme.c:7413:discovery_log_page_cb: *INFO*: Discovery[10.0.0.3:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.3:4420 new subsystem nvme1 00:22:14.137 [2024-11-28 11:52:44.166906] bdev_nvme.c:5636:nvme_ctrlr_create_done: *INFO*: [nqn.2016-06.io.spdk:cnode0, 2] ctrlr was created to 10.0.0.3:4420 00:22:14.137 [2024-11-28 11:52:44.167895] bdev_nvme.c:1985:bdev_nvme_create_qpair: *INFO*: [nqn.2016-06.io.spdk:cnode0, 2] Connecting qpair 0x10b5a50:1 started. 00:22:14.137 [2024-11-28 11:52:44.169308] bdev_nvme.c:8280:bdev_nvme_readv: *DEBUG*: read 8 blocks with offset 0 00:22:14.137 [2024-11-28 11:52:44.169492] bdev_nvme.c:8280:bdev_nvme_readv: *DEBUG*: read 1 blocks with offset 0 00:22:14.137 [2024-11-28 11:52:44.169562] bdev_nvme.c:8280:bdev_nvme_readv: *DEBUG*: read 64 blocks with offset 0 00:22:14.137 [2024-11-28 11:52:44.169677] bdev_nvme.c:7303:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.3:8009] attach nvme1 done 00:22:14.137 [2024-11-28 11:52:44.169742] bdev_nvme.c:7262:discovery_remove_controllers: *INFO*: Discovery[10.0.0.3:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.3:4420 found again 00:22:14.137 [2024-11-28 11:52:44.175419] bdev_nvme.c:1791:bdev_nvme_disconnected_qpair_cb: *INFO*: [nqn.2016-06.io.spdk:cnode0, 2] qpair 0x10b5a50 was disconnected and freed. delete nvme_qpair. 
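Restoring the interface is the mirror image of the removal, and because the discovery poller on /tmp/host.sock is still running, no new RPC is needed: it re-reads the discovery log page, re-creates the controller as nvme1, and the namespace bdev reappears as nvme1n1, which is what the wait that follows confirms. In sketch form, with the commands as in the trace and the wait helper as reconstructed earlier:

```bash
# host/discovery_remove_ifc.sh@82-86 in the trace: bring the target interface
# back and let the still-running discovery service re-attach the subsystem.
ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if
ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up

wait_for_bdev nvme1n1   # a new controller instance, so the bdev name increments
```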
00:22:14.396 11:52:44 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:22:14.396 11:52:44 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:22:14.396 11:52:44 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:22:14.396 11:52:44 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:22:14.396 11:52:44 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:22:14.396 11:52:44 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:14.396 11:52:44 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:22:14.396 11:52:44 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:14.396 11:52:44 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme1n1 != \n\v\m\e\1\n\1 ]] 00:22:14.396 11:52:44 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@88 -- # trap - SIGINT SIGTERM EXIT 00:22:14.396 11:52:44 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@90 -- # killprocess 94372 00:22:14.396 11:52:44 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@954 -- # '[' -z 94372 ']' 00:22:14.396 11:52:44 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@958 -- # kill -0 94372 00:22:14.396 11:52:44 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@959 -- # uname 00:22:14.396 11:52:44 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:22:14.396 11:52:44 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 94372 00:22:14.396 killing process with pid 94372 00:22:14.396 11:52:44 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:22:14.396 11:52:44 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:22:14.396 11:52:44 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@972 -- # echo 'killing process with pid 94372' 00:22:14.396 11:52:44 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@973 -- # kill 94372 00:22:14.396 11:52:44 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@978 -- # wait 94372 00:22:14.655 11:52:44 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@91 -- # nvmftestfini 00:22:14.655 11:52:44 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@516 -- # nvmfcleanup 00:22:14.655 11:52:44 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@121 -- # sync 00:22:14.655 11:52:44 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:22:14.655 11:52:44 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@124 -- # set +e 00:22:14.655 11:52:44 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@125 -- # for i in {1..20} 00:22:14.655 11:52:44 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:22:14.655 rmmod nvme_tcp 00:22:14.655 rmmod nvme_fabrics 00:22:14.655 rmmod nvme_keyring 00:22:14.655 11:52:44 
nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:22:14.655 11:52:44 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@128 -- # set -e 00:22:14.655 11:52:44 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@129 -- # return 0 00:22:14.655 11:52:44 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@517 -- # '[' -n 94348 ']' 00:22:14.655 11:52:44 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@518 -- # killprocess 94348 00:22:14.655 11:52:44 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@954 -- # '[' -z 94348 ']' 00:22:14.655 11:52:44 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@958 -- # kill -0 94348 00:22:14.655 11:52:44 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@959 -- # uname 00:22:14.655 11:52:44 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:22:14.655 11:52:44 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 94348 00:22:14.655 killing process with pid 94348 00:22:14.655 11:52:44 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:22:14.655 11:52:44 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:22:14.655 11:52:44 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@972 -- # echo 'killing process with pid 94348' 00:22:14.655 11:52:44 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@973 -- # kill 94348 00:22:14.655 11:52:44 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@978 -- # wait 94348 00:22:14.914 11:52:44 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:22:14.914 11:52:44 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:22:14.914 11:52:44 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:22:14.914 11:52:44 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@297 -- # iptr 00:22:14.914 11:52:44 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@791 -- # iptables-save 00:22:14.914 11:52:44 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:22:14.914 11:52:44 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@791 -- # iptables-restore 00:22:14.914 11:52:44 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@298 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:22:14.914 11:52:44 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@299 -- # nvmf_veth_fini 00:22:14.914 11:52:44 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@233 -- # ip link set nvmf_init_br nomaster 00:22:14.914 11:52:44 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@234 -- # ip link set nvmf_init_br2 nomaster 00:22:14.914 11:52:44 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@235 -- # ip link set nvmf_tgt_br nomaster 00:22:14.914 11:52:44 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@236 -- # ip link set nvmf_tgt_br2 nomaster 00:22:14.914 11:52:44 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@237 -- # ip link set nvmf_init_br down 00:22:14.914 11:52:44 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- 
nvmf/common.sh@238 -- # ip link set nvmf_init_br2 down 00:22:14.914 11:52:45 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@239 -- # ip link set nvmf_tgt_br down 00:22:14.914 11:52:45 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@240 -- # ip link set nvmf_tgt_br2 down 00:22:14.914 11:52:45 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@241 -- # ip link delete nvmf_br type bridge 00:22:15.172 11:52:45 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@242 -- # ip link delete nvmf_init_if 00:22:15.172 11:52:45 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@243 -- # ip link delete nvmf_init_if2 00:22:15.173 11:52:45 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@244 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:22:15.173 11:52:45 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@245 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:22:15.173 11:52:45 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@246 -- # remove_spdk_ns 00:22:15.173 11:52:45 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:22:15.173 11:52:45 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:22:15.173 11:52:45 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:22:15.173 11:52:45 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@300 -- # return 0 00:22:15.173 00:22:15.173 real 0m13.268s 00:22:15.173 user 0m22.269s 00:22:15.173 sys 0m2.670s 00:22:15.173 11:52:45 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1130 -- # xtrace_disable 00:22:15.173 11:52:45 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:22:15.173 ************************************ 00:22:15.173 END TEST nvmf_discovery_remove_ifc 00:22:15.173 ************************************ 00:22:15.173 11:52:45 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@29 -- # run_test nvmf_identify_kernel_target /home/vagrant/spdk_repo/spdk/test/nvmf/host/identify_kernel_nvmf.sh --transport=tcp 00:22:15.173 11:52:45 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:22:15.173 11:52:45 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1111 -- # xtrace_disable 00:22:15.173 11:52:45 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:22:15.173 ************************************ 00:22:15.173 START TEST nvmf_identify_kernel_target 00:22:15.173 ************************************ 00:22:15.173 11:52:45 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/host/identify_kernel_nvmf.sh --transport=tcp 00:22:15.433 * Looking for test storage... 
00:22:15.433 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/host 00:22:15.433 11:52:45 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:22:15.433 11:52:45 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1693 -- # lcov --version 00:22:15.433 11:52:45 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:22:15.433 11:52:45 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:22:15.433 11:52:45 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:22:15.433 11:52:45 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@333 -- # local ver1 ver1_l 00:22:15.433 11:52:45 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@334 -- # local ver2 ver2_l 00:22:15.433 11:52:45 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@336 -- # IFS=.-: 00:22:15.433 11:52:45 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@336 -- # read -ra ver1 00:22:15.433 11:52:45 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@337 -- # IFS=.-: 00:22:15.433 11:52:45 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@337 -- # read -ra ver2 00:22:15.433 11:52:45 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@338 -- # local 'op=<' 00:22:15.433 11:52:45 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@340 -- # ver1_l=2 00:22:15.433 11:52:45 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@341 -- # ver2_l=1 00:22:15.433 11:52:45 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:22:15.433 11:52:45 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@344 -- # case "$op" in 00:22:15.433 11:52:45 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@345 -- # : 1 00:22:15.433 11:52:45 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@364 -- # (( v = 0 )) 00:22:15.433 11:52:45 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:22:15.433 11:52:45 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@365 -- # decimal 1 00:22:15.433 11:52:45 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@353 -- # local d=1 00:22:15.433 11:52:45 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:22:15.433 11:52:45 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@355 -- # echo 1 00:22:15.433 11:52:45 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@365 -- # ver1[v]=1 00:22:15.433 11:52:45 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@366 -- # decimal 2 00:22:15.433 11:52:45 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@353 -- # local d=2 00:22:15.433 11:52:45 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:22:15.433 11:52:45 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@355 -- # echo 2 00:22:15.433 11:52:45 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@366 -- # ver2[v]=2 00:22:15.433 11:52:45 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:22:15.433 11:52:45 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:22:15.433 11:52:45 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@368 -- # return 0 00:22:15.433 11:52:45 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:22:15.433 11:52:45 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:22:15.433 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:22:15.433 --rc genhtml_branch_coverage=1 00:22:15.434 --rc genhtml_function_coverage=1 00:22:15.434 --rc genhtml_legend=1 00:22:15.434 --rc geninfo_all_blocks=1 00:22:15.434 --rc geninfo_unexecuted_blocks=1 00:22:15.434 00:22:15.434 ' 00:22:15.434 11:52:45 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:22:15.434 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:22:15.434 --rc genhtml_branch_coverage=1 00:22:15.434 --rc genhtml_function_coverage=1 00:22:15.434 --rc genhtml_legend=1 00:22:15.434 --rc geninfo_all_blocks=1 00:22:15.434 --rc geninfo_unexecuted_blocks=1 00:22:15.434 00:22:15.434 ' 00:22:15.434 11:52:45 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:22:15.434 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:22:15.434 --rc genhtml_branch_coverage=1 00:22:15.434 --rc genhtml_function_coverage=1 00:22:15.434 --rc genhtml_legend=1 00:22:15.434 --rc geninfo_all_blocks=1 00:22:15.434 --rc geninfo_unexecuted_blocks=1 00:22:15.434 00:22:15.434 ' 00:22:15.434 11:52:45 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:22:15.434 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:22:15.434 --rc genhtml_branch_coverage=1 00:22:15.434 --rc genhtml_function_coverage=1 00:22:15.434 --rc genhtml_legend=1 00:22:15.434 --rc geninfo_all_blocks=1 00:22:15.434 --rc geninfo_unexecuted_blocks=1 00:22:15.434 00:22:15.434 ' 00:22:15.434 11:52:45 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 
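(Editor's note on the trace above: this is the harness's lcov version gate. lt 1.15 2 calls cmp_versions, which splits both version strings on '.', '-' and ':' via IFS=.-: and compares the fields numerically left to right, so the branch/function-coverage LCOV_OPTS are only exported for pre-2.x lcov. A rough standalone sketch of that comparison idea, with illustrative names rather than the harness's exact helpers:

# version_lt A B -> exit 0 if version A sorts strictly before version B
version_lt() {
    local IFS=.-:                # split on dots, dashes and colons, like cmp_versions
    read -ra a <<< "$1"
    read -ra b <<< "$2"
    local i x y
    for ((i = 0; i < ${#a[@]} || i < ${#b[@]}; i++)); do
        x=${a[i]:-0} y=${b[i]:-0}      # missing fields compare as 0
        (( x < y )) && return 0
        (( x > y )) && return 1
    done
    return 1                     # equal versions are not "less than"
}
version_lt 1.15 2 && echo "lcov older than 2.x: pass --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1"

End of editor's note.)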
00:22:15.434 11:52:45 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@7 -- # uname -s 00:22:15.434 11:52:45 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:22:15.434 11:52:45 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:22:15.434 11:52:45 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:22:15.434 11:52:45 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:22:15.434 11:52:45 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:22:15.434 11:52:45 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:22:15.434 11:52:45 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:22:15.434 11:52:45 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:22:15.434 11:52:45 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:22:15.434 11:52:45 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:22:15.434 11:52:45 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:f820f793-c892-4aa4-a8a4-5ed3fda41d6c 00:22:15.434 11:52:45 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@18 -- # NVME_HOSTID=f820f793-c892-4aa4-a8a4-5ed3fda41d6c 00:22:15.434 11:52:45 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:22:15.434 11:52:45 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:22:15.434 11:52:45 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:22:15.434 11:52:45 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:22:15.434 11:52:45 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:22:15.434 11:52:45 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@15 -- # shopt -s extglob 00:22:15.434 11:52:45 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:22:15.434 11:52:45 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:22:15.434 11:52:45 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:22:15.434 11:52:45 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:15.434 11:52:45 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- 
paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:15.434 11:52:45 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:15.434 11:52:45 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- paths/export.sh@5 -- # export PATH 00:22:15.434 11:52:45 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:15.434 11:52:45 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@51 -- # : 0 00:22:15.434 11:52:45 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:22:15.434 11:52:45 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:22:15.434 11:52:45 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:22:15.434 11:52:45 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:22:15.434 11:52:45 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:22:15.434 11:52:45 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:22:15.434 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:22:15.434 11:52:45 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:22:15.434 11:52:45 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:22:15.434 11:52:45 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@55 -- # have_pci_nics=0 00:22:15.434 11:52:45 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@11 -- # nvmftestinit 00:22:15.434 11:52:45 
nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:22:15.434 11:52:45 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:22:15.434 11:52:45 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@476 -- # prepare_net_devs 00:22:15.434 11:52:45 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@438 -- # local -g is_hw=no 00:22:15.434 11:52:45 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@440 -- # remove_spdk_ns 00:22:15.434 11:52:45 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:22:15.434 11:52:45 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:22:15.434 11:52:45 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:22:15.434 11:52:45 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@442 -- # [[ virt != virt ]] 00:22:15.434 11:52:45 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@444 -- # [[ no == yes ]] 00:22:15.434 11:52:45 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@451 -- # [[ virt == phy ]] 00:22:15.434 11:52:45 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@454 -- # [[ virt == phy-fallback ]] 00:22:15.434 11:52:45 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@459 -- # [[ tcp == tcp ]] 00:22:15.434 11:52:45 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@460 -- # nvmf_veth_init 00:22:15.434 11:52:45 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@145 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:22:15.434 11:52:45 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@146 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.2 00:22:15.434 11:52:45 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@147 -- # NVMF_FIRST_TARGET_IP=10.0.0.3 00:22:15.434 11:52:45 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@148 -- # NVMF_SECOND_TARGET_IP=10.0.0.4 00:22:15.434 11:52:45 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@149 -- # NVMF_INITIATOR_IP=10.0.0.1 00:22:15.434 11:52:45 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@150 -- # NVMF_BRIDGE=nvmf_br 00:22:15.434 11:52:45 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@151 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:22:15.434 11:52:45 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@152 -- # NVMF_INITIATOR_INTERFACE2=nvmf_init_if2 00:22:15.434 11:52:45 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@153 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:22:15.434 11:52:45 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@154 -- # NVMF_INITIATOR_BRIDGE2=nvmf_init_br2 00:22:15.434 11:52:45 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@155 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:22:15.434 11:52:45 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@156 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:22:15.434 11:52:45 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@157 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:22:15.434 11:52:45 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@158 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:22:15.434 11:52:45 
nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@159 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:22:15.434 11:52:45 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@160 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:22:15.434 11:52:45 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@162 -- # ip link set nvmf_init_br nomaster 00:22:15.434 Cannot find device "nvmf_init_br" 00:22:15.434 11:52:45 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@162 -- # true 00:22:15.434 11:52:45 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@163 -- # ip link set nvmf_init_br2 nomaster 00:22:15.435 Cannot find device "nvmf_init_br2" 00:22:15.435 11:52:45 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@163 -- # true 00:22:15.435 11:52:45 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@164 -- # ip link set nvmf_tgt_br nomaster 00:22:15.435 Cannot find device "nvmf_tgt_br" 00:22:15.435 11:52:45 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@164 -- # true 00:22:15.435 11:52:45 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@165 -- # ip link set nvmf_tgt_br2 nomaster 00:22:15.435 Cannot find device "nvmf_tgt_br2" 00:22:15.435 11:52:45 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@165 -- # true 00:22:15.435 11:52:45 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@166 -- # ip link set nvmf_init_br down 00:22:15.435 Cannot find device "nvmf_init_br" 00:22:15.435 11:52:45 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@166 -- # true 00:22:15.435 11:52:45 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@167 -- # ip link set nvmf_init_br2 down 00:22:15.435 Cannot find device "nvmf_init_br2" 00:22:15.435 11:52:45 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@167 -- # true 00:22:15.435 11:52:45 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@168 -- # ip link set nvmf_tgt_br down 00:22:15.435 Cannot find device "nvmf_tgt_br" 00:22:15.435 11:52:45 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@168 -- # true 00:22:15.435 11:52:45 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@169 -- # ip link set nvmf_tgt_br2 down 00:22:15.694 Cannot find device "nvmf_tgt_br2" 00:22:15.694 11:52:45 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@169 -- # true 00:22:15.694 11:52:45 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@170 -- # ip link delete nvmf_br type bridge 00:22:15.694 Cannot find device "nvmf_br" 00:22:15.694 11:52:45 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@170 -- # true 00:22:15.694 11:52:45 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@171 -- # ip link delete nvmf_init_if 00:22:15.694 Cannot find device "nvmf_init_if" 00:22:15.694 11:52:45 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@171 -- # true 00:22:15.694 11:52:45 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@172 -- # ip link delete nvmf_init_if2 00:22:15.694 Cannot find device "nvmf_init_if2" 00:22:15.694 11:52:45 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@172 -- # true 00:22:15.694 11:52:45 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@173 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:22:15.694 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:22:15.694 11:52:45 
nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@173 -- # true 00:22:15.694 11:52:45 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@174 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:22:15.694 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:22:15.694 11:52:45 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@174 -- # true 00:22:15.694 11:52:45 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@177 -- # ip netns add nvmf_tgt_ns_spdk 00:22:15.694 11:52:45 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@180 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:22:15.694 11:52:45 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@181 -- # ip link add nvmf_init_if2 type veth peer name nvmf_init_br2 00:22:15.694 11:52:45 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@182 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:22:15.694 11:52:45 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@183 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:22:15.694 11:52:45 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:22:15.694 11:52:45 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@187 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:22:15.694 11:52:45 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@190 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:22:15.694 11:52:45 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@191 -- # ip addr add 10.0.0.2/24 dev nvmf_init_if2 00:22:15.694 11:52:45 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@192 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if 00:22:15.694 11:52:45 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@193 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2 00:22:15.694 11:52:45 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@196 -- # ip link set nvmf_init_if up 00:22:15.694 11:52:45 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@197 -- # ip link set nvmf_init_if2 up 00:22:15.694 11:52:45 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@198 -- # ip link set nvmf_init_br up 00:22:15.694 11:52:45 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@199 -- # ip link set nvmf_init_br2 up 00:22:15.694 11:52:45 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@200 -- # ip link set nvmf_tgt_br up 00:22:15.694 11:52:45 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@201 -- # ip link set nvmf_tgt_br2 up 00:22:15.694 11:52:45 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@202 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:22:15.694 11:52:45 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@203 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:22:15.694 11:52:45 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@204 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:22:15.694 11:52:45 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@207 -- # ip link add nvmf_br type bridge 00:22:15.694 11:52:45 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@208 -- # ip link set nvmf_br up 00:22:15.694 11:52:45 
nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@211 -- # ip link set nvmf_init_br master nvmf_br 00:22:15.694 11:52:45 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@212 -- # ip link set nvmf_init_br2 master nvmf_br 00:22:15.694 11:52:45 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@213 -- # ip link set nvmf_tgt_br master nvmf_br 00:22:15.694 11:52:45 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@214 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:22:15.694 11:52:45 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@217 -- # ipts -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:22:15.694 11:52:45 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT' 00:22:15.694 11:52:45 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@218 -- # ipts -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT 00:22:15.694 11:52:45 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT' 00:22:15.694 11:52:45 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@219 -- # ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:22:15.694 11:52:45 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@790 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT' 00:22:15.694 11:52:45 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@222 -- # ping -c 1 10.0.0.3 00:22:15.694 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:22:15.694 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.060 ms 00:22:15.694 00:22:15.694 --- 10.0.0.3 ping statistics --- 00:22:15.694 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:22:15.694 rtt min/avg/max/mdev = 0.060/0.060/0.060/0.000 ms 00:22:15.694 11:52:45 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@223 -- # ping -c 1 10.0.0.4 00:22:15.694 PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data. 00:22:15.694 64 bytes from 10.0.0.4: icmp_seq=1 ttl=64 time=0.063 ms 00:22:15.694 00:22:15.694 --- 10.0.0.4 ping statistics --- 00:22:15.694 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:22:15.694 rtt min/avg/max/mdev = 0.063/0.063/0.063/0.000 ms 00:22:15.694 11:52:45 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@224 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:22:15.694 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:22:15.694 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.025 ms 00:22:15.694 00:22:15.694 --- 10.0.0.1 ping statistics --- 00:22:15.694 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:22:15.694 rtt min/avg/max/mdev = 0.025/0.025/0.025/0.000 ms 00:22:15.694 11:52:45 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@225 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.2 00:22:15.694 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:22:15.694 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.050 ms 00:22:15.694 00:22:15.694 --- 10.0.0.2 ping statistics --- 00:22:15.694 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:22:15.694 rtt min/avg/max/mdev = 0.050/0.050/0.050/0.000 ms 00:22:15.694 11:52:45 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@227 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:22:15.694 11:52:45 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@461 -- # return 0 00:22:15.695 11:52:45 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:22:15.695 11:52:45 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:22:15.695 11:52:45 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:22:15.695 11:52:45 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:22:15.695 11:52:45 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:22:15.695 11:52:45 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:22:15.695 11:52:45 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:22:15.954 11:52:45 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@13 -- # trap 'nvmftestfini || :; clean_kernel_target' EXIT 00:22:15.954 11:52:45 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@15 -- # get_main_ns_ip 00:22:15.954 11:52:45 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@769 -- # local ip 00:22:15.954 11:52:45 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@770 -- # ip_candidates=() 00:22:15.954 11:52:45 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@770 -- # local -A ip_candidates 00:22:15.954 11:52:45 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:22:15.954 11:52:45 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:22:15.954 11:52:45 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:22:15.954 11:52:45 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:22:15.954 11:52:45 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:22:15.954 11:52:45 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:22:15.954 11:52:45 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:22:15.954 11:52:45 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@15 -- # target_ip=10.0.0.1 00:22:15.954 11:52:45 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@16 -- # configure_kernel_target nqn.2016-06.io.spdk:testnqn 10.0.0.1 00:22:15.954 11:52:45 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@660 -- # local kernel_name=nqn.2016-06.io.spdk:testnqn kernel_target_ip=10.0.0.1 00:22:15.954 11:52:45 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@662 -- # nvmet=/sys/kernel/config/nvmet 00:22:15.954 11:52:45 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@663 -- # 
kernel_subsystem=/sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 00:22:15.954 11:52:45 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@664 -- # kernel_namespace=/sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 00:22:15.954 11:52:45 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@665 -- # kernel_port=/sys/kernel/config/nvmet/ports/1 00:22:15.954 11:52:45 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@667 -- # local block nvme 00:22:15.954 11:52:45 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@669 -- # [[ ! -e /sys/module/nvmet ]] 00:22:15.954 11:52:45 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@670 -- # modprobe nvmet 00:22:15.954 11:52:45 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@673 -- # [[ -e /sys/kernel/config/nvmet ]] 00:22:15.954 11:52:45 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@675 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:22:16.213 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:22:16.213 Waiting for block devices as requested 00:22:16.213 0000:00:11.0 (1b36 0010): uio_pci_generic -> nvme 00:22:16.471 0000:00:10.0 (1b36 0010): uio_pci_generic -> nvme 00:22:16.471 11:52:46 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@678 -- # for block in /sys/block/nvme* 00:22:16.471 11:52:46 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@679 -- # [[ -e /sys/block/nvme0n1 ]] 00:22:16.471 11:52:46 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@680 -- # is_block_zoned nvme0n1 00:22:16.471 11:52:46 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1650 -- # local device=nvme0n1 00:22:16.472 11:52:46 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:22:16.472 11:52:46 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:22:16.472 11:52:46 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@681 -- # block_in_use nvme0n1 00:22:16.472 11:52:46 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@381 -- # local block=nvme0n1 pt 00:22:16.472 11:52:46 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@390 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py nvme0n1 00:22:16.472 No valid GPT data, bailing 00:22:16.472 11:52:46 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme0n1 00:22:16.731 11:52:46 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@394 -- # pt= 00:22:16.731 11:52:46 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@395 -- # return 1 00:22:16.731 11:52:46 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@681 -- # nvme=/dev/nvme0n1 00:22:16.731 11:52:46 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@678 -- # for block in /sys/block/nvme* 00:22:16.731 11:52:46 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@679 -- # [[ -e /sys/block/nvme0n2 ]] 00:22:16.731 11:52:46 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@680 -- # is_block_zoned nvme0n2 00:22:16.731 11:52:46 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1650 -- # local device=nvme0n2 00:22:16.731 11:52:46 
nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme0n2/queue/zoned ]] 00:22:16.731 11:52:46 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:22:16.731 11:52:46 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@681 -- # block_in_use nvme0n2 00:22:16.731 11:52:46 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@381 -- # local block=nvme0n2 pt 00:22:16.731 11:52:46 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@390 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py nvme0n2 00:22:16.731 No valid GPT data, bailing 00:22:16.731 11:52:46 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme0n2 00:22:16.731 11:52:46 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@394 -- # pt= 00:22:16.731 11:52:46 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@395 -- # return 1 00:22:16.731 11:52:46 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@681 -- # nvme=/dev/nvme0n2 00:22:16.731 11:52:46 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@678 -- # for block in /sys/block/nvme* 00:22:16.732 11:52:46 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@679 -- # [[ -e /sys/block/nvme0n3 ]] 00:22:16.732 11:52:46 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@680 -- # is_block_zoned nvme0n3 00:22:16.732 11:52:46 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1650 -- # local device=nvme0n3 00:22:16.732 11:52:46 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme0n3/queue/zoned ]] 00:22:16.732 11:52:46 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:22:16.732 11:52:46 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@681 -- # block_in_use nvme0n3 00:22:16.732 11:52:46 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@381 -- # local block=nvme0n3 pt 00:22:16.732 11:52:46 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@390 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py nvme0n3 00:22:16.732 No valid GPT data, bailing 00:22:16.732 11:52:46 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme0n3 00:22:16.732 11:52:46 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@394 -- # pt= 00:22:16.732 11:52:46 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@395 -- # return 1 00:22:16.732 11:52:46 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@681 -- # nvme=/dev/nvme0n3 00:22:16.732 11:52:46 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@678 -- # for block in /sys/block/nvme* 00:22:16.732 11:52:46 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@679 -- # [[ -e /sys/block/nvme1n1 ]] 00:22:16.732 11:52:46 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@680 -- # is_block_zoned nvme1n1 00:22:16.732 11:52:46 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1650 -- # local device=nvme1n1 00:22:16.732 11:52:46 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme1n1/queue/zoned ]] 00:22:16.732 11:52:46 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- 
common/autotest_common.sh@1653 -- # [[ none != none ]] 00:22:16.732 11:52:46 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@681 -- # block_in_use nvme1n1 00:22:16.732 11:52:46 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@381 -- # local block=nvme1n1 pt 00:22:16.732 11:52:46 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@390 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py nvme1n1 00:22:16.732 No valid GPT data, bailing 00:22:16.732 11:52:46 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme1n1 00:22:16.732 11:52:46 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@394 -- # pt= 00:22:16.732 11:52:46 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@395 -- # return 1 00:22:16.732 11:52:46 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@681 -- # nvme=/dev/nvme1n1 00:22:16.732 11:52:46 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@684 -- # [[ -b /dev/nvme1n1 ]] 00:22:16.732 11:52:46 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@686 -- # mkdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 00:22:16.732 11:52:46 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@687 -- # mkdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 00:22:16.732 11:52:46 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@688 -- # mkdir /sys/kernel/config/nvmet/ports/1 00:22:16.732 11:52:46 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@693 -- # echo SPDK-nqn.2016-06.io.spdk:testnqn 00:22:16.732 11:52:46 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@695 -- # echo 1 00:22:16.732 11:52:46 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@696 -- # echo /dev/nvme1n1 00:22:16.732 11:52:46 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@697 -- # echo 1 00:22:16.732 11:52:46 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@699 -- # echo 10.0.0.1 00:22:16.732 11:52:46 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@700 -- # echo tcp 00:22:16.732 11:52:46 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@701 -- # echo 4420 00:22:16.732 11:52:46 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@702 -- # echo ipv4 00:22:16.732 11:52:46 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@705 -- # ln -s /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn /sys/kernel/config/nvmet/ports/1/subsystems/ 00:22:16.991 11:52:46 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@708 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:f820f793-c892-4aa4-a8a4-5ed3fda41d6c --hostid=f820f793-c892-4aa4-a8a4-5ed3fda41d6c -a 10.0.0.1 -t tcp -s 4420 00:22:16.991 00:22:16.991 Discovery Log Number of Records 2, Generation counter 2 00:22:16.991 =====Discovery Log Entry 0====== 00:22:16.991 trtype: tcp 00:22:16.991 adrfam: ipv4 00:22:16.991 subtype: current discovery subsystem 00:22:16.991 treq: not specified, sq flow control disable supported 00:22:16.991 portid: 1 00:22:16.991 trsvcid: 4420 00:22:16.991 subnqn: nqn.2014-08.org.nvmexpress.discovery 00:22:16.991 traddr: 10.0.0.1 00:22:16.991 eflags: none 00:22:16.991 sectype: none 00:22:16.991 =====Discovery Log Entry 1====== 00:22:16.991 trtype: tcp 00:22:16.991 adrfam: ipv4 00:22:16.991 subtype: nvme subsystem 00:22:16.991 treq: not 
specified, sq flow control disable supported 00:22:16.991 portid: 1 00:22:16.991 trsvcid: 4420 00:22:16.991 subnqn: nqn.2016-06.io.spdk:testnqn 00:22:16.991 traddr: 10.0.0.1 00:22:16.991 eflags: none 00:22:16.991 sectype: none 00:22:16.991 11:52:46 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@18 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.1 00:22:16.991 trsvcid:4420 subnqn:nqn.2014-08.org.nvmexpress.discovery' 00:22:16.991 ===================================================== 00:22:16.991 NVMe over Fabrics controller at 10.0.0.1:4420: nqn.2014-08.org.nvmexpress.discovery 00:22:16.991 ===================================================== 00:22:16.991 Controller Capabilities/Features 00:22:16.991 ================================ 00:22:16.991 Vendor ID: 0000 00:22:16.991 Subsystem Vendor ID: 0000 00:22:16.992 Serial Number: d99be2558a9f185be4fc 00:22:16.992 Model Number: Linux 00:22:16.992 Firmware Version: 6.8.9-20 00:22:16.992 Recommended Arb Burst: 0 00:22:16.992 IEEE OUI Identifier: 00 00 00 00:22:16.992 Multi-path I/O 00:22:16.992 May have multiple subsystem ports: No 00:22:16.992 May have multiple controllers: No 00:22:16.992 Associated with SR-IOV VF: No 00:22:16.992 Max Data Transfer Size: Unlimited 00:22:16.992 Max Number of Namespaces: 0 00:22:16.992 Max Number of I/O Queues: 1024 00:22:16.992 NVMe Specification Version (VS): 1.3 00:22:16.992 NVMe Specification Version (Identify): 1.3 00:22:16.992 Maximum Queue Entries: 1024 00:22:16.992 Contiguous Queues Required: No 00:22:16.992 Arbitration Mechanisms Supported 00:22:16.992 Weighted Round Robin: Not Supported 00:22:16.992 Vendor Specific: Not Supported 00:22:16.992 Reset Timeout: 7500 ms 00:22:16.992 Doorbell Stride: 4 bytes 00:22:16.992 NVM Subsystem Reset: Not Supported 00:22:16.992 Command Sets Supported 00:22:16.992 NVM Command Set: Supported 00:22:16.992 Boot Partition: Not Supported 00:22:16.992 Memory Page Size Minimum: 4096 bytes 00:22:16.992 Memory Page Size Maximum: 4096 bytes 00:22:16.992 Persistent Memory Region: Not Supported 00:22:16.992 Optional Asynchronous Events Supported 00:22:16.992 Namespace Attribute Notices: Not Supported 00:22:16.992 Firmware Activation Notices: Not Supported 00:22:16.992 ANA Change Notices: Not Supported 00:22:16.992 PLE Aggregate Log Change Notices: Not Supported 00:22:16.992 LBA Status Info Alert Notices: Not Supported 00:22:16.992 EGE Aggregate Log Change Notices: Not Supported 00:22:16.992 Normal NVM Subsystem Shutdown event: Not Supported 00:22:16.992 Zone Descriptor Change Notices: Not Supported 00:22:16.992 Discovery Log Change Notices: Supported 00:22:16.992 Controller Attributes 00:22:16.992 128-bit Host Identifier: Not Supported 00:22:16.992 Non-Operational Permissive Mode: Not Supported 00:22:16.992 NVM Sets: Not Supported 00:22:16.992 Read Recovery Levels: Not Supported 00:22:16.992 Endurance Groups: Not Supported 00:22:16.992 Predictable Latency Mode: Not Supported 00:22:16.992 Traffic Based Keep ALive: Not Supported 00:22:16.992 Namespace Granularity: Not Supported 00:22:16.992 SQ Associations: Not Supported 00:22:16.992 UUID List: Not Supported 00:22:16.992 Multi-Domain Subsystem: Not Supported 00:22:16.992 Fixed Capacity Management: Not Supported 00:22:16.992 Variable Capacity Management: Not Supported 00:22:16.992 Delete Endurance Group: Not Supported 00:22:16.992 Delete NVM Set: Not Supported 00:22:16.992 Extended LBA Formats Supported: Not Supported 00:22:16.992 Flexible Data 
Placement Supported: Not Supported 00:22:16.992 00:22:16.992 Controller Memory Buffer Support 00:22:16.992 ================================ 00:22:16.992 Supported: No 00:22:16.992 00:22:16.992 Persistent Memory Region Support 00:22:16.992 ================================ 00:22:16.992 Supported: No 00:22:16.992 00:22:16.992 Admin Command Set Attributes 00:22:16.992 ============================ 00:22:16.992 Security Send/Receive: Not Supported 00:22:16.992 Format NVM: Not Supported 00:22:16.992 Firmware Activate/Download: Not Supported 00:22:16.992 Namespace Management: Not Supported 00:22:16.992 Device Self-Test: Not Supported 00:22:16.992 Directives: Not Supported 00:22:16.992 NVMe-MI: Not Supported 00:22:16.992 Virtualization Management: Not Supported 00:22:16.992 Doorbell Buffer Config: Not Supported 00:22:16.992 Get LBA Status Capability: Not Supported 00:22:16.992 Command & Feature Lockdown Capability: Not Supported 00:22:16.992 Abort Command Limit: 1 00:22:16.992 Async Event Request Limit: 1 00:22:16.992 Number of Firmware Slots: N/A 00:22:16.992 Firmware Slot 1 Read-Only: N/A 00:22:16.992 Firmware Activation Without Reset: N/A 00:22:16.992 Multiple Update Detection Support: N/A 00:22:16.992 Firmware Update Granularity: No Information Provided 00:22:16.992 Per-Namespace SMART Log: No 00:22:16.992 Asymmetric Namespace Access Log Page: Not Supported 00:22:16.992 Subsystem NQN: nqn.2014-08.org.nvmexpress.discovery 00:22:16.992 Command Effects Log Page: Not Supported 00:22:16.992 Get Log Page Extended Data: Supported 00:22:16.992 Telemetry Log Pages: Not Supported 00:22:16.992 Persistent Event Log Pages: Not Supported 00:22:16.992 Supported Log Pages Log Page: May Support 00:22:16.992 Commands Supported & Effects Log Page: Not Supported 00:22:16.992 Feature Identifiers & Effects Log Page:May Support 00:22:16.992 NVMe-MI Commands & Effects Log Page: May Support 00:22:16.992 Data Area 4 for Telemetry Log: Not Supported 00:22:16.992 Error Log Page Entries Supported: 1 00:22:16.992 Keep Alive: Not Supported 00:22:16.992 00:22:16.992 NVM Command Set Attributes 00:22:16.992 ========================== 00:22:16.992 Submission Queue Entry Size 00:22:16.992 Max: 1 00:22:16.992 Min: 1 00:22:16.992 Completion Queue Entry Size 00:22:16.992 Max: 1 00:22:16.992 Min: 1 00:22:16.992 Number of Namespaces: 0 00:22:16.992 Compare Command: Not Supported 00:22:16.992 Write Uncorrectable Command: Not Supported 00:22:16.992 Dataset Management Command: Not Supported 00:22:16.992 Write Zeroes Command: Not Supported 00:22:16.992 Set Features Save Field: Not Supported 00:22:16.992 Reservations: Not Supported 00:22:16.992 Timestamp: Not Supported 00:22:16.992 Copy: Not Supported 00:22:16.992 Volatile Write Cache: Not Present 00:22:16.992 Atomic Write Unit (Normal): 1 00:22:16.992 Atomic Write Unit (PFail): 1 00:22:16.992 Atomic Compare & Write Unit: 1 00:22:16.992 Fused Compare & Write: Not Supported 00:22:16.992 Scatter-Gather List 00:22:16.992 SGL Command Set: Supported 00:22:16.992 SGL Keyed: Not Supported 00:22:16.992 SGL Bit Bucket Descriptor: Not Supported 00:22:16.992 SGL Metadata Pointer: Not Supported 00:22:16.992 Oversized SGL: Not Supported 00:22:16.992 SGL Metadata Address: Not Supported 00:22:16.992 SGL Offset: Supported 00:22:16.992 Transport SGL Data Block: Not Supported 00:22:16.992 Replay Protected Memory Block: Not Supported 00:22:16.992 00:22:16.992 Firmware Slot Information 00:22:16.992 ========================= 00:22:16.992 Active slot: 0 00:22:16.992 00:22:16.992 00:22:16.992 Error Log 
00:22:16.992 ========= 00:22:16.992 00:22:16.992 Active Namespaces 00:22:16.992 ================= 00:22:16.992 Discovery Log Page 00:22:16.992 ================== 00:22:16.992 Generation Counter: 2 00:22:16.992 Number of Records: 2 00:22:16.992 Record Format: 0 00:22:16.992 00:22:16.992 Discovery Log Entry 0 00:22:16.992 ---------------------- 00:22:16.992 Transport Type: 3 (TCP) 00:22:16.992 Address Family: 1 (IPv4) 00:22:16.992 Subsystem Type: 3 (Current Discovery Subsystem) 00:22:16.992 Entry Flags: 00:22:16.992 Duplicate Returned Information: 0 00:22:16.992 Explicit Persistent Connection Support for Discovery: 0 00:22:16.992 Transport Requirements: 00:22:16.992 Secure Channel: Not Specified 00:22:16.992 Port ID: 1 (0x0001) 00:22:16.992 Controller ID: 65535 (0xffff) 00:22:16.992 Admin Max SQ Size: 32 00:22:16.992 Transport Service Identifier: 4420 00:22:16.992 NVM Subsystem Qualified Name: nqn.2014-08.org.nvmexpress.discovery 00:22:16.992 Transport Address: 10.0.0.1 00:22:16.992 Discovery Log Entry 1 00:22:16.992 ---------------------- 00:22:16.992 Transport Type: 3 (TCP) 00:22:16.992 Address Family: 1 (IPv4) 00:22:16.992 Subsystem Type: 2 (NVM Subsystem) 00:22:16.992 Entry Flags: 00:22:16.992 Duplicate Returned Information: 0 00:22:16.992 Explicit Persistent Connection Support for Discovery: 0 00:22:16.992 Transport Requirements: 00:22:16.992 Secure Channel: Not Specified 00:22:16.992 Port ID: 1 (0x0001) 00:22:16.992 Controller ID: 65535 (0xffff) 00:22:16.992 Admin Max SQ Size: 32 00:22:16.992 Transport Service Identifier: 4420 00:22:16.992 NVM Subsystem Qualified Name: nqn.2016-06.io.spdk:testnqn 00:22:16.992 Transport Address: 10.0.0.1 00:22:16.992 11:52:47 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@24 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:22:17.254 get_feature(0x01) failed 00:22:17.254 get_feature(0x02) failed 00:22:17.254 get_feature(0x04) failed 00:22:17.254 ===================================================== 00:22:17.254 NVMe over Fabrics controller at 10.0.0.1:4420: nqn.2016-06.io.spdk:testnqn 00:22:17.254 ===================================================== 00:22:17.254 Controller Capabilities/Features 00:22:17.254 ================================ 00:22:17.254 Vendor ID: 0000 00:22:17.254 Subsystem Vendor ID: 0000 00:22:17.254 Serial Number: 3bebb8076457af6ba6ee 00:22:17.254 Model Number: SPDK-nqn.2016-06.io.spdk:testnqn 00:22:17.254 Firmware Version: 6.8.9-20 00:22:17.254 Recommended Arb Burst: 6 00:22:17.254 IEEE OUI Identifier: 00 00 00 00:22:17.254 Multi-path I/O 00:22:17.254 May have multiple subsystem ports: Yes 00:22:17.254 May have multiple controllers: Yes 00:22:17.254 Associated with SR-IOV VF: No 00:22:17.254 Max Data Transfer Size: Unlimited 00:22:17.254 Max Number of Namespaces: 1024 00:22:17.254 Max Number of I/O Queues: 128 00:22:17.254 NVMe Specification Version (VS): 1.3 00:22:17.254 NVMe Specification Version (Identify): 1.3 00:22:17.254 Maximum Queue Entries: 1024 00:22:17.254 Contiguous Queues Required: No 00:22:17.254 Arbitration Mechanisms Supported 00:22:17.254 Weighted Round Robin: Not Supported 00:22:17.254 Vendor Specific: Not Supported 00:22:17.254 Reset Timeout: 7500 ms 00:22:17.254 Doorbell Stride: 4 bytes 00:22:17.254 NVM Subsystem Reset: Not Supported 00:22:17.254 Command Sets Supported 00:22:17.254 NVM Command Set: Supported 00:22:17.254 Boot Partition: Not Supported 00:22:17.254 Memory 
Page Size Minimum: 4096 bytes 00:22:17.254 Memory Page Size Maximum: 4096 bytes 00:22:17.254 Persistent Memory Region: Not Supported 00:22:17.254 Optional Asynchronous Events Supported 00:22:17.254 Namespace Attribute Notices: Supported 00:22:17.254 Firmware Activation Notices: Not Supported 00:22:17.254 ANA Change Notices: Supported 00:22:17.254 PLE Aggregate Log Change Notices: Not Supported 00:22:17.254 LBA Status Info Alert Notices: Not Supported 00:22:17.254 EGE Aggregate Log Change Notices: Not Supported 00:22:17.254 Normal NVM Subsystem Shutdown event: Not Supported 00:22:17.254 Zone Descriptor Change Notices: Not Supported 00:22:17.254 Discovery Log Change Notices: Not Supported 00:22:17.254 Controller Attributes 00:22:17.254 128-bit Host Identifier: Supported 00:22:17.254 Non-Operational Permissive Mode: Not Supported 00:22:17.254 NVM Sets: Not Supported 00:22:17.254 Read Recovery Levels: Not Supported 00:22:17.254 Endurance Groups: Not Supported 00:22:17.254 Predictable Latency Mode: Not Supported 00:22:17.254 Traffic Based Keep ALive: Supported 00:22:17.254 Namespace Granularity: Not Supported 00:22:17.254 SQ Associations: Not Supported 00:22:17.254 UUID List: Not Supported 00:22:17.254 Multi-Domain Subsystem: Not Supported 00:22:17.254 Fixed Capacity Management: Not Supported 00:22:17.254 Variable Capacity Management: Not Supported 00:22:17.254 Delete Endurance Group: Not Supported 00:22:17.254 Delete NVM Set: Not Supported 00:22:17.254 Extended LBA Formats Supported: Not Supported 00:22:17.254 Flexible Data Placement Supported: Not Supported 00:22:17.254 00:22:17.254 Controller Memory Buffer Support 00:22:17.254 ================================ 00:22:17.254 Supported: No 00:22:17.254 00:22:17.254 Persistent Memory Region Support 00:22:17.254 ================================ 00:22:17.254 Supported: No 00:22:17.254 00:22:17.254 Admin Command Set Attributes 00:22:17.254 ============================ 00:22:17.254 Security Send/Receive: Not Supported 00:22:17.254 Format NVM: Not Supported 00:22:17.254 Firmware Activate/Download: Not Supported 00:22:17.254 Namespace Management: Not Supported 00:22:17.254 Device Self-Test: Not Supported 00:22:17.254 Directives: Not Supported 00:22:17.254 NVMe-MI: Not Supported 00:22:17.254 Virtualization Management: Not Supported 00:22:17.254 Doorbell Buffer Config: Not Supported 00:22:17.254 Get LBA Status Capability: Not Supported 00:22:17.254 Command & Feature Lockdown Capability: Not Supported 00:22:17.254 Abort Command Limit: 4 00:22:17.254 Async Event Request Limit: 4 00:22:17.254 Number of Firmware Slots: N/A 00:22:17.254 Firmware Slot 1 Read-Only: N/A 00:22:17.254 Firmware Activation Without Reset: N/A 00:22:17.254 Multiple Update Detection Support: N/A 00:22:17.254 Firmware Update Granularity: No Information Provided 00:22:17.254 Per-Namespace SMART Log: Yes 00:22:17.254 Asymmetric Namespace Access Log Page: Supported 00:22:17.254 ANA Transition Time : 10 sec 00:22:17.254 00:22:17.254 Asymmetric Namespace Access Capabilities 00:22:17.254 ANA Optimized State : Supported 00:22:17.254 ANA Non-Optimized State : Supported 00:22:17.254 ANA Inaccessible State : Supported 00:22:17.254 ANA Persistent Loss State : Supported 00:22:17.254 ANA Change State : Supported 00:22:17.254 ANAGRPID is not changed : No 00:22:17.254 Non-Zero ANAGRPID for NS Mgmt Cmd : Not Supported 00:22:17.254 00:22:17.254 ANA Group Identifier Maximum : 128 00:22:17.254 Number of ANA Group Identifiers : 128 00:22:17.254 Max Number of Allowed Namespaces : 1024 00:22:17.254 Subsystem 
NQN: nqn.2016-06.io.spdk:testnqn 00:22:17.254 Command Effects Log Page: Supported 00:22:17.254 Get Log Page Extended Data: Supported 00:22:17.254 Telemetry Log Pages: Not Supported 00:22:17.254 Persistent Event Log Pages: Not Supported 00:22:17.254 Supported Log Pages Log Page: May Support 00:22:17.254 Commands Supported & Effects Log Page: Not Supported 00:22:17.254 Feature Identifiers & Effects Log Page:May Support 00:22:17.254 NVMe-MI Commands & Effects Log Page: May Support 00:22:17.254 Data Area 4 for Telemetry Log: Not Supported 00:22:17.254 Error Log Page Entries Supported: 128 00:22:17.254 Keep Alive: Supported 00:22:17.254 Keep Alive Granularity: 1000 ms 00:22:17.254 00:22:17.254 NVM Command Set Attributes 00:22:17.254 ========================== 00:22:17.254 Submission Queue Entry Size 00:22:17.254 Max: 64 00:22:17.254 Min: 64 00:22:17.254 Completion Queue Entry Size 00:22:17.254 Max: 16 00:22:17.254 Min: 16 00:22:17.254 Number of Namespaces: 1024 00:22:17.254 Compare Command: Not Supported 00:22:17.254 Write Uncorrectable Command: Not Supported 00:22:17.254 Dataset Management Command: Supported 00:22:17.254 Write Zeroes Command: Supported 00:22:17.254 Set Features Save Field: Not Supported 00:22:17.254 Reservations: Not Supported 00:22:17.254 Timestamp: Not Supported 00:22:17.254 Copy: Not Supported 00:22:17.254 Volatile Write Cache: Present 00:22:17.254 Atomic Write Unit (Normal): 1 00:22:17.254 Atomic Write Unit (PFail): 1 00:22:17.254 Atomic Compare & Write Unit: 1 00:22:17.254 Fused Compare & Write: Not Supported 00:22:17.254 Scatter-Gather List 00:22:17.254 SGL Command Set: Supported 00:22:17.254 SGL Keyed: Not Supported 00:22:17.254 SGL Bit Bucket Descriptor: Not Supported 00:22:17.254 SGL Metadata Pointer: Not Supported 00:22:17.254 Oversized SGL: Not Supported 00:22:17.254 SGL Metadata Address: Not Supported 00:22:17.254 SGL Offset: Supported 00:22:17.254 Transport SGL Data Block: Not Supported 00:22:17.254 Replay Protected Memory Block: Not Supported 00:22:17.254 00:22:17.254 Firmware Slot Information 00:22:17.255 ========================= 00:22:17.255 Active slot: 0 00:22:17.255 00:22:17.255 Asymmetric Namespace Access 00:22:17.255 =========================== 00:22:17.255 Change Count : 0 00:22:17.255 Number of ANA Group Descriptors : 1 00:22:17.255 ANA Group Descriptor : 0 00:22:17.255 ANA Group ID : 1 00:22:17.255 Number of NSID Values : 1 00:22:17.255 Change Count : 0 00:22:17.255 ANA State : 1 00:22:17.255 Namespace Identifier : 1 00:22:17.255 00:22:17.255 Commands Supported and Effects 00:22:17.255 ============================== 00:22:17.255 Admin Commands 00:22:17.255 -------------- 00:22:17.255 Get Log Page (02h): Supported 00:22:17.255 Identify (06h): Supported 00:22:17.255 Abort (08h): Supported 00:22:17.255 Set Features (09h): Supported 00:22:17.255 Get Features (0Ah): Supported 00:22:17.255 Asynchronous Event Request (0Ch): Supported 00:22:17.255 Keep Alive (18h): Supported 00:22:17.255 I/O Commands 00:22:17.255 ------------ 00:22:17.255 Flush (00h): Supported 00:22:17.255 Write (01h): Supported LBA-Change 00:22:17.255 Read (02h): Supported 00:22:17.255 Write Zeroes (08h): Supported LBA-Change 00:22:17.255 Dataset Management (09h): Supported 00:22:17.255 00:22:17.255 Error Log 00:22:17.255 ========= 00:22:17.255 Entry: 0 00:22:17.255 Error Count: 0x3 00:22:17.255 Submission Queue Id: 0x0 00:22:17.255 Command Id: 0x5 00:22:17.255 Phase Bit: 0 00:22:17.255 Status Code: 0x2 00:22:17.255 Status Code Type: 0x0 00:22:17.255 Do Not Retry: 1 00:22:17.255 Error 
Location: 0x28 00:22:17.255 LBA: 0x0 00:22:17.255 Namespace: 0x0 00:22:17.255 Vendor Log Page: 0x0 00:22:17.255 ----------- 00:22:17.255 Entry: 1 00:22:17.255 Error Count: 0x2 00:22:17.255 Submission Queue Id: 0x0 00:22:17.255 Command Id: 0x5 00:22:17.255 Phase Bit: 0 00:22:17.255 Status Code: 0x2 00:22:17.255 Status Code Type: 0x0 00:22:17.255 Do Not Retry: 1 00:22:17.255 Error Location: 0x28 00:22:17.255 LBA: 0x0 00:22:17.255 Namespace: 0x0 00:22:17.255 Vendor Log Page: 0x0 00:22:17.255 ----------- 00:22:17.255 Entry: 2 00:22:17.255 Error Count: 0x1 00:22:17.255 Submission Queue Id: 0x0 00:22:17.255 Command Id: 0x4 00:22:17.255 Phase Bit: 0 00:22:17.255 Status Code: 0x2 00:22:17.255 Status Code Type: 0x0 00:22:17.255 Do Not Retry: 1 00:22:17.255 Error Location: 0x28 00:22:17.255 LBA: 0x0 00:22:17.255 Namespace: 0x0 00:22:17.255 Vendor Log Page: 0x0 00:22:17.255 00:22:17.255 Number of Queues 00:22:17.255 ================ 00:22:17.255 Number of I/O Submission Queues: 128 00:22:17.255 Number of I/O Completion Queues: 128 00:22:17.255 00:22:17.255 ZNS Specific Controller Data 00:22:17.255 ============================ 00:22:17.255 Zone Append Size Limit: 0 00:22:17.255 00:22:17.255 00:22:17.255 Active Namespaces 00:22:17.255 ================= 00:22:17.255 get_feature(0x05) failed 00:22:17.255 Namespace ID:1 00:22:17.255 Command Set Identifier: NVM (00h) 00:22:17.255 Deallocate: Supported 00:22:17.255 Deallocated/Unwritten Error: Not Supported 00:22:17.255 Deallocated Read Value: Unknown 00:22:17.255 Deallocate in Write Zeroes: Not Supported 00:22:17.255 Deallocated Guard Field: 0xFFFF 00:22:17.255 Flush: Supported 00:22:17.255 Reservation: Not Supported 00:22:17.255 Namespace Sharing Capabilities: Multiple Controllers 00:22:17.255 Size (in LBAs): 1310720 (5GiB) 00:22:17.255 Capacity (in LBAs): 1310720 (5GiB) 00:22:17.255 Utilization (in LBAs): 1310720 (5GiB) 00:22:17.255 UUID: 7472e4bf-eef1-47f0-b96f-797f9fff2309 00:22:17.255 Thin Provisioning: Not Supported 00:22:17.255 Per-NS Atomic Units: Yes 00:22:17.255 Atomic Boundary Size (Normal): 0 00:22:17.255 Atomic Boundary Size (PFail): 0 00:22:17.255 Atomic Boundary Offset: 0 00:22:17.255 NGUID/EUI64 Never Reused: No 00:22:17.255 ANA group ID: 1 00:22:17.255 Namespace Write Protected: No 00:22:17.255 Number of LBA Formats: 1 00:22:17.255 Current LBA Format: LBA Format #00 00:22:17.255 LBA Format #00: Data Size: 4096 Metadata Size: 0 00:22:17.255 00:22:17.255 11:52:47 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@1 -- # nvmftestfini 00:22:17.255 11:52:47 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@516 -- # nvmfcleanup 00:22:17.255 11:52:47 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@121 -- # sync 00:22:17.255 11:52:47 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:22:17.255 11:52:47 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@124 -- # set +e 00:22:17.255 11:52:47 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@125 -- # for i in {1..20} 00:22:17.255 11:52:47 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:22:17.255 rmmod nvme_tcp 00:22:17.535 rmmod nvme_fabrics 00:22:17.535 11:52:47 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:22:17.535 11:52:47 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@128 -- # set -e 00:22:17.535 11:52:47 
nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@129 -- # return 0 00:22:17.535 11:52:47 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@517 -- # '[' -n '' ']' 00:22:17.535 11:52:47 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:22:17.535 11:52:47 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:22:17.535 11:52:47 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:22:17.535 11:52:47 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@297 -- # iptr 00:22:17.535 11:52:47 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@791 -- # iptables-save 00:22:17.535 11:52:47 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:22:17.535 11:52:47 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@791 -- # iptables-restore 00:22:17.535 11:52:47 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@298 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:22:17.535 11:52:47 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@299 -- # nvmf_veth_fini 00:22:17.535 11:52:47 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@233 -- # ip link set nvmf_init_br nomaster 00:22:17.535 11:52:47 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@234 -- # ip link set nvmf_init_br2 nomaster 00:22:17.535 11:52:47 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@235 -- # ip link set nvmf_tgt_br nomaster 00:22:17.535 11:52:47 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@236 -- # ip link set nvmf_tgt_br2 nomaster 00:22:17.535 11:52:47 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@237 -- # ip link set nvmf_init_br down 00:22:17.535 11:52:47 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@238 -- # ip link set nvmf_init_br2 down 00:22:17.535 11:52:47 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@239 -- # ip link set nvmf_tgt_br down 00:22:17.535 11:52:47 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@240 -- # ip link set nvmf_tgt_br2 down 00:22:17.535 11:52:47 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@241 -- # ip link delete nvmf_br type bridge 00:22:17.535 11:52:47 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@242 -- # ip link delete nvmf_init_if 00:22:17.535 11:52:47 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@243 -- # ip link delete nvmf_init_if2 00:22:17.535 11:52:47 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@244 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:22:17.535 11:52:47 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@245 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:22:17.535 11:52:47 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@246 -- # remove_spdk_ns 00:22:17.535 11:52:47 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:22:17.535 11:52:47 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:22:17.535 11:52:47 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:22:17.813 11:52:47 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@300 -- 
# return 0 00:22:17.813 11:52:47 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@1 -- # clean_kernel_target 00:22:17.813 11:52:47 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@712 -- # [[ -e /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn ]] 00:22:17.813 11:52:47 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@714 -- # echo 0 00:22:17.813 11:52:47 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@716 -- # rm -f /sys/kernel/config/nvmet/ports/1/subsystems/nqn.2016-06.io.spdk:testnqn 00:22:17.813 11:52:47 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@717 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 00:22:17.813 11:52:47 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@718 -- # rmdir /sys/kernel/config/nvmet/ports/1 00:22:17.813 11:52:47 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@719 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 00:22:17.813 11:52:47 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@721 -- # modules=(/sys/module/nvmet/holders/*) 00:22:17.813 11:52:47 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@723 -- # modprobe -r nvmet_tcp nvmet 00:22:17.813 11:52:47 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@726 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:22:18.380 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:22:18.639 0000:00:10.0 (1b36 0010): nvme -> uio_pci_generic 00:22:18.640 0000:00:11.0 (1b36 0010): nvme -> uio_pci_generic 00:22:18.640 00:22:18.640 real 0m3.399s 00:22:18.640 user 0m1.211s 00:22:18.640 sys 0m1.595s 00:22:18.640 11:52:48 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1130 -- # xtrace_disable 00:22:18.640 11:52:48 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@10 -- # set +x 00:22:18.640 ************************************ 00:22:18.640 END TEST nvmf_identify_kernel_target 00:22:18.640 ************************************ 00:22:18.640 11:52:48 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@30 -- # run_test nvmf_auth_host /home/vagrant/spdk_repo/spdk/test/nvmf/host/auth.sh --transport=tcp 00:22:18.640 11:52:48 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:22:18.640 11:52:48 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1111 -- # xtrace_disable 00:22:18.640 11:52:48 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:22:18.640 ************************************ 00:22:18.640 START TEST nvmf_auth_host 00:22:18.640 ************************************ 00:22:18.640 11:52:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/host/auth.sh --transport=tcp 00:22:18.900 * Looking for test storage... 
00:22:18.900 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/host 00:22:18.900 11:52:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:22:18.900 11:52:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1693 -- # lcov --version 00:22:18.900 11:52:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:22:18.900 11:52:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:22:18.900 11:52:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:22:18.900 11:52:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@333 -- # local ver1 ver1_l 00:22:18.900 11:52:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@334 -- # local ver2 ver2_l 00:22:18.900 11:52:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@336 -- # IFS=.-: 00:22:18.900 11:52:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@336 -- # read -ra ver1 00:22:18.900 11:52:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@337 -- # IFS=.-: 00:22:18.900 11:52:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@337 -- # read -ra ver2 00:22:18.900 11:52:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@338 -- # local 'op=<' 00:22:18.900 11:52:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@340 -- # ver1_l=2 00:22:18.900 11:52:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@341 -- # ver2_l=1 00:22:18.900 11:52:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:22:18.900 11:52:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@344 -- # case "$op" in 00:22:18.900 11:52:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@345 -- # : 1 00:22:18.900 11:52:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@364 -- # (( v = 0 )) 00:22:18.900 11:52:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:22:18.900 11:52:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@365 -- # decimal 1 00:22:18.900 11:52:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@353 -- # local d=1 00:22:18.900 11:52:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:22:18.900 11:52:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@355 -- # echo 1 00:22:18.900 11:52:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@365 -- # ver1[v]=1 00:22:18.900 11:52:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@366 -- # decimal 2 00:22:18.900 11:52:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@353 -- # local d=2 00:22:18.900 11:52:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:22:18.900 11:52:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@355 -- # echo 2 00:22:18.900 11:52:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@366 -- # ver2[v]=2 00:22:18.900 11:52:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:22:18.900 11:52:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:22:18.900 11:52:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@368 -- # return 0 00:22:18.900 11:52:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:22:18.900 11:52:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:22:18.900 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:22:18.900 --rc genhtml_branch_coverage=1 00:22:18.900 --rc genhtml_function_coverage=1 00:22:18.900 --rc genhtml_legend=1 00:22:18.900 --rc geninfo_all_blocks=1 00:22:18.900 --rc geninfo_unexecuted_blocks=1 00:22:18.900 00:22:18.900 ' 00:22:18.900 11:52:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:22:18.900 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:22:18.900 --rc genhtml_branch_coverage=1 00:22:18.900 --rc genhtml_function_coverage=1 00:22:18.900 --rc genhtml_legend=1 00:22:18.900 --rc geninfo_all_blocks=1 00:22:18.900 --rc geninfo_unexecuted_blocks=1 00:22:18.900 00:22:18.900 ' 00:22:18.900 11:52:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:22:18.900 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:22:18.900 --rc genhtml_branch_coverage=1 00:22:18.900 --rc genhtml_function_coverage=1 00:22:18.900 --rc genhtml_legend=1 00:22:18.900 --rc geninfo_all_blocks=1 00:22:18.900 --rc geninfo_unexecuted_blocks=1 00:22:18.900 00:22:18.900 ' 00:22:18.900 11:52:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:22:18.900 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:22:18.900 --rc genhtml_branch_coverage=1 00:22:18.900 --rc genhtml_function_coverage=1 00:22:18.900 --rc genhtml_legend=1 00:22:18.900 --rc geninfo_all_blocks=1 00:22:18.900 --rc geninfo_unexecuted_blocks=1 00:22:18.900 00:22:18.900 ' 00:22:18.900 11:52:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:22:18.900 11:52:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@7 -- # uname -s 00:22:18.900 11:52:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:22:18.900 11:52:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@9 -- # NVMF_PORT=4420 00:22:18.900 11:52:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:22:18.900 11:52:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:22:18.900 11:52:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:22:18.900 11:52:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:22:18.900 11:52:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:22:18.900 11:52:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:22:18.901 11:52:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:22:18.901 11:52:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:22:18.901 11:52:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:f820f793-c892-4aa4-a8a4-5ed3fda41d6c 00:22:18.901 11:52:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@18 -- # NVME_HOSTID=f820f793-c892-4aa4-a8a4-5ed3fda41d6c 00:22:18.901 11:52:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:22:18.901 11:52:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:22:18.901 11:52:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:22:18.901 11:52:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:22:18.901 11:52:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:22:18.901 11:52:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@15 -- # shopt -s extglob 00:22:18.901 11:52:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:22:18.901 11:52:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:22:18.901 11:52:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:22:18.901 11:52:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:18.901 11:52:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:18.901 11:52:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:18.901 11:52:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- paths/export.sh@5 -- # export PATH 00:22:18.901 11:52:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:18.901 11:52:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@51 -- # : 0 00:22:18.901 11:52:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:22:18.901 11:52:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:22:18.901 11:52:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:22:18.901 11:52:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:22:18.901 11:52:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:22:18.901 11:52:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:22:18.901 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:22:18.901 11:52:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:22:18.901 11:52:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:22:18.901 11:52:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@55 -- # have_pci_nics=0 00:22:18.901 11:52:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@13 -- # digests=("sha256" "sha384" "sha512") 00:22:18.901 11:52:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@16 -- # dhgroups=("ffdhe2048" "ffdhe3072" "ffdhe4096" "ffdhe6144" "ffdhe8192") 00:22:18.901 11:52:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@17 -- # 
subnqn=nqn.2024-02.io.spdk:cnode0 00:22:18.901 11:52:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@18 -- # hostnqn=nqn.2024-02.io.spdk:host0 00:22:18.901 11:52:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@19 -- # nvmet_subsys=/sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 00:22:18.901 11:52:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@20 -- # nvmet_host=/sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0 00:22:18.901 11:52:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@21 -- # keys=() 00:22:18.901 11:52:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@21 -- # ckeys=() 00:22:18.901 11:52:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@68 -- # nvmftestinit 00:22:18.901 11:52:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:22:18.901 11:52:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:22:18.901 11:52:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@476 -- # prepare_net_devs 00:22:18.901 11:52:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@438 -- # local -g is_hw=no 00:22:18.901 11:52:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@440 -- # remove_spdk_ns 00:22:18.901 11:52:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:22:18.901 11:52:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:22:18.901 11:52:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:22:18.901 11:52:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@442 -- # [[ virt != virt ]] 00:22:18.901 11:52:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@444 -- # [[ no == yes ]] 00:22:18.901 11:52:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@451 -- # [[ virt == phy ]] 00:22:18.901 11:52:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@454 -- # [[ virt == phy-fallback ]] 00:22:18.901 11:52:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@459 -- # [[ tcp == tcp ]] 00:22:18.901 11:52:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@460 -- # nvmf_veth_init 00:22:18.901 11:52:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@145 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:22:18.901 11:52:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@146 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.2 00:22:18.901 11:52:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@147 -- # NVMF_FIRST_TARGET_IP=10.0.0.3 00:22:18.901 11:52:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@148 -- # NVMF_SECOND_TARGET_IP=10.0.0.4 00:22:18.901 11:52:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@149 -- # NVMF_INITIATOR_IP=10.0.0.1 00:22:18.901 11:52:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@150 -- # NVMF_BRIDGE=nvmf_br 00:22:18.901 11:52:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@151 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:22:18.901 11:52:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@152 -- # NVMF_INITIATOR_INTERFACE2=nvmf_init_if2 00:22:18.901 11:52:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@153 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:22:18.901 11:52:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@154 -- # NVMF_INITIATOR_BRIDGE2=nvmf_init_br2 00:22:18.901 11:52:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@155 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:22:18.901 11:52:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@156 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:22:18.901 11:52:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@157 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:22:18.901 11:52:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@158 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:22:18.901 11:52:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@159 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:22:18.901 11:52:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@160 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:22:18.901 11:52:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@162 -- # ip link set nvmf_init_br nomaster 00:22:18.901 Cannot find device "nvmf_init_br" 00:22:18.901 11:52:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@162 -- # true 00:22:18.901 11:52:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@163 -- # ip link set nvmf_init_br2 nomaster 00:22:18.901 Cannot find device "nvmf_init_br2" 00:22:18.901 11:52:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@163 -- # true 00:22:18.901 11:52:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@164 -- # ip link set nvmf_tgt_br nomaster 00:22:18.901 Cannot find device "nvmf_tgt_br" 00:22:18.901 11:52:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@164 -- # true 00:22:18.901 11:52:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@165 -- # ip link set nvmf_tgt_br2 nomaster 00:22:18.901 Cannot find device "nvmf_tgt_br2" 00:22:18.901 11:52:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@165 -- # true 00:22:18.901 11:52:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@166 -- # ip link set nvmf_init_br down 00:22:18.901 Cannot find device "nvmf_init_br" 00:22:18.901 11:52:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@166 -- # true 00:22:18.901 11:52:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@167 -- # ip link set nvmf_init_br2 down 00:22:18.901 Cannot find device "nvmf_init_br2" 00:22:18.901 11:52:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@167 -- # true 00:22:18.901 11:52:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@168 -- # ip link set nvmf_tgt_br down 00:22:18.901 Cannot find device "nvmf_tgt_br" 00:22:18.901 11:52:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@168 -- # true 00:22:18.901 11:52:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@169 -- # ip link set nvmf_tgt_br2 down 00:22:19.160 Cannot find device "nvmf_tgt_br2" 00:22:19.160 11:52:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@169 -- # true 00:22:19.160 11:52:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@170 -- # ip link delete nvmf_br type bridge 00:22:19.160 Cannot find device "nvmf_br" 00:22:19.160 11:52:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@170 -- # true 00:22:19.160 11:52:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@171 -- # ip link delete nvmf_init_if 00:22:19.160 Cannot find device "nvmf_init_if" 00:22:19.160 11:52:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@171 -- # true 00:22:19.160 11:52:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@172 -- # ip link delete nvmf_init_if2 00:22:19.160 Cannot find device "nvmf_init_if2" 00:22:19.160 11:52:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@172 -- # true 00:22:19.160 11:52:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@173 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:22:19.160 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:22:19.160 11:52:49 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@173 -- # true 00:22:19.160 11:52:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@174 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:22:19.160 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:22:19.160 11:52:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@174 -- # true 00:22:19.160 11:52:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@177 -- # ip netns add nvmf_tgt_ns_spdk 00:22:19.160 11:52:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@180 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:22:19.160 11:52:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@181 -- # ip link add nvmf_init_if2 type veth peer name nvmf_init_br2 00:22:19.160 11:52:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@182 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:22:19.160 11:52:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@183 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:22:19.160 11:52:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:22:19.160 11:52:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@187 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:22:19.160 11:52:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@190 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:22:19.160 11:52:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@191 -- # ip addr add 10.0.0.2/24 dev nvmf_init_if2 00:22:19.160 11:52:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@192 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if 00:22:19.160 11:52:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@193 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2 00:22:19.160 11:52:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@196 -- # ip link set nvmf_init_if up 00:22:19.160 11:52:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@197 -- # ip link set nvmf_init_if2 up 00:22:19.160 11:52:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@198 -- # ip link set nvmf_init_br up 00:22:19.160 11:52:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@199 -- # ip link set nvmf_init_br2 up 00:22:19.160 11:52:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@200 -- # ip link set nvmf_tgt_br up 00:22:19.160 11:52:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@201 -- # ip link set nvmf_tgt_br2 up 00:22:19.160 11:52:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@202 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:22:19.160 11:52:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@203 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:22:19.160 11:52:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@204 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:22:19.160 11:52:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@207 -- # ip link add nvmf_br type bridge 00:22:19.160 11:52:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@208 -- # ip link set nvmf_br up 00:22:19.160 11:52:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@211 -- # ip link set nvmf_init_br master nvmf_br 00:22:19.160 11:52:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@212 -- # ip link set nvmf_init_br2 master nvmf_br 00:22:19.160 11:52:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@213 -- # ip link set nvmf_tgt_br master nvmf_br 
00:22:19.160 11:52:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@214 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:22:19.160 11:52:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@217 -- # ipts -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:22:19.160 11:52:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT' 00:22:19.419 11:52:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@218 -- # ipts -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT 00:22:19.419 11:52:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT' 00:22:19.419 11:52:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@219 -- # ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:22:19.419 11:52:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@790 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT' 00:22:19.419 11:52:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@222 -- # ping -c 1 10.0.0.3 00:22:19.419 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:22:19.419 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.074 ms 00:22:19.419 00:22:19.419 --- 10.0.0.3 ping statistics --- 00:22:19.419 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:22:19.419 rtt min/avg/max/mdev = 0.074/0.074/0.074/0.000 ms 00:22:19.419 11:52:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@223 -- # ping -c 1 10.0.0.4 00:22:19.419 PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data. 00:22:19.419 64 bytes from 10.0.0.4: icmp_seq=1 ttl=64 time=0.064 ms 00:22:19.419 00:22:19.419 --- 10.0.0.4 ping statistics --- 00:22:19.419 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:22:19.419 rtt min/avg/max/mdev = 0.064/0.064/0.064/0.000 ms 00:22:19.419 11:52:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@224 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:22:19.419 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:22:19.419 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.024 ms 00:22:19.419 00:22:19.419 --- 10.0.0.1 ping statistics --- 00:22:19.419 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:22:19.419 rtt min/avg/max/mdev = 0.024/0.024/0.024/0.000 ms 00:22:19.420 11:52:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@225 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.2 00:22:19.420 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:22:19.420 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.084 ms 00:22:19.420 00:22:19.420 --- 10.0.0.2 ping statistics --- 00:22:19.420 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:22:19.420 rtt min/avg/max/mdev = 0.084/0.084/0.084/0.000 ms 00:22:19.420 11:52:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@227 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:22:19.420 11:52:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@461 -- # return 0 00:22:19.420 11:52:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:22:19.420 11:52:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:22:19.420 11:52:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:22:19.420 11:52:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:22:19.420 11:52:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:22:19.420 11:52:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:22:19.420 11:52:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:22:19.420 11:52:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@69 -- # nvmfappstart -L nvme_auth 00:22:19.420 11:52:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:22:19.420 11:52:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@726 -- # xtrace_disable 00:22:19.420 11:52:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:19.420 11:52:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@509 -- # nvmfpid=95360 00:22:19.420 11:52:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@508 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -L nvme_auth 00:22:19.420 11:52:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@510 -- # waitforlisten 95360 00:22:19.420 11:52:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@835 -- # '[' -z 95360 ']' 00:22:19.420 11:52:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:22:19.420 11:52:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@840 -- # local max_retries=100 00:22:19.420 11:52:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
00:22:19.420 11:52:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@844 -- # xtrace_disable 00:22:19.420 11:52:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:19.686 11:52:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:22:19.686 11:52:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@868 -- # return 0 00:22:19.686 11:52:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:22:19.686 11:52:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@732 -- # xtrace_disable 00:22:19.686 11:52:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:19.686 11:52:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:22:19.686 11:52:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@70 -- # trap 'cat /home/vagrant/spdk_repo/spdk/../output/nvme-auth.log; cleanup' SIGINT SIGTERM EXIT 00:22:19.686 11:52:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@73 -- # gen_dhchap_key null 32 00:22:19.686 11:52:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@751 -- # local digest len file key 00:22:19.686 11:52:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:22:19.686 11:52:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # local -A digests 00:22:19.686 11:52:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # digest=null 00:22:19.686 11:52:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # len=32 00:22:19.686 11:52:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # xxd -p -c0 -l 16 /dev/urandom 00:22:19.686 11:52:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # key=755bd059fda412332d693f377d5a5da9 00:22:19.686 11:52:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # mktemp -t spdk.key-null.XXX 00:22:19.686 11:52:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-null.KBM 00:22:19.686 11:52:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@757 -- # format_dhchap_key 755bd059fda412332d693f377d5a5da9 0 00:22:19.686 11:52:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # format_key DHHC-1 755bd059fda412332d693f377d5a5da9 0 00:22:19.686 11:52:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # local prefix key digest 00:22:19.686 11:52:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:22:19.686 11:52:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # key=755bd059fda412332d693f377d5a5da9 00:22:19.686 11:52:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # digest=0 00:22:19.686 11:52:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@733 -- # python - 00:22:19.945 11:52:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-null.KBM 00:22:19.945 11:52:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-null.KBM 00:22:19.945 11:52:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@73 -- # keys[0]=/tmp/spdk.key-null.KBM 00:22:19.945 11:52:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@73 -- # gen_dhchap_key sha512 64 00:22:19.945 11:52:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@751 -- # local digest len file key 00:22:19.945 11:52:49 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:22:19.945 11:52:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # local -A digests 00:22:19.945 11:52:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # digest=sha512 00:22:19.945 11:52:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # len=64 00:22:19.945 11:52:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # xxd -p -c0 -l 32 /dev/urandom 00:22:19.945 11:52:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # key=04284bd20697c590eb487491b13c53ecb9dd18940f0e55469073e5fa41175180 00:22:19.945 11:52:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # mktemp -t spdk.key-sha512.XXX 00:22:19.945 11:52:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-sha512.07W 00:22:19.945 11:52:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@757 -- # format_dhchap_key 04284bd20697c590eb487491b13c53ecb9dd18940f0e55469073e5fa41175180 3 00:22:19.945 11:52:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # format_key DHHC-1 04284bd20697c590eb487491b13c53ecb9dd18940f0e55469073e5fa41175180 3 00:22:19.945 11:52:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # local prefix key digest 00:22:19.945 11:52:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:22:19.945 11:52:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # key=04284bd20697c590eb487491b13c53ecb9dd18940f0e55469073e5fa41175180 00:22:19.945 11:52:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # digest=3 00:22:19.945 11:52:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@733 -- # python - 00:22:19.945 11:52:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-sha512.07W 00:22:19.945 11:52:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-sha512.07W 00:22:19.945 11:52:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@73 -- # ckeys[0]=/tmp/spdk.key-sha512.07W 00:22:19.945 11:52:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@74 -- # gen_dhchap_key null 48 00:22:19.945 11:52:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@751 -- # local digest len file key 00:22:19.945 11:52:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:22:19.945 11:52:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # local -A digests 00:22:19.945 11:52:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # digest=null 00:22:19.945 11:52:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # len=48 00:22:19.945 11:52:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # xxd -p -c0 -l 24 /dev/urandom 00:22:19.945 11:52:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # key=5a82f492104db098aef442409f8a7025450b5b2b66218aa6 00:22:19.945 11:52:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # mktemp -t spdk.key-null.XXX 00:22:19.945 11:52:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-null.tnJ 00:22:19.945 11:52:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@757 -- # format_dhchap_key 5a82f492104db098aef442409f8a7025450b5b2b66218aa6 0 00:22:19.945 11:52:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # format_key DHHC-1 5a82f492104db098aef442409f8a7025450b5b2b66218aa6 0 
00:22:19.945 11:52:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # local prefix key digest 00:22:19.945 11:52:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:22:19.945 11:52:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # key=5a82f492104db098aef442409f8a7025450b5b2b66218aa6 00:22:19.945 11:52:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # digest=0 00:22:19.945 11:52:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@733 -- # python - 00:22:19.945 11:52:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-null.tnJ 00:22:19.945 11:52:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-null.tnJ 00:22:19.945 11:52:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@74 -- # keys[1]=/tmp/spdk.key-null.tnJ 00:22:19.945 11:52:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@74 -- # gen_dhchap_key sha384 48 00:22:19.945 11:52:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@751 -- # local digest len file key 00:22:19.945 11:52:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:22:19.945 11:52:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # local -A digests 00:22:19.945 11:52:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # digest=sha384 00:22:19.945 11:52:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # len=48 00:22:19.945 11:52:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # xxd -p -c0 -l 24 /dev/urandom 00:22:19.945 11:52:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # key=bb303bf1641b32cdef547954e12d2c20bc12fdc0e8c84ea1 00:22:19.945 11:52:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # mktemp -t spdk.key-sha384.XXX 00:22:19.945 11:52:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-sha384.DCr 00:22:19.945 11:52:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@757 -- # format_dhchap_key bb303bf1641b32cdef547954e12d2c20bc12fdc0e8c84ea1 2 00:22:19.945 11:52:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # format_key DHHC-1 bb303bf1641b32cdef547954e12d2c20bc12fdc0e8c84ea1 2 00:22:19.945 11:52:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # local prefix key digest 00:22:19.945 11:52:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:22:19.945 11:52:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # key=bb303bf1641b32cdef547954e12d2c20bc12fdc0e8c84ea1 00:22:19.945 11:52:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # digest=2 00:22:19.945 11:52:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@733 -- # python - 00:22:19.945 11:52:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-sha384.DCr 00:22:19.945 11:52:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-sha384.DCr 00:22:19.945 11:52:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@74 -- # ckeys[1]=/tmp/spdk.key-sha384.DCr 00:22:19.945 11:52:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@75 -- # gen_dhchap_key sha256 32 00:22:19.945 11:52:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@751 -- # local digest len file key 00:22:19.945 11:52:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:22:19.945 11:52:50 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # local -A digests 00:22:19.945 11:52:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # digest=sha256 00:22:19.945 11:52:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # len=32 00:22:19.945 11:52:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # xxd -p -c0 -l 16 /dev/urandom 00:22:19.945 11:52:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # key=b4c8cfb3e9a9290c324ca24b3fe298f9 00:22:19.945 11:52:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # mktemp -t spdk.key-sha256.XXX 00:22:19.945 11:52:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-sha256.glx 00:22:19.945 11:52:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@757 -- # format_dhchap_key b4c8cfb3e9a9290c324ca24b3fe298f9 1 00:22:19.945 11:52:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # format_key DHHC-1 b4c8cfb3e9a9290c324ca24b3fe298f9 1 00:22:19.945 11:52:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # local prefix key digest 00:22:19.945 11:52:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:22:19.945 11:52:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # key=b4c8cfb3e9a9290c324ca24b3fe298f9 00:22:19.945 11:52:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # digest=1 00:22:19.945 11:52:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@733 -- # python - 00:22:20.204 11:52:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-sha256.glx 00:22:20.204 11:52:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-sha256.glx 00:22:20.204 11:52:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@75 -- # keys[2]=/tmp/spdk.key-sha256.glx 00:22:20.204 11:52:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@75 -- # gen_dhchap_key sha256 32 00:22:20.204 11:52:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@751 -- # local digest len file key 00:22:20.204 11:52:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:22:20.204 11:52:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # local -A digests 00:22:20.204 11:52:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # digest=sha256 00:22:20.204 11:52:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # len=32 00:22:20.204 11:52:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # xxd -p -c0 -l 16 /dev/urandom 00:22:20.204 11:52:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # key=1556a5c0794a5feda26186871c7a4358 00:22:20.204 11:52:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # mktemp -t spdk.key-sha256.XXX 00:22:20.204 11:52:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-sha256.8Xm 00:22:20.204 11:52:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@757 -- # format_dhchap_key 1556a5c0794a5feda26186871c7a4358 1 00:22:20.204 11:52:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # format_key DHHC-1 1556a5c0794a5feda26186871c7a4358 1 00:22:20.204 11:52:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # local prefix key digest 00:22:20.204 11:52:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:22:20.204 11:52:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # 
key=1556a5c0794a5feda26186871c7a4358 00:22:20.204 11:52:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # digest=1 00:22:20.204 11:52:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@733 -- # python - 00:22:20.204 11:52:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-sha256.8Xm 00:22:20.204 11:52:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-sha256.8Xm 00:22:20.204 11:52:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@75 -- # ckeys[2]=/tmp/spdk.key-sha256.8Xm 00:22:20.204 11:52:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@76 -- # gen_dhchap_key sha384 48 00:22:20.204 11:52:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@751 -- # local digest len file key 00:22:20.204 11:52:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:22:20.204 11:52:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # local -A digests 00:22:20.204 11:52:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # digest=sha384 00:22:20.204 11:52:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # len=48 00:22:20.204 11:52:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # xxd -p -c0 -l 24 /dev/urandom 00:22:20.204 11:52:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # key=125e4f10b3c75b8df6c2975f16748a3c1182ffa5c73135ff 00:22:20.204 11:52:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # mktemp -t spdk.key-sha384.XXX 00:22:20.204 11:52:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-sha384.uYf 00:22:20.204 11:52:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@757 -- # format_dhchap_key 125e4f10b3c75b8df6c2975f16748a3c1182ffa5c73135ff 2 00:22:20.204 11:52:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # format_key DHHC-1 125e4f10b3c75b8df6c2975f16748a3c1182ffa5c73135ff 2 00:22:20.205 11:52:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # local prefix key digest 00:22:20.205 11:52:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:22:20.205 11:52:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # key=125e4f10b3c75b8df6c2975f16748a3c1182ffa5c73135ff 00:22:20.205 11:52:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # digest=2 00:22:20.205 11:52:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@733 -- # python - 00:22:20.205 11:52:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-sha384.uYf 00:22:20.205 11:52:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-sha384.uYf 00:22:20.205 11:52:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@76 -- # keys[3]=/tmp/spdk.key-sha384.uYf 00:22:20.205 11:52:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@76 -- # gen_dhchap_key null 32 00:22:20.205 11:52:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@751 -- # local digest len file key 00:22:20.205 11:52:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:22:20.205 11:52:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # local -A digests 00:22:20.205 11:52:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # digest=null 00:22:20.205 11:52:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # len=32 00:22:20.205 11:52:50 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # xxd -p -c0 -l 16 /dev/urandom 00:22:20.205 11:52:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # key=730caaf6fe9fc57f8b0e840ecc664ee4 00:22:20.205 11:52:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # mktemp -t spdk.key-null.XXX 00:22:20.205 11:52:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-null.B9j 00:22:20.205 11:52:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@757 -- # format_dhchap_key 730caaf6fe9fc57f8b0e840ecc664ee4 0 00:22:20.205 11:52:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # format_key DHHC-1 730caaf6fe9fc57f8b0e840ecc664ee4 0 00:22:20.205 11:52:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # local prefix key digest 00:22:20.205 11:52:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:22:20.205 11:52:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # key=730caaf6fe9fc57f8b0e840ecc664ee4 00:22:20.205 11:52:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # digest=0 00:22:20.205 11:52:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@733 -- # python - 00:22:20.205 11:52:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-null.B9j 00:22:20.205 11:52:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-null.B9j 00:22:20.205 11:52:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@76 -- # ckeys[3]=/tmp/spdk.key-null.B9j 00:22:20.205 11:52:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@77 -- # gen_dhchap_key sha512 64 00:22:20.205 11:52:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@751 -- # local digest len file key 00:22:20.205 11:52:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:22:20.205 11:52:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # local -A digests 00:22:20.205 11:52:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # digest=sha512 00:22:20.205 11:52:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # len=64 00:22:20.205 11:52:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # xxd -p -c0 -l 32 /dev/urandom 00:22:20.205 11:52:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # key=dce2bc6b1300859448fe4038d7649ae1f691883cc6ad641d113dcf1951de0278 00:22:20.205 11:52:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # mktemp -t spdk.key-sha512.XXX 00:22:20.205 11:52:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-sha512.hRE 00:22:20.205 11:52:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@757 -- # format_dhchap_key dce2bc6b1300859448fe4038d7649ae1f691883cc6ad641d113dcf1951de0278 3 00:22:20.205 11:52:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # format_key DHHC-1 dce2bc6b1300859448fe4038d7649ae1f691883cc6ad641d113dcf1951de0278 3 00:22:20.205 11:52:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # local prefix key digest 00:22:20.205 11:52:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:22:20.205 11:52:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # key=dce2bc6b1300859448fe4038d7649ae1f691883cc6ad641d113dcf1951de0278 00:22:20.205 11:52:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # digest=3 00:22:20.205 11:52:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@733 -- # python - 00:22:20.463 11:52:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-sha512.hRE 00:22:20.464 11:52:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-sha512.hRE 00:22:20.464 11:52:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@77 -- # keys[4]=/tmp/spdk.key-sha512.hRE 00:22:20.464 11:52:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@77 -- # ckeys[4]= 00:22:20.464 11:52:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@79 -- # waitforlisten 95360 00:22:20.464 11:52:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@835 -- # '[' -z 95360 ']' 00:22:20.464 11:52:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:22:20.464 11:52:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@840 -- # local max_retries=100 00:22:20.464 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:22:20.464 11:52:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:22:20.464 11:52:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@844 -- # xtrace_disable 00:22:20.464 11:52:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:20.722 11:52:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:22:20.722 11:52:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@868 -- # return 0 00:22:20.722 11:52:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@80 -- # for i in "${!keys[@]}" 00:22:20.722 11:52:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@81 -- # rpc_cmd keyring_file_add_key key0 /tmp/spdk.key-null.KBM 00:22:20.722 11:52:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:20.722 11:52:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:20.722 11:52:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:20.722 11:52:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@82 -- # [[ -n /tmp/spdk.key-sha512.07W ]] 00:22:20.722 11:52:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@82 -- # rpc_cmd keyring_file_add_key ckey0 /tmp/spdk.key-sha512.07W 00:22:20.722 11:52:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:20.722 11:52:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:20.722 11:52:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:20.722 11:52:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@80 -- # for i in "${!keys[@]}" 00:22:20.722 11:52:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@81 -- # rpc_cmd keyring_file_add_key key1 /tmp/spdk.key-null.tnJ 00:22:20.722 11:52:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:20.722 11:52:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:20.722 11:52:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:20.722 11:52:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@82 -- # [[ -n /tmp/spdk.key-sha384.DCr ]] 00:22:20.723 11:52:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@82 -- # rpc_cmd keyring_file_add_key ckey1 
/tmp/spdk.key-sha384.DCr 00:22:20.723 11:52:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:20.723 11:52:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:20.723 11:52:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:20.723 11:52:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@80 -- # for i in "${!keys[@]}" 00:22:20.723 11:52:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@81 -- # rpc_cmd keyring_file_add_key key2 /tmp/spdk.key-sha256.glx 00:22:20.723 11:52:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:20.723 11:52:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:20.723 11:52:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:20.723 11:52:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@82 -- # [[ -n /tmp/spdk.key-sha256.8Xm ]] 00:22:20.723 11:52:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@82 -- # rpc_cmd keyring_file_add_key ckey2 /tmp/spdk.key-sha256.8Xm 00:22:20.723 11:52:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:20.723 11:52:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:20.723 11:52:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:20.723 11:52:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@80 -- # for i in "${!keys[@]}" 00:22:20.723 11:52:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@81 -- # rpc_cmd keyring_file_add_key key3 /tmp/spdk.key-sha384.uYf 00:22:20.723 11:52:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:20.723 11:52:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:20.723 11:52:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:20.723 11:52:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@82 -- # [[ -n /tmp/spdk.key-null.B9j ]] 00:22:20.723 11:52:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@82 -- # rpc_cmd keyring_file_add_key ckey3 /tmp/spdk.key-null.B9j 00:22:20.723 11:52:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:20.723 11:52:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:20.723 11:52:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:20.723 11:52:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@80 -- # for i in "${!keys[@]}" 00:22:20.723 11:52:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@81 -- # rpc_cmd keyring_file_add_key key4 /tmp/spdk.key-sha512.hRE 00:22:20.723 11:52:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:20.723 11:52:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:20.723 11:52:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:20.723 11:52:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@82 -- # [[ -n '' ]] 00:22:20.723 11:52:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@85 -- # nvmet_auth_init 00:22:20.723 11:52:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@35 -- # get_main_ns_ip 00:22:20.723 11:52:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:22:20.723 11:52:50 
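[editor's note] Once generated, each secret is handed to the SPDK application by name through the keyring_file_add_key RPC (rpc_cmd in the trace is the autotest wrapper around scripts/rpc.py), so the host-side bdev_nvme code can later refer to key0..key4 and the controller keys ckey0..ckey3 instead of raw file paths. For example, using the sha256 pair generated above:

  # Register a generated secret and its controller counterpart with the keyring.
  scripts/rpc.py keyring_file_add_key key2  /tmp/spdk.key-sha256.glx
  scripts/rpc.py keyring_file_add_key ckey2 /tmp/spdk.key-sha256.8Xm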
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:22:20.723 11:52:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:22:20.723 11:52:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:22:20.723 11:52:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:22:20.723 11:52:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:22:20.723 11:52:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:22:20.723 11:52:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:22:20.723 11:52:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:22:20.723 11:52:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:22:20.723 11:52:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@35 -- # configure_kernel_target nqn.2024-02.io.spdk:cnode0 10.0.0.1 00:22:20.723 11:52:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@660 -- # local kernel_name=nqn.2024-02.io.spdk:cnode0 kernel_target_ip=10.0.0.1 00:22:20.723 11:52:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@662 -- # nvmet=/sys/kernel/config/nvmet 00:22:20.723 11:52:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@663 -- # kernel_subsystem=/sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 00:22:20.723 11:52:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@664 -- # kernel_namespace=/sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0/namespaces/1 00:22:20.723 11:52:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@665 -- # kernel_port=/sys/kernel/config/nvmet/ports/1 00:22:20.723 11:52:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@667 -- # local block nvme 00:22:20.723 11:52:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@669 -- # [[ ! 
-e /sys/module/nvmet ]] 00:22:20.723 11:52:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@670 -- # modprobe nvmet 00:22:20.723 11:52:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@673 -- # [[ -e /sys/kernel/config/nvmet ]] 00:22:20.723 11:52:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@675 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:22:21.289 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:22:21.289 Waiting for block devices as requested 00:22:21.289 0000:00:11.0 (1b36 0010): uio_pci_generic -> nvme 00:22:21.289 0000:00:10.0 (1b36 0010): uio_pci_generic -> nvme 00:22:21.856 11:52:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@678 -- # for block in /sys/block/nvme* 00:22:21.856 11:52:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@679 -- # [[ -e /sys/block/nvme0n1 ]] 00:22:21.856 11:52:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@680 -- # is_block_zoned nvme0n1 00:22:21.856 11:52:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1650 -- # local device=nvme0n1 00:22:21.856 11:52:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:22:21.856 11:52:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:22:21.856 11:52:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@681 -- # block_in_use nvme0n1 00:22:21.856 11:52:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@381 -- # local block=nvme0n1 pt 00:22:21.856 11:52:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@390 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py nvme0n1 00:22:21.856 No valid GPT data, bailing 00:22:21.856 11:52:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme0n1 00:22:22.115 11:52:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@394 -- # pt= 00:22:22.115 11:52:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@395 -- # return 1 00:22:22.115 11:52:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@681 -- # nvme=/dev/nvme0n1 00:22:22.115 11:52:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@678 -- # for block in /sys/block/nvme* 00:22:22.115 11:52:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@679 -- # [[ -e /sys/block/nvme0n2 ]] 00:22:22.115 11:52:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@680 -- # is_block_zoned nvme0n2 00:22:22.115 11:52:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1650 -- # local device=nvme0n2 00:22:22.115 11:52:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme0n2/queue/zoned ]] 00:22:22.115 11:52:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:22:22.115 11:52:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@681 -- # block_in_use nvme0n2 00:22:22.115 11:52:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@381 -- # local block=nvme0n2 pt 00:22:22.115 11:52:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@390 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py nvme0n2 00:22:22.115 No valid GPT data, bailing 00:22:22.115 11:52:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme0n2 00:22:22.115 11:52:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@394 -- # pt= 00:22:22.115 11:52:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
scripts/common.sh@395 -- # return 1 00:22:22.115 11:52:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@681 -- # nvme=/dev/nvme0n2 00:22:22.115 11:52:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@678 -- # for block in /sys/block/nvme* 00:22:22.115 11:52:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@679 -- # [[ -e /sys/block/nvme0n3 ]] 00:22:22.116 11:52:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@680 -- # is_block_zoned nvme0n3 00:22:22.116 11:52:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1650 -- # local device=nvme0n3 00:22:22.116 11:52:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme0n3/queue/zoned ]] 00:22:22.116 11:52:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:22:22.116 11:52:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@681 -- # block_in_use nvme0n3 00:22:22.116 11:52:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@381 -- # local block=nvme0n3 pt 00:22:22.116 11:52:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@390 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py nvme0n3 00:22:22.116 No valid GPT data, bailing 00:22:22.116 11:52:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme0n3 00:22:22.116 11:52:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@394 -- # pt= 00:22:22.116 11:52:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@395 -- # return 1 00:22:22.116 11:52:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@681 -- # nvme=/dev/nvme0n3 00:22:22.116 11:52:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@678 -- # for block in /sys/block/nvme* 00:22:22.116 11:52:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@679 -- # [[ -e /sys/block/nvme1n1 ]] 00:22:22.116 11:52:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@680 -- # is_block_zoned nvme1n1 00:22:22.116 11:52:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1650 -- # local device=nvme1n1 00:22:22.116 11:52:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme1n1/queue/zoned ]] 00:22:22.116 11:52:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:22:22.116 11:52:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@681 -- # block_in_use nvme1n1 00:22:22.116 11:52:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@381 -- # local block=nvme1n1 pt 00:22:22.116 11:52:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@390 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py nvme1n1 00:22:22.116 No valid GPT data, bailing 00:22:22.116 11:52:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme1n1 00:22:22.116 11:52:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@394 -- # pt= 00:22:22.116 11:52:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@395 -- # return 1 00:22:22.116 11:52:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@681 -- # nvme=/dev/nvme1n1 00:22:22.116 11:52:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@684 -- # [[ -b /dev/nvme1n1 ]] 00:22:22.116 11:52:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@686 -- # mkdir /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 00:22:22.116 11:52:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@687 -- # mkdir 
/sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0/namespaces/1 00:22:22.116 11:52:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@688 -- # mkdir /sys/kernel/config/nvmet/ports/1 00:22:22.116 11:52:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@693 -- # echo SPDK-nqn.2024-02.io.spdk:cnode0 00:22:22.116 11:52:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@695 -- # echo 1 00:22:22.116 11:52:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@696 -- # echo /dev/nvme1n1 00:22:22.116 11:52:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@697 -- # echo 1 00:22:22.116 11:52:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@699 -- # echo 10.0.0.1 00:22:22.116 11:52:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@700 -- # echo tcp 00:22:22.116 11:52:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@701 -- # echo 4420 00:22:22.116 11:52:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@702 -- # echo ipv4 00:22:22.116 11:52:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@705 -- # ln -s /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 /sys/kernel/config/nvmet/ports/1/subsystems/ 00:22:22.375 11:52:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@708 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:f820f793-c892-4aa4-a8a4-5ed3fda41d6c --hostid=f820f793-c892-4aa4-a8a4-5ed3fda41d6c -a 10.0.0.1 -t tcp -s 4420 00:22:22.375 00:22:22.375 Discovery Log Number of Records 2, Generation counter 2 00:22:22.375 =====Discovery Log Entry 0====== 00:22:22.375 trtype: tcp 00:22:22.375 adrfam: ipv4 00:22:22.375 subtype: current discovery subsystem 00:22:22.375 treq: not specified, sq flow control disable supported 00:22:22.375 portid: 1 00:22:22.375 trsvcid: 4420 00:22:22.375 subnqn: nqn.2014-08.org.nvmexpress.discovery 00:22:22.375 traddr: 10.0.0.1 00:22:22.375 eflags: none 00:22:22.375 sectype: none 00:22:22.375 =====Discovery Log Entry 1====== 00:22:22.375 trtype: tcp 00:22:22.375 adrfam: ipv4 00:22:22.375 subtype: nvme subsystem 00:22:22.375 treq: not specified, sq flow control disable supported 00:22:22.375 portid: 1 00:22:22.375 trsvcid: 4420 00:22:22.375 subnqn: nqn.2024-02.io.spdk:cnode0 00:22:22.375 traddr: 10.0.0.1 00:22:22.375 eflags: none 00:22:22.375 sectype: none 00:22:22.375 11:52:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@36 -- # mkdir /sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0 00:22:22.375 11:52:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@37 -- # echo 0 00:22:22.375 11:52:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@38 -- # ln -s /sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0 /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0/allowed_hosts/nqn.2024-02.io.spdk:host0 00:22:22.375 11:52:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@88 -- # nvmet_auth_set_key sha256 ffdhe2048 1 00:22:22.375 11:52:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:22:22.375 11:52:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:22:22.375 11:52:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:22:22.375 11:52:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:22:22.375 11:52:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NWE4MmY0OTIxMDRkYjA5OGFlZjQ0MjQwOWY4YTcwMjU0NTBiNWIyYjY2MjE4YWE25utBXg==: 00:22:22.375 11:52:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # 
ckey=DHHC-1:02:YmIzMDNiZjE2NDFiMzJjZGVmNTQ3OTU0ZTEyZDJjMjBiYzEyZmRjMGU4Yzg0ZWExGGWPVA==: 00:22:22.375 11:52:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:22:22.375 11:52:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:22:22.375 11:52:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NWE4MmY0OTIxMDRkYjA5OGFlZjQ0MjQwOWY4YTcwMjU0NTBiNWIyYjY2MjE4YWE25utBXg==: 00:22:22.375 11:52:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:YmIzMDNiZjE2NDFiMzJjZGVmNTQ3OTU0ZTEyZDJjMjBiYzEyZmRjMGU4Yzg0ZWExGGWPVA==: ]] 00:22:22.375 11:52:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:YmIzMDNiZjE2NDFiMzJjZGVmNTQ3OTU0ZTEyZDJjMjBiYzEyZmRjMGU4Yzg0ZWExGGWPVA==: 00:22:22.375 11:52:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@93 -- # IFS=, 00:22:22.375 11:52:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@94 -- # printf %s sha256,sha384,sha512 00:22:22.375 11:52:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@93 -- # IFS=, 00:22:22.375 11:52:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@94 -- # printf %s ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:22:22.375 11:52:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@93 -- # connect_authenticate sha256,sha384,sha512 ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 1 00:22:22.375 11:52:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:22:22.375 11:52:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256,sha384,sha512 00:22:22.375 11:52:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:22:22.375 11:52:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:22:22.375 11:52:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:22:22.375 11:52:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256,sha384,sha512 --dhchap-dhgroups ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:22:22.375 11:52:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:22.375 11:52:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:22.375 11:52:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:22.375 11:52:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:22:22.375 11:52:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:22:22.375 11:52:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:22:22.375 11:52:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:22:22.375 11:52:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:22:22.375 11:52:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:22:22.375 11:52:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:22:22.375 11:52:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:22:22.375 11:52:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:22:22.375 11:52:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 
10.0.0.1 ]] 00:22:22.375 11:52:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:22:22.375 11:52:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:22:22.375 11:52:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:22.375 11:52:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:22.634 nvme0n1 00:22:22.634 11:52:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:22.634 11:52:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:22:22.634 11:52:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:22:22.634 11:52:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:22.634 11:52:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:22.634 11:52:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:22.634 11:52:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:22.634 11:52:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:22:22.634 11:52:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:22.634 11:52:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:22.634 11:52:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:22.634 11:52:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@100 -- # for digest in "${digests[@]}" 00:22:22.634 11:52:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:22:22.634 11:52:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:22:22.634 11:52:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe2048 0 00:22:22.634 11:52:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:22:22.634 11:52:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:22:22.634 11:52:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:22:22.634 11:52:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:22:22.634 11:52:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NzU1YmQwNTlmZGE0MTIzMzJkNjkzZjM3N2Q1YTVkYTlGSJpb: 00:22:22.634 11:52:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:MDQyODRiZDIwNjk3YzU5MGViNDg3NDkxYjEzYzUzZWNiOWRkMTg5NDBmMGU1NTQ2OTA3M2U1ZmE0MTE3NTE4MNwSXOU=: 00:22:22.634 11:52:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:22:22.634 11:52:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:22:22.634 11:52:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NzU1YmQwNTlmZGE0MTIzMzJkNjkzZjM3N2Q1YTVkYTlGSJpb: 00:22:22.634 11:52:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:MDQyODRiZDIwNjk3YzU5MGViNDg3NDkxYjEzYzUzZWNiOWRkMTg5NDBmMGU1NTQ2OTA3M2U1ZmE0MTE3NTE4MNwSXOU=: ]] 00:22:22.634 11:52:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@51 -- # echo DHHC-1:03:MDQyODRiZDIwNjk3YzU5MGViNDg3NDkxYjEzYzUzZWNiOWRkMTg5NDBmMGU1NTQ2OTA3M2U1ZmE0MTE3NTE4MNwSXOU=: 00:22:22.634 11:52:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe2048 0 00:22:22.634 11:52:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:22:22.634 11:52:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:22:22.634 11:52:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:22:22.634 11:52:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:22:22.634 11:52:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:22:22.634 11:52:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:22:22.634 11:52:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:22.634 11:52:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:22.634 11:52:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:22.634 11:52:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:22:22.634 11:52:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:22:22.634 11:52:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:22:22.634 11:52:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:22:22.634 11:52:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:22:22.634 11:52:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:22:22.634 11:52:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:22:22.634 11:52:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:22:22.634 11:52:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:22:22.634 11:52:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:22:22.634 11:52:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:22:22.634 11:52:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:22:22.634 11:52:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:22.634 11:52:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:22.634 nvme0n1 00:22:22.634 11:52:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:22.634 11:52:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:22:22.634 11:52:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:22:22.634 11:52:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:22.634 11:52:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:22.634 11:52:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:22.634 
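[editor's note] On the target side, the configure_kernel_target and nvmet_auth_init/nvmet_auth_set_key traces a few entries back drive the Linux nvmet configfs interface directly: create the nqn.2024-02.io.spdk:cnode0 subsystem, back namespace 1 with the first unused local NVMe block device (the spdk-gpt.py/blkid probing skips devices that are in use), expose it on TCP 10.0.0.1:4420, then restrict access to nqn.2024-02.io.spdk:host0 and give the kernel that host's DH-HMAC-CHAP material. A condensed sketch follows; the xtrace output does not show redirection targets, so the configfs attribute names (attr_model, attr_allow_any_host, dhchap_*) are inferred, and the DHHC-1 values are truncated here.

  nvmet=/sys/kernel/config/nvmet
  subsys=$nvmet/subsystems/nqn.2024-02.io.spdk:cnode0
  port=$nvmet/ports/1
  host=$nvmet/hosts/nqn.2024-02.io.spdk:host0

  mkdir "$subsys" "$subsys/namespaces/1" "$port"
  echo SPDK-nqn.2024-02.io.spdk:cnode0 > "$subsys/attr_model"
  echo 1             > "$subsys/attr_allow_any_host"
  echo /dev/nvme1n1  > "$subsys/namespaces/1/device_path"
  echo 1             > "$subsys/namespaces/1/enable"
  echo 10.0.0.1 > "$port/addr_traddr"
  echo tcp      > "$port/addr_trtype"
  echo 4420     > "$port/addr_trsvcid"
  echo ipv4     > "$port/addr_adrfam"
  ln -s "$subsys" "$port/subsystems/"

  # nvmet_auth_init / nvmet_auth_set_key: allow only host0 and install its
  # DH-HMAC-CHAP parameters (attribute names inferred from the echoed values).
  mkdir "$host"
  echo 0 > "$subsys/attr_allow_any_host"
  ln -s "$host" "$subsys/allowed_hosts/"
  echo 'hmac(sha256)'            > "$host/dhchap_hash"
  echo ffdhe2048                 > "$host/dhchap_dhgroup"
  echo "DHHC-1:00:NWE4...BXg==:" > "$host/dhchap_key"       # key1, truncated
  echo "DHHC-1:02:YmIz...PVA==:" > "$host/dhchap_ctrl_key"  # ckey1, truncated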
11:52:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:22.634 11:52:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:22:22.634 11:52:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:22.634 11:52:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:22.894 11:52:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:22.894 11:52:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:22:22.894 11:52:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe2048 1 00:22:22.894 11:52:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:22:22.894 11:52:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:22:22.894 11:52:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:22:22.894 11:52:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:22:22.894 11:52:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NWE4MmY0OTIxMDRkYjA5OGFlZjQ0MjQwOWY4YTcwMjU0NTBiNWIyYjY2MjE4YWE25utBXg==: 00:22:22.894 11:52:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:YmIzMDNiZjE2NDFiMzJjZGVmNTQ3OTU0ZTEyZDJjMjBiYzEyZmRjMGU4Yzg0ZWExGGWPVA==: 00:22:22.894 11:52:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:22:22.894 11:52:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:22:22.894 11:52:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NWE4MmY0OTIxMDRkYjA5OGFlZjQ0MjQwOWY4YTcwMjU0NTBiNWIyYjY2MjE4YWE25utBXg==: 00:22:22.894 11:52:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:YmIzMDNiZjE2NDFiMzJjZGVmNTQ3OTU0ZTEyZDJjMjBiYzEyZmRjMGU4Yzg0ZWExGGWPVA==: ]] 00:22:22.894 11:52:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:YmIzMDNiZjE2NDFiMzJjZGVmNTQ3OTU0ZTEyZDJjMjBiYzEyZmRjMGU4Yzg0ZWExGGWPVA==: 00:22:22.894 11:52:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe2048 1 00:22:22.894 11:52:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:22:22.894 11:52:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:22:22.894 11:52:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:22:22.894 11:52:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:22:22.894 11:52:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:22:22.894 11:52:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:22:22.894 11:52:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:22.894 11:52:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:22.894 11:52:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:22.894 11:52:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:22:22.894 11:52:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:22:22.894 11:52:52 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:22:22.894 11:52:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:22:22.894 11:52:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:22:22.894 11:52:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:22:22.894 11:52:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:22:22.894 11:52:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:22:22.894 11:52:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:22:22.894 11:52:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:22:22.894 11:52:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:22:22.894 11:52:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:22:22.894 11:52:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:22.894 11:52:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:22.894 nvme0n1 00:22:22.894 11:52:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:22.894 11:52:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:22:22.894 11:52:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:22.894 11:52:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:22:22.894 11:52:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:22.894 11:52:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:22.894 11:52:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:22.894 11:52:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:22:22.894 11:52:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:22.894 11:52:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:22.894 11:52:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:22.894 11:52:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:22:22.894 11:52:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe2048 2 00:22:22.894 11:52:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:22:22.894 11:52:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:22:22.894 11:52:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:22:22.894 11:52:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:22:22.894 11:52:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:YjRjOGNmYjNlOWE5MjkwYzMyNGNhMjRiM2ZlMjk4Zjl7KAxS: 00:22:22.894 11:52:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:MTU1NmE1YzA3OTRhNWZlZGEyNjE4Njg3MWM3YTQzNThceavh: 00:22:22.894 11:52:52 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:22:22.894 11:52:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:22:22.894 11:52:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:YjRjOGNmYjNlOWE5MjkwYzMyNGNhMjRiM2ZlMjk4Zjl7KAxS: 00:22:22.894 11:52:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:MTU1NmE1YzA3OTRhNWZlZGEyNjE4Njg3MWM3YTQzNThceavh: ]] 00:22:22.894 11:52:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:MTU1NmE1YzA3OTRhNWZlZGEyNjE4Njg3MWM3YTQzNThceavh: 00:22:22.894 11:52:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe2048 2 00:22:22.894 11:52:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:22:22.894 11:52:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:22:22.894 11:52:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:22:22.894 11:52:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:22:22.894 11:52:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:22:22.894 11:52:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:22:22.894 11:52:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:22.894 11:52:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:22.894 11:52:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:22.894 11:52:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:22:22.894 11:52:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:22:22.894 11:52:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:22:22.894 11:52:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:22:22.894 11:52:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:22:22.894 11:52:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:22:22.894 11:52:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:22:22.894 11:52:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:22:22.894 11:52:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:22:22.894 11:52:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:22:22.894 11:52:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:22:22.894 11:52:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:22:22.894 11:52:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:22.894 11:52:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:23.153 nvme0n1 00:22:23.153 11:52:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:23.153 11:52:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:22:23.153 11:52:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:22:23.153 11:52:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:23.153 11:52:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:23.153 11:52:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:23.153 11:52:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:23.153 11:52:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:22:23.153 11:52:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:23.153 11:52:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:23.153 11:52:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:23.153 11:52:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:22:23.153 11:52:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe2048 3 00:22:23.153 11:52:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:22:23.153 11:52:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:22:23.153 11:52:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:22:23.153 11:52:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:22:23.153 11:52:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:MTI1ZTRmMTBiM2M3NWI4ZGY2YzI5NzVmMTY3NDhhM2MxMTgyZmZhNWM3MzEzNWZmmoyemg==: 00:22:23.153 11:52:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:NzMwY2FhZjZmZTlmYzU3ZjhiMGU4NDBlY2M2NjRlZTRrRaMi: 00:22:23.153 11:52:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:22:23.153 11:52:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:22:23.153 11:52:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:MTI1ZTRmMTBiM2M3NWI4ZGY2YzI5NzVmMTY3NDhhM2MxMTgyZmZhNWM3MzEzNWZmmoyemg==: 00:22:23.153 11:52:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:NzMwY2FhZjZmZTlmYzU3ZjhiMGU4NDBlY2M2NjRlZTRrRaMi: ]] 00:22:23.153 11:52:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:NzMwY2FhZjZmZTlmYzU3ZjhiMGU4NDBlY2M2NjRlZTRrRaMi: 00:22:23.153 11:52:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe2048 3 00:22:23.153 11:52:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:22:23.153 11:52:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:22:23.153 11:52:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:22:23.153 11:52:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:22:23.153 11:52:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:22:23.153 11:52:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:22:23.154 11:52:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:23.154 11:52:53 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:23.154 11:52:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:23.154 11:52:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:22:23.154 11:52:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:22:23.154 11:52:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:22:23.154 11:52:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:22:23.154 11:52:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:22:23.154 11:52:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:22:23.154 11:52:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:22:23.154 11:52:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:22:23.154 11:52:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:22:23.154 11:52:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:22:23.154 11:52:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:22:23.154 11:52:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:22:23.154 11:52:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:23.154 11:52:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:23.154 nvme0n1 00:22:23.154 11:52:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:23.154 11:52:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:22:23.154 11:52:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:23.154 11:52:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:23.154 11:52:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:22:23.154 11:52:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:23.154 11:52:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:23.154 11:52:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:22:23.154 11:52:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:23.154 11:52:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:23.412 11:52:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:23.412 11:52:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:22:23.412 11:52:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe2048 4 00:22:23.412 11:52:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:22:23.412 11:52:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:22:23.412 11:52:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:22:23.412 
11:52:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:22:23.412 11:52:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:ZGNlMmJjNmIxMzAwODU5NDQ4ZmU0MDM4ZDc2NDlhZTFmNjkxODgzY2M2YWQ2NDFkMTEzZGNmMTk1MWRlMDI3OPV7tT0=: 00:22:23.412 11:52:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:22:23.412 11:52:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:22:23.412 11:52:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:22:23.412 11:52:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:ZGNlMmJjNmIxMzAwODU5NDQ4ZmU0MDM4ZDc2NDlhZTFmNjkxODgzY2M2YWQ2NDFkMTEzZGNmMTk1MWRlMDI3OPV7tT0=: 00:22:23.413 11:52:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:22:23.413 11:52:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe2048 4 00:22:23.413 11:52:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:22:23.413 11:52:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:22:23.413 11:52:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:22:23.413 11:52:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:22:23.413 11:52:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:22:23.413 11:52:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:22:23.413 11:52:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:23.413 11:52:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:23.413 11:52:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:23.413 11:52:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:22:23.413 11:52:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:22:23.413 11:52:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:22:23.413 11:52:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:22:23.413 11:52:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:22:23.413 11:52:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:22:23.413 11:52:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:22:23.413 11:52:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:22:23.413 11:52:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:22:23.413 11:52:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:22:23.413 11:52:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:22:23.413 11:52:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:22:23.413 11:52:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:23.413 11:52:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 
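[editor's note] Each connect_authenticate iteration traced above is the host-side half of the handshake: bdev_nvme_set_options declares which DH-HMAC-CHAP digests and DH groups the initiator will accept, and bdev_nvme_attach_controller presents one of the registered keys (plus its controller key when one exists) while connecting to the kernel target. Stripped of the trace plumbing, one iteration of the digest/dhgroup/keyid loop looks like this, using the same RPCs and keyring names registered earlier:

  scripts/rpc.py bdev_nvme_set_options \
      --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048
  scripts/rpc.py bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 \
      -a 10.0.0.1 -s 4420 \
      -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 \
      --dhchap-key key0 --dhchap-ctrlr-key ckey0
  scripts/rpc.py bdev_nvme_get_controllers          # expects "nvme0" back
  scripts/rpc.py bdev_nvme_detach_controller nvme0  # tear down before the next combination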
00:22:23.413 nvme0n1 00:22:23.413 11:52:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:23.413 11:52:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:22:23.413 11:52:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:23.413 11:52:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:23.413 11:52:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:22:23.413 11:52:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:23.413 11:52:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:23.413 11:52:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:22:23.413 11:52:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:23.413 11:52:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:23.413 11:52:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:23.413 11:52:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:22:23.413 11:52:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:22:23.413 11:52:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe3072 0 00:22:23.413 11:52:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:22:23.413 11:52:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:22:23.413 11:52:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:22:23.413 11:52:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:22:23.413 11:52:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NzU1YmQwNTlmZGE0MTIzMzJkNjkzZjM3N2Q1YTVkYTlGSJpb: 00:22:23.413 11:52:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:MDQyODRiZDIwNjk3YzU5MGViNDg3NDkxYjEzYzUzZWNiOWRkMTg5NDBmMGU1NTQ2OTA3M2U1ZmE0MTE3NTE4MNwSXOU=: 00:22:23.413 11:52:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:22:23.413 11:52:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:22:23.672 11:52:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NzU1YmQwNTlmZGE0MTIzMzJkNjkzZjM3N2Q1YTVkYTlGSJpb: 00:22:23.672 11:52:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:MDQyODRiZDIwNjk3YzU5MGViNDg3NDkxYjEzYzUzZWNiOWRkMTg5NDBmMGU1NTQ2OTA3M2U1ZmE0MTE3NTE4MNwSXOU=: ]] 00:22:23.672 11:52:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:MDQyODRiZDIwNjk3YzU5MGViNDg3NDkxYjEzYzUzZWNiOWRkMTg5NDBmMGU1NTQ2OTA3M2U1ZmE0MTE3NTE4MNwSXOU=: 00:22:23.672 11:52:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe3072 0 00:22:23.672 11:52:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:22:23.672 11:52:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:22:23.672 11:52:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:22:23.672 11:52:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:22:23.672 11:52:53 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:22:23.672 11:52:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:22:23.672 11:52:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:23.672 11:52:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:23.672 11:52:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:23.672 11:52:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:22:23.672 11:52:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:22:23.672 11:52:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:22:23.672 11:52:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:22:23.672 11:52:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:22:23.672 11:52:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:22:23.672 11:52:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:22:23.672 11:52:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:22:23.672 11:52:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:22:23.672 11:52:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:22:23.672 11:52:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:22:23.672 11:52:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:22:23.672 11:52:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:23.672 11:52:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:23.930 nvme0n1 00:22:23.930 11:52:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:23.930 11:52:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:22:23.930 11:52:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:22:23.930 11:52:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:23.930 11:52:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:23.930 11:52:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:23.930 11:52:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:23.930 11:52:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:22:23.930 11:52:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:23.930 11:52:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:23.930 11:52:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:23.930 11:52:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:22:23.930 11:52:53 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe3072 1 00:22:23.930 11:52:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:22:23.930 11:52:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:22:23.930 11:52:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:22:23.930 11:52:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:22:23.930 11:52:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NWE4MmY0OTIxMDRkYjA5OGFlZjQ0MjQwOWY4YTcwMjU0NTBiNWIyYjY2MjE4YWE25utBXg==: 00:22:23.930 11:52:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:YmIzMDNiZjE2NDFiMzJjZGVmNTQ3OTU0ZTEyZDJjMjBiYzEyZmRjMGU4Yzg0ZWExGGWPVA==: 00:22:23.930 11:52:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:22:23.930 11:52:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:22:23.930 11:52:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NWE4MmY0OTIxMDRkYjA5OGFlZjQ0MjQwOWY4YTcwMjU0NTBiNWIyYjY2MjE4YWE25utBXg==: 00:22:23.930 11:52:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:YmIzMDNiZjE2NDFiMzJjZGVmNTQ3OTU0ZTEyZDJjMjBiYzEyZmRjMGU4Yzg0ZWExGGWPVA==: ]] 00:22:23.930 11:52:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:YmIzMDNiZjE2NDFiMzJjZGVmNTQ3OTU0ZTEyZDJjMjBiYzEyZmRjMGU4Yzg0ZWExGGWPVA==: 00:22:23.930 11:52:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe3072 1 00:22:23.930 11:52:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:22:23.930 11:52:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:22:23.930 11:52:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:22:23.930 11:52:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:22:23.930 11:52:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:22:23.931 11:52:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:22:23.931 11:52:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:23.931 11:52:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:23.931 11:52:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:23.931 11:52:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:22:23.931 11:52:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:22:23.931 11:52:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:22:23.931 11:52:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:22:23.931 11:52:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:22:23.931 11:52:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:22:23.931 11:52:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:22:23.931 11:52:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:22:23.931 11:52:53 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:22:23.931 11:52:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:22:23.931 11:52:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:22:23.931 11:52:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:22:23.931 11:52:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:23.931 11:52:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:23.931 nvme0n1 00:22:23.931 11:52:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:23.931 11:52:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:22:23.931 11:52:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:23.931 11:52:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:22:23.931 11:52:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:24.190 11:52:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:24.190 11:52:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:24.190 11:52:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:22:24.190 11:52:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:24.190 11:52:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:24.190 11:52:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:24.190 11:52:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:22:24.190 11:52:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe3072 2 00:22:24.190 11:52:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:22:24.190 11:52:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:22:24.190 11:52:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:22:24.190 11:52:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:22:24.190 11:52:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:YjRjOGNmYjNlOWE5MjkwYzMyNGNhMjRiM2ZlMjk4Zjl7KAxS: 00:22:24.190 11:52:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:MTU1NmE1YzA3OTRhNWZlZGEyNjE4Njg3MWM3YTQzNThceavh: 00:22:24.190 11:52:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:22:24.190 11:52:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:22:24.190 11:52:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:YjRjOGNmYjNlOWE5MjkwYzMyNGNhMjRiM2ZlMjk4Zjl7KAxS: 00:22:24.190 11:52:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:MTU1NmE1YzA3OTRhNWZlZGEyNjE4Njg3MWM3YTQzNThceavh: ]] 00:22:24.190 11:52:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:MTU1NmE1YzA3OTRhNWZlZGEyNjE4Njg3MWM3YTQzNThceavh: 00:22:24.190 11:52:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@104 -- # connect_authenticate sha256 ffdhe3072 2 00:22:24.190 11:52:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:22:24.190 11:52:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:22:24.190 11:52:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:22:24.190 11:52:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:22:24.191 11:52:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:22:24.191 11:52:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:22:24.191 11:52:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:24.191 11:52:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:24.191 11:52:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:24.191 11:52:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:22:24.191 11:52:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:22:24.191 11:52:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:22:24.191 11:52:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:22:24.191 11:52:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:22:24.191 11:52:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:22:24.191 11:52:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:22:24.191 11:52:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:22:24.191 11:52:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:22:24.191 11:52:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:22:24.191 11:52:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:22:24.191 11:52:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:22:24.191 11:52:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:24.191 11:52:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:24.191 nvme0n1 00:22:24.191 11:52:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:24.191 11:52:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:22:24.191 11:52:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:22:24.191 11:52:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:24.191 11:52:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:24.191 11:52:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:24.191 11:52:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:24.191 11:52:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd 
bdev_nvme_detach_controller nvme0 00:22:24.191 11:52:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:24.191 11:52:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:24.191 11:52:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:24.191 11:52:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:22:24.191 11:52:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe3072 3 00:22:24.191 11:52:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:22:24.191 11:52:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:22:24.191 11:52:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:22:24.191 11:52:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:22:24.191 11:52:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:MTI1ZTRmMTBiM2M3NWI4ZGY2YzI5NzVmMTY3NDhhM2MxMTgyZmZhNWM3MzEzNWZmmoyemg==: 00:22:24.191 11:52:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:NzMwY2FhZjZmZTlmYzU3ZjhiMGU4NDBlY2M2NjRlZTRrRaMi: 00:22:24.191 11:52:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:22:24.191 11:52:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:22:24.191 11:52:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:MTI1ZTRmMTBiM2M3NWI4ZGY2YzI5NzVmMTY3NDhhM2MxMTgyZmZhNWM3MzEzNWZmmoyemg==: 00:22:24.191 11:52:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:NzMwY2FhZjZmZTlmYzU3ZjhiMGU4NDBlY2M2NjRlZTRrRaMi: ]] 00:22:24.191 11:52:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:NzMwY2FhZjZmZTlmYzU3ZjhiMGU4NDBlY2M2NjRlZTRrRaMi: 00:22:24.191 11:52:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe3072 3 00:22:24.191 11:52:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:22:24.191 11:52:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:22:24.191 11:52:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:22:24.191 11:52:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:22:24.191 11:52:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:22:24.191 11:52:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:22:24.191 11:52:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:24.191 11:52:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:24.191 11:52:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:24.450 11:52:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:22:24.450 11:52:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:22:24.450 11:52:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:22:24.450 11:52:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:22:24.450 11:52:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # 
ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:22:24.450 11:52:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:22:24.450 11:52:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:22:24.450 11:52:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:22:24.450 11:52:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:22:24.450 11:52:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:22:24.450 11:52:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:22:24.450 11:52:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:22:24.450 11:52:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:24.450 11:52:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:24.450 nvme0n1 00:22:24.450 11:52:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:24.450 11:52:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:22:24.450 11:52:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:24.450 11:52:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:24.450 11:52:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:22:24.450 11:52:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:24.450 11:52:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:24.450 11:52:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:22:24.450 11:52:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:24.450 11:52:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:24.450 11:52:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:24.450 11:52:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:22:24.450 11:52:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe3072 4 00:22:24.450 11:52:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:22:24.450 11:52:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:22:24.450 11:52:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:22:24.450 11:52:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:22:24.450 11:52:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:ZGNlMmJjNmIxMzAwODU5NDQ4ZmU0MDM4ZDc2NDlhZTFmNjkxODgzY2M2YWQ2NDFkMTEzZGNmMTk1MWRlMDI3OPV7tT0=: 00:22:24.450 11:52:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:22:24.451 11:52:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:22:24.451 11:52:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:22:24.451 11:52:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo 
DHHC-1:03:ZGNlMmJjNmIxMzAwODU5NDQ4ZmU0MDM4ZDc2NDlhZTFmNjkxODgzY2M2YWQ2NDFkMTEzZGNmMTk1MWRlMDI3OPV7tT0=: 00:22:24.451 11:52:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:22:24.451 11:52:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe3072 4 00:22:24.451 11:52:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:22:24.451 11:52:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:22:24.451 11:52:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:22:24.451 11:52:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:22:24.451 11:52:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:22:24.451 11:52:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:22:24.451 11:52:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:24.451 11:52:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:24.451 11:52:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:24.451 11:52:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:22:24.451 11:52:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:22:24.451 11:52:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:22:24.451 11:52:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:22:24.451 11:52:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:22:24.451 11:52:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:22:24.451 11:52:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:22:24.451 11:52:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:22:24.451 11:52:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:22:24.451 11:52:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:22:24.451 11:52:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:22:24.451 11:52:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:22:24.451 11:52:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:24.451 11:52:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:24.710 nvme0n1 00:22:24.710 11:52:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:24.710 11:52:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:22:24.710 11:52:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:24.710 11:52:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:22:24.710 11:52:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:24.710 11:52:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:24.710 11:52:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:24.710 11:52:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:22:24.710 11:52:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:24.710 11:52:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:24.710 11:52:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:24.710 11:52:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:22:24.710 11:52:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:22:24.710 11:52:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe4096 0 00:22:24.710 11:52:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:22:24.710 11:52:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:22:24.710 11:52:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:22:24.710 11:52:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:22:24.710 11:52:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NzU1YmQwNTlmZGE0MTIzMzJkNjkzZjM3N2Q1YTVkYTlGSJpb: 00:22:24.710 11:52:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:MDQyODRiZDIwNjk3YzU5MGViNDg3NDkxYjEzYzUzZWNiOWRkMTg5NDBmMGU1NTQ2OTA3M2U1ZmE0MTE3NTE4MNwSXOU=: 00:22:24.710 11:52:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:22:24.710 11:52:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:22:25.277 11:52:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NzU1YmQwNTlmZGE0MTIzMzJkNjkzZjM3N2Q1YTVkYTlGSJpb: 00:22:25.277 11:52:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:MDQyODRiZDIwNjk3YzU5MGViNDg3NDkxYjEzYzUzZWNiOWRkMTg5NDBmMGU1NTQ2OTA3M2U1ZmE0MTE3NTE4MNwSXOU=: ]] 00:22:25.277 11:52:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:MDQyODRiZDIwNjk3YzU5MGViNDg3NDkxYjEzYzUzZWNiOWRkMTg5NDBmMGU1NTQ2OTA3M2U1ZmE0MTE3NTE4MNwSXOU=: 00:22:25.277 11:52:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe4096 0 00:22:25.277 11:52:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:22:25.277 11:52:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:22:25.277 11:52:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:22:25.277 11:52:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:22:25.277 11:52:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:22:25.277 11:52:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:22:25.277 11:52:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:25.277 11:52:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:25.277 11:52:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:25.277 11:52:55 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:22:25.277 11:52:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:22:25.277 11:52:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:22:25.277 11:52:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:22:25.277 11:52:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:22:25.277 11:52:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:22:25.277 11:52:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:22:25.277 11:52:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:22:25.277 11:52:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:22:25.277 11:52:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:22:25.277 11:52:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:22:25.277 11:52:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:22:25.277 11:52:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:25.277 11:52:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:25.536 nvme0n1 00:22:25.536 11:52:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:25.536 11:52:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:22:25.536 11:52:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:22:25.536 11:52:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:25.536 11:52:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:25.536 11:52:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:25.536 11:52:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:25.536 11:52:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:22:25.536 11:52:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:25.536 11:52:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:25.536 11:52:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:25.536 11:52:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:22:25.536 11:52:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe4096 1 00:22:25.536 11:52:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:22:25.536 11:52:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:22:25.536 11:52:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:22:25.536 11:52:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:22:25.536 11:52:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # 
key=DHHC-1:00:NWE4MmY0OTIxMDRkYjA5OGFlZjQ0MjQwOWY4YTcwMjU0NTBiNWIyYjY2MjE4YWE25utBXg==: 00:22:25.536 11:52:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:YmIzMDNiZjE2NDFiMzJjZGVmNTQ3OTU0ZTEyZDJjMjBiYzEyZmRjMGU4Yzg0ZWExGGWPVA==: 00:22:25.536 11:52:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:22:25.536 11:52:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:22:25.536 11:52:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NWE4MmY0OTIxMDRkYjA5OGFlZjQ0MjQwOWY4YTcwMjU0NTBiNWIyYjY2MjE4YWE25utBXg==: 00:22:25.536 11:52:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:YmIzMDNiZjE2NDFiMzJjZGVmNTQ3OTU0ZTEyZDJjMjBiYzEyZmRjMGU4Yzg0ZWExGGWPVA==: ]] 00:22:25.536 11:52:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:YmIzMDNiZjE2NDFiMzJjZGVmNTQ3OTU0ZTEyZDJjMjBiYzEyZmRjMGU4Yzg0ZWExGGWPVA==: 00:22:25.536 11:52:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe4096 1 00:22:25.536 11:52:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:22:25.536 11:52:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:22:25.536 11:52:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:22:25.536 11:52:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:22:25.536 11:52:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:22:25.536 11:52:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:22:25.536 11:52:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:25.536 11:52:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:25.536 11:52:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:25.536 11:52:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:22:25.536 11:52:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:22:25.536 11:52:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:22:25.536 11:52:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:22:25.536 11:52:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:22:25.536 11:52:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:22:25.536 11:52:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:22:25.536 11:52:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:22:25.536 11:52:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:22:25.536 11:52:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:22:25.536 11:52:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:22:25.536 11:52:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:22:25.536 11:52:55 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:25.536 11:52:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:25.795 nvme0n1 00:22:25.795 11:52:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:25.795 11:52:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:22:25.795 11:52:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:25.795 11:52:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:22:25.795 11:52:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:25.795 11:52:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:25.795 11:52:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:25.795 11:52:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:22:25.795 11:52:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:25.795 11:52:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:25.795 11:52:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:25.795 11:52:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:22:25.795 11:52:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe4096 2 00:22:25.795 11:52:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:22:25.795 11:52:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:22:25.795 11:52:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:22:25.795 11:52:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:22:25.795 11:52:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:YjRjOGNmYjNlOWE5MjkwYzMyNGNhMjRiM2ZlMjk4Zjl7KAxS: 00:22:25.795 11:52:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:MTU1NmE1YzA3OTRhNWZlZGEyNjE4Njg3MWM3YTQzNThceavh: 00:22:25.795 11:52:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:22:25.795 11:52:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:22:25.795 11:52:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:YjRjOGNmYjNlOWE5MjkwYzMyNGNhMjRiM2ZlMjk4Zjl7KAxS: 00:22:25.795 11:52:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:MTU1NmE1YzA3OTRhNWZlZGEyNjE4Njg3MWM3YTQzNThceavh: ]] 00:22:25.795 11:52:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:MTU1NmE1YzA3OTRhNWZlZGEyNjE4Njg3MWM3YTQzNThceavh: 00:22:25.795 11:52:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe4096 2 00:22:25.795 11:52:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:22:25.795 11:52:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:22:25.795 11:52:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:22:25.795 11:52:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:22:25.795 11:52:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # 
ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:22:25.795 11:52:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:22:25.795 11:52:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:25.795 11:52:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:25.795 11:52:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:25.795 11:52:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:22:25.795 11:52:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:22:25.795 11:52:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:22:25.796 11:52:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:22:25.796 11:52:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:22:25.796 11:52:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:22:25.796 11:52:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:22:25.796 11:52:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:22:25.796 11:52:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:22:25.796 11:52:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:22:25.796 11:52:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:22:25.796 11:52:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:22:25.796 11:52:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:25.796 11:52:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:26.055 nvme0n1 00:22:26.055 11:52:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:26.055 11:52:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:22:26.055 11:52:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:22:26.055 11:52:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:26.055 11:52:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:26.055 11:52:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:26.055 11:52:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:26.055 11:52:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:22:26.055 11:52:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:26.055 11:52:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:26.055 11:52:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:26.055 11:52:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:22:26.055 11:52:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # 
nvmet_auth_set_key sha256 ffdhe4096 3 00:22:26.055 11:52:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:22:26.055 11:52:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:22:26.055 11:52:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:22:26.055 11:52:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:22:26.055 11:52:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:MTI1ZTRmMTBiM2M3NWI4ZGY2YzI5NzVmMTY3NDhhM2MxMTgyZmZhNWM3MzEzNWZmmoyemg==: 00:22:26.055 11:52:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:NzMwY2FhZjZmZTlmYzU3ZjhiMGU4NDBlY2M2NjRlZTRrRaMi: 00:22:26.055 11:52:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:22:26.055 11:52:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:22:26.055 11:52:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:MTI1ZTRmMTBiM2M3NWI4ZGY2YzI5NzVmMTY3NDhhM2MxMTgyZmZhNWM3MzEzNWZmmoyemg==: 00:22:26.055 11:52:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:NzMwY2FhZjZmZTlmYzU3ZjhiMGU4NDBlY2M2NjRlZTRrRaMi: ]] 00:22:26.055 11:52:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:NzMwY2FhZjZmZTlmYzU3ZjhiMGU4NDBlY2M2NjRlZTRrRaMi: 00:22:26.055 11:52:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe4096 3 00:22:26.055 11:52:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:22:26.055 11:52:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:22:26.055 11:52:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:22:26.055 11:52:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:22:26.055 11:52:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:22:26.055 11:52:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:22:26.055 11:52:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:26.055 11:52:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:26.055 11:52:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:26.055 11:52:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:22:26.055 11:52:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:22:26.055 11:52:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:22:26.055 11:52:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:22:26.055 11:52:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:22:26.055 11:52:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:22:26.055 11:52:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:22:26.055 11:52:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:22:26.055 11:52:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:22:26.055 11:52:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:22:26.055 11:52:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:22:26.055 11:52:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:22:26.055 11:52:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:26.055 11:52:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:26.314 nvme0n1 00:22:26.314 11:52:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:26.314 11:52:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:22:26.314 11:52:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:26.314 11:52:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:26.314 11:52:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:22:26.314 11:52:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:26.314 11:52:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:26.314 11:52:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:22:26.314 11:52:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:26.314 11:52:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:26.314 11:52:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:26.314 11:52:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:22:26.314 11:52:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe4096 4 00:22:26.314 11:52:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:22:26.314 11:52:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:22:26.314 11:52:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:22:26.314 11:52:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:22:26.314 11:52:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:ZGNlMmJjNmIxMzAwODU5NDQ4ZmU0MDM4ZDc2NDlhZTFmNjkxODgzY2M2YWQ2NDFkMTEzZGNmMTk1MWRlMDI3OPV7tT0=: 00:22:26.314 11:52:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:22:26.314 11:52:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:22:26.314 11:52:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:22:26.314 11:52:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:ZGNlMmJjNmIxMzAwODU5NDQ4ZmU0MDM4ZDc2NDlhZTFmNjkxODgzY2M2YWQ2NDFkMTEzZGNmMTk1MWRlMDI3OPV7tT0=: 00:22:26.314 11:52:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:22:26.314 11:52:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe4096 4 00:22:26.314 11:52:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:22:26.314 11:52:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:22:26.314 11:52:56 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:22:26.314 11:52:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:22:26.314 11:52:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:22:26.314 11:52:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:22:26.314 11:52:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:26.314 11:52:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:26.314 11:52:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:26.314 11:52:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:22:26.314 11:52:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:22:26.314 11:52:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:22:26.314 11:52:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:22:26.314 11:52:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:22:26.314 11:52:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:22:26.314 11:52:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:22:26.314 11:52:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:22:26.314 11:52:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:22:26.314 11:52:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:22:26.314 11:52:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:22:26.314 11:52:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:22:26.314 11:52:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:26.314 11:52:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:26.574 nvme0n1 00:22:26.574 11:52:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:26.574 11:52:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:22:26.574 11:52:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:26.574 11:52:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:26.574 11:52:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:22:26.574 11:52:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:26.574 11:52:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:26.574 11:52:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:22:26.574 11:52:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:26.574 11:52:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:26.574 11:52:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:26.574 11:52:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:22:26.574 11:52:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:22:26.574 11:52:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe6144 0 00:22:26.574 11:52:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:22:26.574 11:52:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:22:26.574 11:52:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:22:26.574 11:52:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:22:26.574 11:52:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NzU1YmQwNTlmZGE0MTIzMzJkNjkzZjM3N2Q1YTVkYTlGSJpb: 00:22:26.574 11:52:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:MDQyODRiZDIwNjk3YzU5MGViNDg3NDkxYjEzYzUzZWNiOWRkMTg5NDBmMGU1NTQ2OTA3M2U1ZmE0MTE3NTE4MNwSXOU=: 00:22:26.574 11:52:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:22:26.574 11:52:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:22:27.949 11:52:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NzU1YmQwNTlmZGE0MTIzMzJkNjkzZjM3N2Q1YTVkYTlGSJpb: 00:22:27.949 11:52:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:MDQyODRiZDIwNjk3YzU5MGViNDg3NDkxYjEzYzUzZWNiOWRkMTg5NDBmMGU1NTQ2OTA3M2U1ZmE0MTE3NTE4MNwSXOU=: ]] 00:22:27.949 11:52:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:MDQyODRiZDIwNjk3YzU5MGViNDg3NDkxYjEzYzUzZWNiOWRkMTg5NDBmMGU1NTQ2OTA3M2U1ZmE0MTE3NTE4MNwSXOU=: 00:22:27.949 11:52:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe6144 0 00:22:27.949 11:52:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:22:27.949 11:52:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:22:27.949 11:52:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:22:27.949 11:52:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:22:27.949 11:52:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:22:27.949 11:52:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:22:27.949 11:52:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:27.949 11:52:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:27.949 11:52:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:27.949 11:52:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:22:27.949 11:52:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:22:27.949 11:52:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:22:27.949 11:52:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:22:27.949 11:52:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:22:27.949 11:52:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:22:27.949 11:52:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:22:27.949 11:52:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:22:27.949 11:52:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:22:27.949 11:52:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:22:27.949 11:52:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:22:27.949 11:52:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:22:27.949 11:52:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:27.949 11:52:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:28.208 nvme0n1 00:22:28.208 11:52:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:28.208 11:52:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:22:28.208 11:52:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:22:28.208 11:52:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:28.208 11:52:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:28.208 11:52:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:28.208 11:52:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:28.208 11:52:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:22:28.208 11:52:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:28.208 11:52:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:28.208 11:52:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:28.208 11:52:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:22:28.208 11:52:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe6144 1 00:22:28.208 11:52:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:22:28.208 11:52:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:22:28.208 11:52:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:22:28.208 11:52:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:22:28.208 11:52:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NWE4MmY0OTIxMDRkYjA5OGFlZjQ0MjQwOWY4YTcwMjU0NTBiNWIyYjY2MjE4YWE25utBXg==: 00:22:28.208 11:52:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:YmIzMDNiZjE2NDFiMzJjZGVmNTQ3OTU0ZTEyZDJjMjBiYzEyZmRjMGU4Yzg0ZWExGGWPVA==: 00:22:28.208 11:52:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:22:28.208 11:52:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:22:28.208 11:52:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo 
DHHC-1:00:NWE4MmY0OTIxMDRkYjA5OGFlZjQ0MjQwOWY4YTcwMjU0NTBiNWIyYjY2MjE4YWE25utBXg==: 00:22:28.208 11:52:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:YmIzMDNiZjE2NDFiMzJjZGVmNTQ3OTU0ZTEyZDJjMjBiYzEyZmRjMGU4Yzg0ZWExGGWPVA==: ]] 00:22:28.208 11:52:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:YmIzMDNiZjE2NDFiMzJjZGVmNTQ3OTU0ZTEyZDJjMjBiYzEyZmRjMGU4Yzg0ZWExGGWPVA==: 00:22:28.208 11:52:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe6144 1 00:22:28.208 11:52:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:22:28.208 11:52:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:22:28.208 11:52:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:22:28.208 11:52:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:22:28.208 11:52:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:22:28.208 11:52:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:22:28.208 11:52:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:28.208 11:52:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:28.208 11:52:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:28.208 11:52:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:22:28.208 11:52:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:22:28.208 11:52:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:22:28.208 11:52:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:22:28.208 11:52:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:22:28.208 11:52:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:22:28.208 11:52:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:22:28.208 11:52:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:22:28.208 11:52:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:22:28.208 11:52:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:22:28.208 11:52:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:22:28.208 11:52:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:22:28.208 11:52:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:28.208 11:52:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:28.776 nvme0n1 00:22:28.776 11:52:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:28.776 11:52:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:22:28.776 11:52:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:28.776 11:52:58 
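For readers tracing the target-side half of each iteration: the host/auth.sh@48-@51 entries above (echo 'hmac(sha256)', echo ffdhe6144, echo DHHC-1:00:..., echo DHHC-1:03:...) are nvmet_auth_set_key publishing the digest, DH group, host secret, and optional controller secret to the target before the host attempts to connect. A minimal sketch of such a helper, assuming a Linux-kernel nvmet target configured through configfs with the usual dhchap_* host attributes (those paths and attribute names are an assumption and are not visible in this excerpt):

  # hypothetical sketch -- not the literal body of nvmet_auth_set_key
  nvmet_auth_set_key() {
      local digest=$1 dhgroup=$2 keyid=$3
      local host_dir=/sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0   # assumed path
      echo "hmac(${digest})" > "${host_dir}/dhchap_hash"       # e.g. hmac(sha256)
      echo "${dhgroup}"      > "${host_dir}/dhchap_dhgroup"    # e.g. ffdhe6144
      echo "${keys[keyid]}"  > "${host_dir}/dhchap_key"        # DHHC-1:... host secret
      [[ -n ${ckeys[keyid]} ]] &&
          echo "${ckeys[keyid]}" > "${host_dir}/dhchap_ctrl_key"   # bidirectional-auth secret, if any
  }
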
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:22:28.776 11:52:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:28.776 11:52:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:28.776 11:52:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:28.776 11:52:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:22:28.776 11:52:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:28.776 11:52:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:28.776 11:52:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:28.776 11:52:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:22:28.776 11:52:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe6144 2 00:22:28.776 11:52:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:22:28.776 11:52:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:22:28.776 11:52:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:22:28.776 11:52:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:22:28.776 11:52:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:YjRjOGNmYjNlOWE5MjkwYzMyNGNhMjRiM2ZlMjk4Zjl7KAxS: 00:22:28.776 11:52:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:MTU1NmE1YzA3OTRhNWZlZGEyNjE4Njg3MWM3YTQzNThceavh: 00:22:28.776 11:52:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:22:28.776 11:52:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:22:28.776 11:52:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:YjRjOGNmYjNlOWE5MjkwYzMyNGNhMjRiM2ZlMjk4Zjl7KAxS: 00:22:28.776 11:52:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:MTU1NmE1YzA3OTRhNWZlZGEyNjE4Njg3MWM3YTQzNThceavh: ]] 00:22:28.776 11:52:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:MTU1NmE1YzA3OTRhNWZlZGEyNjE4Njg3MWM3YTQzNThceavh: 00:22:28.776 11:52:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe6144 2 00:22:28.776 11:52:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:22:28.776 11:52:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:22:28.776 11:52:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:22:28.776 11:52:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:22:28.776 11:52:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:22:28.776 11:52:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:22:28.776 11:52:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:28.776 11:52:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:28.776 11:52:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:28.776 11:52:58 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:22:28.776 11:52:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:22:28.776 11:52:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:22:28.776 11:52:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:22:28.776 11:52:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:22:28.776 11:52:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:22:28.776 11:52:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:22:28.776 11:52:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:22:28.776 11:52:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:22:28.776 11:52:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:22:28.776 11:52:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:22:28.776 11:52:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:22:28.776 11:52:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:28.776 11:52:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:29.035 nvme0n1 00:22:29.035 11:52:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:29.035 11:52:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:22:29.035 11:52:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:22:29.035 11:52:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:29.035 11:52:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:29.035 11:52:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:29.035 11:52:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:29.035 11:52:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:22:29.035 11:52:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:29.035 11:52:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:29.035 11:52:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:29.035 11:52:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:22:29.035 11:52:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe6144 3 00:22:29.035 11:52:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:22:29.035 11:52:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:22:29.035 11:52:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:22:29.035 11:52:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:22:29.035 11:52:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # 
key=DHHC-1:02:MTI1ZTRmMTBiM2M3NWI4ZGY2YzI5NzVmMTY3NDhhM2MxMTgyZmZhNWM3MzEzNWZmmoyemg==: 00:22:29.035 11:52:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:NzMwY2FhZjZmZTlmYzU3ZjhiMGU4NDBlY2M2NjRlZTRrRaMi: 00:22:29.035 11:52:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:22:29.035 11:52:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:22:29.035 11:52:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:MTI1ZTRmMTBiM2M3NWI4ZGY2YzI5NzVmMTY3NDhhM2MxMTgyZmZhNWM3MzEzNWZmmoyemg==: 00:22:29.035 11:52:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:NzMwY2FhZjZmZTlmYzU3ZjhiMGU4NDBlY2M2NjRlZTRrRaMi: ]] 00:22:29.035 11:52:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:NzMwY2FhZjZmZTlmYzU3ZjhiMGU4NDBlY2M2NjRlZTRrRaMi: 00:22:29.035 11:52:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe6144 3 00:22:29.035 11:52:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:22:29.035 11:52:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:22:29.035 11:52:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:22:29.035 11:52:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:22:29.035 11:52:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:22:29.035 11:52:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:22:29.035 11:52:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:29.035 11:52:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:29.035 11:52:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:29.035 11:52:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:22:29.035 11:52:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:22:29.035 11:52:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:22:29.035 11:52:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:22:29.036 11:52:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:22:29.036 11:52:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:22:29.036 11:52:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:22:29.036 11:52:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:22:29.036 11:52:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:22:29.036 11:52:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:22:29.036 11:52:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:22:29.036 11:52:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:22:29.036 11:52:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:29.036 
11:52:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:29.295 nvme0n1 00:22:29.295 11:52:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:29.295 11:52:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:22:29.295 11:52:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:29.295 11:52:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:22:29.295 11:52:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:29.295 11:52:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:29.295 11:52:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:29.295 11:52:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:22:29.295 11:52:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:29.295 11:52:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:29.295 11:52:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:29.295 11:52:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:22:29.295 11:52:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe6144 4 00:22:29.295 11:52:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:22:29.295 11:52:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:22:29.295 11:52:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:22:29.295 11:52:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:22:29.295 11:52:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:ZGNlMmJjNmIxMzAwODU5NDQ4ZmU0MDM4ZDc2NDlhZTFmNjkxODgzY2M2YWQ2NDFkMTEzZGNmMTk1MWRlMDI3OPV7tT0=: 00:22:29.295 11:52:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:22:29.295 11:52:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:22:29.295 11:52:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:22:29.295 11:52:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:ZGNlMmJjNmIxMzAwODU5NDQ4ZmU0MDM4ZDc2NDlhZTFmNjkxODgzY2M2YWQ2NDFkMTEzZGNmMTk1MWRlMDI3OPV7tT0=: 00:22:29.295 11:52:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:22:29.295 11:52:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe6144 4 00:22:29.295 11:52:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:22:29.295 11:52:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:22:29.295 11:52:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:22:29.295 11:52:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:22:29.295 11:52:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:22:29.295 11:52:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:22:29.295 11:52:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:22:29.295 11:52:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:29.554 11:52:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:29.554 11:52:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:22:29.554 11:52:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:22:29.554 11:52:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:22:29.554 11:52:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:22:29.554 11:52:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:22:29.554 11:52:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:22:29.554 11:52:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:22:29.554 11:52:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:22:29.554 11:52:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:22:29.554 11:52:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:22:29.554 11:52:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:22:29.554 11:52:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:22:29.554 11:52:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:29.554 11:52:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:29.814 nvme0n1 00:22:29.814 11:52:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:29.814 11:52:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:22:29.814 11:52:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:29.814 11:52:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:22:29.814 11:52:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:29.814 11:52:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:29.814 11:52:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:29.814 11:52:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:22:29.814 11:52:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:29.814 11:52:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:29.814 11:52:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:29.814 11:52:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:22:29.814 11:52:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:22:29.814 11:52:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe8192 0 00:22:29.814 11:52:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:22:29.814 11:52:59 
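On the host side, each connect_authenticate call (host/auth.sh@55-@61 in the trace) comes down to two SPDK RPCs that appear verbatim above: bdev_nvme_set_options restricts the DH-HMAC-CHAP digests and DH groups the initiator will offer, and bdev_nvme_attach_controller connects to 10.0.0.1:4420 with a named host key (plus a controller key when one exists; the keyid=4 iterations attach with --dhchap-key only because ckey is empty there). A standalone reproduction might look like the following, assuming the secrets were registered with SPDK's keyring under the names key0/ckey0 earlier in the run (that registration is not part of this excerpt):

  rpc=scripts/rpc.py   # rpc_cmd in the log is a thin wrapper around this
  $rpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144
  $rpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 \
      -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 \
      --dhchap-key key0 --dhchap-ctrlr-key ckey0
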
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:22:29.814 11:52:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:22:29.814 11:52:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:22:29.814 11:52:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NzU1YmQwNTlmZGE0MTIzMzJkNjkzZjM3N2Q1YTVkYTlGSJpb: 00:22:29.814 11:52:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:MDQyODRiZDIwNjk3YzU5MGViNDg3NDkxYjEzYzUzZWNiOWRkMTg5NDBmMGU1NTQ2OTA3M2U1ZmE0MTE3NTE4MNwSXOU=: 00:22:29.814 11:52:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:22:29.814 11:52:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:22:29.814 11:52:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NzU1YmQwNTlmZGE0MTIzMzJkNjkzZjM3N2Q1YTVkYTlGSJpb: 00:22:29.814 11:52:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:MDQyODRiZDIwNjk3YzU5MGViNDg3NDkxYjEzYzUzZWNiOWRkMTg5NDBmMGU1NTQ2OTA3M2U1ZmE0MTE3NTE4MNwSXOU=: ]] 00:22:29.814 11:52:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:MDQyODRiZDIwNjk3YzU5MGViNDg3NDkxYjEzYzUzZWNiOWRkMTg5NDBmMGU1NTQ2OTA3M2U1ZmE0MTE3NTE4MNwSXOU=: 00:22:29.814 11:52:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe8192 0 00:22:29.814 11:52:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:22:29.814 11:52:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:22:29.814 11:52:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:22:29.814 11:52:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:22:29.814 11:52:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:22:29.814 11:52:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:22:29.814 11:52:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:29.814 11:52:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:29.814 11:52:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:29.814 11:52:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:22:29.814 11:52:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:22:29.814 11:52:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:22:29.814 11:52:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:22:29.814 11:52:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:22:29.814 11:52:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:22:29.814 11:52:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:22:29.814 11:52:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:22:29.814 11:52:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:22:29.814 11:52:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:22:29.814 11:52:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@783 -- # echo 10.0.0.1 00:22:29.814 11:52:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:22:29.814 11:52:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:29.814 11:52:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:30.382 nvme0n1 00:22:30.382 11:53:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:30.382 11:53:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:22:30.382 11:53:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:22:30.382 11:53:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:30.382 11:53:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:30.382 11:53:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:30.382 11:53:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:30.382 11:53:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:22:30.382 11:53:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:30.382 11:53:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:30.382 11:53:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:30.382 11:53:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:22:30.382 11:53:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe8192 1 00:22:30.382 11:53:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:22:30.382 11:53:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:22:30.382 11:53:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:22:30.382 11:53:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:22:30.382 11:53:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NWE4MmY0OTIxMDRkYjA5OGFlZjQ0MjQwOWY4YTcwMjU0NTBiNWIyYjY2MjE4YWE25utBXg==: 00:22:30.382 11:53:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:YmIzMDNiZjE2NDFiMzJjZGVmNTQ3OTU0ZTEyZDJjMjBiYzEyZmRjMGU4Yzg0ZWExGGWPVA==: 00:22:30.382 11:53:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:22:30.382 11:53:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:22:30.382 11:53:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NWE4MmY0OTIxMDRkYjA5OGFlZjQ0MjQwOWY4YTcwMjU0NTBiNWIyYjY2MjE4YWE25utBXg==: 00:22:30.382 11:53:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:YmIzMDNiZjE2NDFiMzJjZGVmNTQ3OTU0ZTEyZDJjMjBiYzEyZmRjMGU4Yzg0ZWExGGWPVA==: ]] 00:22:30.382 11:53:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:YmIzMDNiZjE2NDFiMzJjZGVmNTQ3OTU0ZTEyZDJjMjBiYzEyZmRjMGU4Yzg0ZWExGGWPVA==: 00:22:30.382 11:53:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe8192 1 00:22:30.382 11:53:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:22:30.382 11:53:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:22:30.382 11:53:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:22:30.382 11:53:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:22:30.382 11:53:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:22:30.382 11:53:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:22:30.382 11:53:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:30.382 11:53:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:30.382 11:53:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:30.382 11:53:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:22:30.382 11:53:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:22:30.382 11:53:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:22:30.382 11:53:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:22:30.382 11:53:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:22:30.382 11:53:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:22:30.382 11:53:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:22:30.382 11:53:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:22:30.382 11:53:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:22:30.382 11:53:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:22:30.382 11:53:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:22:30.382 11:53:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:22:30.382 11:53:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:30.382 11:53:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:30.949 nvme0n1 00:22:30.949 11:53:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:30.949 11:53:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:22:30.949 11:53:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:22:30.949 11:53:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:30.949 11:53:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:30.949 11:53:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:30.949 11:53:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:30.949 11:53:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:22:30.949 11:53:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # 
xtrace_disable 00:22:30.949 11:53:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:30.949 11:53:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:30.949 11:53:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:22:30.949 11:53:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe8192 2 00:22:30.949 11:53:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:22:30.949 11:53:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:22:30.949 11:53:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:22:30.949 11:53:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:22:30.949 11:53:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:YjRjOGNmYjNlOWE5MjkwYzMyNGNhMjRiM2ZlMjk4Zjl7KAxS: 00:22:30.949 11:53:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:MTU1NmE1YzA3OTRhNWZlZGEyNjE4Njg3MWM3YTQzNThceavh: 00:22:30.949 11:53:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:22:30.949 11:53:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:22:30.949 11:53:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:YjRjOGNmYjNlOWE5MjkwYzMyNGNhMjRiM2ZlMjk4Zjl7KAxS: 00:22:30.949 11:53:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:MTU1NmE1YzA3OTRhNWZlZGEyNjE4Njg3MWM3YTQzNThceavh: ]] 00:22:30.949 11:53:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:MTU1NmE1YzA3OTRhNWZlZGEyNjE4Njg3MWM3YTQzNThceavh: 00:22:30.949 11:53:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe8192 2 00:22:30.949 11:53:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:22:30.949 11:53:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:22:30.949 11:53:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:22:30.949 11:53:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:22:30.949 11:53:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:22:30.949 11:53:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:22:30.949 11:53:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:30.949 11:53:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:30.949 11:53:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:30.949 11:53:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:22:30.949 11:53:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:22:30.949 11:53:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:22:30.949 11:53:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:22:30.949 11:53:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:22:30.949 11:53:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:22:30.949 
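The bare nvme0n1 tokens scattered through the trace appear to be the namespace bdev reported by each successful attach; the entries that follow them are the success check and teardown at host/auth.sh@64-@65, run before moving on to the next combination. The traced sequence is simply:

  # per-iteration verification and cleanup ($rpc as in the sketch above)
  name=$($rpc bdev_nvme_get_controllers | jq -r '.[].name')   # expect the controller created above
  [[ $name == "nvme0" ]]                                      # authentication succeeded
  $rpc bdev_nvme_detach_controller nvme0                      # detach before the next digest/dhgroup/keyid
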
11:53:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:22:30.949 11:53:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:22:30.949 11:53:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:22:30.949 11:53:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:22:30.949 11:53:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:22:30.949 11:53:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:22:30.949 11:53:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:30.949 11:53:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:31.517 nvme0n1 00:22:31.517 11:53:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:31.517 11:53:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:22:31.517 11:53:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:31.517 11:53:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:31.517 11:53:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:22:31.517 11:53:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:31.517 11:53:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:31.517 11:53:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:22:31.517 11:53:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:31.517 11:53:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:31.517 11:53:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:31.517 11:53:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:22:31.517 11:53:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe8192 3 00:22:31.517 11:53:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:22:31.517 11:53:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:22:31.517 11:53:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:22:31.517 11:53:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:22:31.517 11:53:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:MTI1ZTRmMTBiM2M3NWI4ZGY2YzI5NzVmMTY3NDhhM2MxMTgyZmZhNWM3MzEzNWZmmoyemg==: 00:22:31.517 11:53:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:NzMwY2FhZjZmZTlmYzU3ZjhiMGU4NDBlY2M2NjRlZTRrRaMi: 00:22:31.517 11:53:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:22:31.517 11:53:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:22:31.517 11:53:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:MTI1ZTRmMTBiM2M3NWI4ZGY2YzI5NzVmMTY3NDhhM2MxMTgyZmZhNWM3MzEzNWZmmoyemg==: 00:22:31.517 11:53:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z 
DHHC-1:00:NzMwY2FhZjZmZTlmYzU3ZjhiMGU4NDBlY2M2NjRlZTRrRaMi: ]] 00:22:31.517 11:53:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:NzMwY2FhZjZmZTlmYzU3ZjhiMGU4NDBlY2M2NjRlZTRrRaMi: 00:22:31.517 11:53:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe8192 3 00:22:31.517 11:53:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:22:31.517 11:53:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:22:31.517 11:53:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:22:31.517 11:53:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:22:31.517 11:53:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:22:31.517 11:53:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:22:31.517 11:53:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:31.517 11:53:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:31.517 11:53:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:31.517 11:53:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:22:31.517 11:53:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:22:31.517 11:53:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:22:31.517 11:53:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:22:31.517 11:53:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:22:31.517 11:53:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:22:31.517 11:53:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:22:31.517 11:53:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:22:31.517 11:53:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:22:31.517 11:53:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:22:31.517 11:53:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:22:31.517 11:53:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:22:31.517 11:53:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:31.517 11:53:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:32.085 nvme0n1 00:22:32.085 11:53:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:32.085 11:53:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:22:32.085 11:53:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:22:32.085 11:53:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:32.085 11:53:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:32.085 11:53:02 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:32.085 11:53:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:32.085 11:53:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:22:32.085 11:53:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:32.085 11:53:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:32.085 11:53:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:32.085 11:53:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:22:32.085 11:53:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe8192 4 00:22:32.085 11:53:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:22:32.085 11:53:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:22:32.085 11:53:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:22:32.085 11:53:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:22:32.085 11:53:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:ZGNlMmJjNmIxMzAwODU5NDQ4ZmU0MDM4ZDc2NDlhZTFmNjkxODgzY2M2YWQ2NDFkMTEzZGNmMTk1MWRlMDI3OPV7tT0=: 00:22:32.085 11:53:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:22:32.085 11:53:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:22:32.085 11:53:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:22:32.085 11:53:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:ZGNlMmJjNmIxMzAwODU5NDQ4ZmU0MDM4ZDc2NDlhZTFmNjkxODgzY2M2YWQ2NDFkMTEzZGNmMTk1MWRlMDI3OPV7tT0=: 00:22:32.085 11:53:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:22:32.085 11:53:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe8192 4 00:22:32.085 11:53:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:22:32.085 11:53:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:22:32.085 11:53:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:22:32.085 11:53:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:22:32.085 11:53:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:22:32.085 11:53:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:22:32.085 11:53:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:32.085 11:53:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:32.085 11:53:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:32.085 11:53:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:22:32.085 11:53:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:22:32.085 11:53:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:22:32.085 11:53:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:22:32.085 11:53:02 
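The repeated "local ip / ip_candidates" stanzas come from get_main_ns_ip in nvmf/common.sh, which maps the transport in use to the environment variable holding the connect address (NVMF_FIRST_TARGET_IP for rdma, NVMF_INITIATOR_IP for tcp) and echoes its value, 10.0.0.1 in this run. A simplified reconstruction inferred only from the trace (the variable carrying the transport name and the error handling are assumptions):

  get_main_ns_ip() {
      local ip
      local -A ip_candidates=(
          ["rdma"]=NVMF_FIRST_TARGET_IP
          ["tcp"]=NVMF_INITIATOR_IP
      )
      # assumed: the transport comes from something like $TEST_TRANSPORT (tcp in this log)
      [[ -z $TEST_TRANSPORT || -z ${ip_candidates[$TEST_TRANSPORT]} ]] && return 1
      ip=${ip_candidates[$TEST_TRANSPORT]}
      [[ -z ${!ip} ]] && return 1
      echo "${!ip}"   # 10.0.0.1 here
  }
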
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:22:32.085 11:53:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:22:32.085 11:53:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:22:32.085 11:53:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:22:32.085 11:53:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:22:32.085 11:53:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:22:32.085 11:53:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:22:32.085 11:53:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:22:32.085 11:53:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:32.085 11:53:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:32.652 nvme0n1 00:22:32.652 11:53:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:32.652 11:53:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:22:32.652 11:53:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:22:32.652 11:53:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:32.652 11:53:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:32.652 11:53:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:32.652 11:53:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:32.652 11:53:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:22:32.652 11:53:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:32.653 11:53:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:32.653 11:53:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:32.653 11:53:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@100 -- # for digest in "${digests[@]}" 00:22:32.653 11:53:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:22:32.653 11:53:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:22:32.653 11:53:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe2048 0 00:22:32.653 11:53:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:22:32.653 11:53:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:22:32.653 11:53:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:22:32.653 11:53:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:22:32.653 11:53:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NzU1YmQwNTlmZGE0MTIzMzJkNjkzZjM3N2Q1YTVkYTlGSJpb: 00:22:32.653 11:53:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # 
ckey=DHHC-1:03:MDQyODRiZDIwNjk3YzU5MGViNDg3NDkxYjEzYzUzZWNiOWRkMTg5NDBmMGU1NTQ2OTA3M2U1ZmE0MTE3NTE4MNwSXOU=: 00:22:32.653 11:53:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:22:32.653 11:53:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:22:32.653 11:53:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NzU1YmQwNTlmZGE0MTIzMzJkNjkzZjM3N2Q1YTVkYTlGSJpb: 00:22:32.653 11:53:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:MDQyODRiZDIwNjk3YzU5MGViNDg3NDkxYjEzYzUzZWNiOWRkMTg5NDBmMGU1NTQ2OTA3M2U1ZmE0MTE3NTE4MNwSXOU=: ]] 00:22:32.653 11:53:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:MDQyODRiZDIwNjk3YzU5MGViNDg3NDkxYjEzYzUzZWNiOWRkMTg5NDBmMGU1NTQ2OTA3M2U1ZmE0MTE3NTE4MNwSXOU=: 00:22:32.653 11:53:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe2048 0 00:22:32.653 11:53:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:22:32.653 11:53:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:22:32.653 11:53:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:22:32.653 11:53:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:22:32.653 11:53:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:22:32.653 11:53:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:22:32.653 11:53:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:32.653 11:53:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:32.653 11:53:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:32.653 11:53:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:22:32.653 11:53:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:22:32.653 11:53:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:22:32.653 11:53:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:22:32.653 11:53:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:22:32.653 11:53:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:22:32.653 11:53:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:22:32.653 11:53:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:22:32.653 11:53:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:22:32.653 11:53:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:22:32.653 11:53:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:22:32.653 11:53:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:22:32.653 11:53:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:32.653 11:53:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@10 -- # set +x 00:22:32.912 nvme0n1 00:22:32.912 11:53:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:32.912 11:53:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:22:32.912 11:53:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:32.912 11:53:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:22:32.912 11:53:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:32.912 11:53:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:32.913 11:53:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:32.913 11:53:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:22:32.913 11:53:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:32.913 11:53:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:32.913 11:53:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:32.913 11:53:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:22:32.913 11:53:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe2048 1 00:22:32.913 11:53:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:22:32.913 11:53:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:22:32.913 11:53:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:22:32.913 11:53:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:22:32.913 11:53:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NWE4MmY0OTIxMDRkYjA5OGFlZjQ0MjQwOWY4YTcwMjU0NTBiNWIyYjY2MjE4YWE25utBXg==: 00:22:32.913 11:53:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:YmIzMDNiZjE2NDFiMzJjZGVmNTQ3OTU0ZTEyZDJjMjBiYzEyZmRjMGU4Yzg0ZWExGGWPVA==: 00:22:32.913 11:53:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:22:32.913 11:53:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:22:32.913 11:53:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NWE4MmY0OTIxMDRkYjA5OGFlZjQ0MjQwOWY4YTcwMjU0NTBiNWIyYjY2MjE4YWE25utBXg==: 00:22:32.913 11:53:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:YmIzMDNiZjE2NDFiMzJjZGVmNTQ3OTU0ZTEyZDJjMjBiYzEyZmRjMGU4Yzg0ZWExGGWPVA==: ]] 00:22:32.913 11:53:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:YmIzMDNiZjE2NDFiMzJjZGVmNTQ3OTU0ZTEyZDJjMjBiYzEyZmRjMGU4Yzg0ZWExGGWPVA==: 00:22:32.913 11:53:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe2048 1 00:22:32.913 11:53:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:22:32.913 11:53:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:22:32.913 11:53:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:22:32.913 11:53:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:22:32.913 11:53:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key 
"ckey${keyid}"}) 00:22:32.913 11:53:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:22:32.913 11:53:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:32.913 11:53:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:32.913 11:53:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:32.913 11:53:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:22:32.913 11:53:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:22:32.913 11:53:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:22:32.913 11:53:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:22:32.913 11:53:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:22:32.913 11:53:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:22:32.913 11:53:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:22:32.913 11:53:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:22:32.913 11:53:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:22:32.913 11:53:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:22:32.913 11:53:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:22:32.913 11:53:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:22:32.913 11:53:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:32.913 11:53:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:32.913 nvme0n1 00:22:32.913 11:53:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:32.913 11:53:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:22:32.913 11:53:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:22:32.913 11:53:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:32.913 11:53:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:32.913 11:53:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:33.174 11:53:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:33.174 11:53:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:22:33.174 11:53:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:33.174 11:53:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:33.174 11:53:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:33.174 11:53:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:22:33.174 11:53:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe2048 2 00:22:33.174 
11:53:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:22:33.174 11:53:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:22:33.174 11:53:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:22:33.174 11:53:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:22:33.174 11:53:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:YjRjOGNmYjNlOWE5MjkwYzMyNGNhMjRiM2ZlMjk4Zjl7KAxS: 00:22:33.174 11:53:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:MTU1NmE1YzA3OTRhNWZlZGEyNjE4Njg3MWM3YTQzNThceavh: 00:22:33.174 11:53:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:22:33.174 11:53:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:22:33.174 11:53:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:YjRjOGNmYjNlOWE5MjkwYzMyNGNhMjRiM2ZlMjk4Zjl7KAxS: 00:22:33.174 11:53:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:MTU1NmE1YzA3OTRhNWZlZGEyNjE4Njg3MWM3YTQzNThceavh: ]] 00:22:33.174 11:53:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:MTU1NmE1YzA3OTRhNWZlZGEyNjE4Njg3MWM3YTQzNThceavh: 00:22:33.174 11:53:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe2048 2 00:22:33.174 11:53:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:22:33.174 11:53:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:22:33.174 11:53:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:22:33.174 11:53:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:22:33.174 11:53:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:22:33.174 11:53:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:22:33.174 11:53:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:33.174 11:53:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:33.174 11:53:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:33.174 11:53:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:22:33.174 11:53:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:22:33.174 11:53:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:22:33.174 11:53:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:22:33.174 11:53:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:22:33.174 11:53:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:22:33.174 11:53:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:22:33.174 11:53:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:22:33.174 11:53:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:22:33.174 11:53:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:22:33.174 11:53:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@783 -- # echo 10.0.0.1 00:22:33.174 11:53:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:22:33.174 11:53:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:33.174 11:53:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:33.174 nvme0n1 00:22:33.174 11:53:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:33.174 11:53:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:22:33.174 11:53:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:22:33.174 11:53:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:33.174 11:53:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:33.174 11:53:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:33.174 11:53:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:33.174 11:53:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:22:33.174 11:53:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:33.174 11:53:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:33.174 11:53:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:33.174 11:53:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:22:33.174 11:53:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe2048 3 00:22:33.174 11:53:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:22:33.174 11:53:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:22:33.174 11:53:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:22:33.174 11:53:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:22:33.174 11:53:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:MTI1ZTRmMTBiM2M3NWI4ZGY2YzI5NzVmMTY3NDhhM2MxMTgyZmZhNWM3MzEzNWZmmoyemg==: 00:22:33.174 11:53:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:NzMwY2FhZjZmZTlmYzU3ZjhiMGU4NDBlY2M2NjRlZTRrRaMi: 00:22:33.174 11:53:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:22:33.174 11:53:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:22:33.174 11:53:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:MTI1ZTRmMTBiM2M3NWI4ZGY2YzI5NzVmMTY3NDhhM2MxMTgyZmZhNWM3MzEzNWZmmoyemg==: 00:22:33.174 11:53:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:NzMwY2FhZjZmZTlmYzU3ZjhiMGU4NDBlY2M2NjRlZTRrRaMi: ]] 00:22:33.174 11:53:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:NzMwY2FhZjZmZTlmYzU3ZjhiMGU4NDBlY2M2NjRlZTRrRaMi: 00:22:33.174 11:53:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe2048 3 00:22:33.174 11:53:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:22:33.174 
11:53:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:22:33.174 11:53:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:22:33.174 11:53:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:22:33.174 11:53:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:22:33.174 11:53:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:22:33.174 11:53:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:33.174 11:53:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:33.174 11:53:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:33.174 11:53:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:22:33.174 11:53:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:22:33.174 11:53:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:22:33.174 11:53:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:22:33.174 11:53:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:22:33.174 11:53:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:22:33.174 11:53:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:22:33.174 11:53:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:22:33.174 11:53:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:22:33.174 11:53:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:22:33.174 11:53:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:22:33.174 11:53:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:22:33.174 11:53:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:33.174 11:53:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:33.433 nvme0n1 00:22:33.433 11:53:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:33.433 11:53:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:22:33.433 11:53:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:33.433 11:53:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:22:33.433 11:53:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:33.433 11:53:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:33.433 11:53:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:33.433 11:53:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:22:33.433 11:53:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:33.433 11:53:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@10 -- # set +x 00:22:33.433 11:53:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:33.433 11:53:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:22:33.433 11:53:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe2048 4 00:22:33.433 11:53:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:22:33.433 11:53:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:22:33.433 11:53:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:22:33.433 11:53:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:22:33.433 11:53:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:ZGNlMmJjNmIxMzAwODU5NDQ4ZmU0MDM4ZDc2NDlhZTFmNjkxODgzY2M2YWQ2NDFkMTEzZGNmMTk1MWRlMDI3OPV7tT0=: 00:22:33.433 11:53:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:22:33.433 11:53:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:22:33.433 11:53:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:22:33.433 11:53:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:ZGNlMmJjNmIxMzAwODU5NDQ4ZmU0MDM4ZDc2NDlhZTFmNjkxODgzY2M2YWQ2NDFkMTEzZGNmMTk1MWRlMDI3OPV7tT0=: 00:22:33.433 11:53:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:22:33.433 11:53:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe2048 4 00:22:33.433 11:53:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:22:33.433 11:53:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:22:33.433 11:53:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:22:33.433 11:53:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:22:33.433 11:53:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:22:33.433 11:53:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:22:33.433 11:53:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:33.434 11:53:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:33.434 11:53:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:33.434 11:53:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:22:33.434 11:53:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:22:33.434 11:53:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:22:33.434 11:53:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:22:33.434 11:53:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:22:33.434 11:53:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:22:33.434 11:53:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:22:33.434 11:53:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:22:33.434 11:53:03 nvmf_tcp.nvmf_host.nvmf_auth_host 
-- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:22:33.434 11:53:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:22:33.434 11:53:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:22:33.434 11:53:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:22:33.434 11:53:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:33.434 11:53:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:33.434 nvme0n1 00:22:33.434 11:53:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:33.434 11:53:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:22:33.434 11:53:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:22:33.434 11:53:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:33.434 11:53:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:33.434 11:53:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:33.693 11:53:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:33.693 11:53:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:22:33.693 11:53:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:33.693 11:53:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:33.693 11:53:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:33.693 11:53:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:22:33.693 11:53:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:22:33.693 11:53:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe3072 0 00:22:33.693 11:53:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:22:33.693 11:53:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:22:33.693 11:53:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:22:33.693 11:53:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:22:33.693 11:53:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NzU1YmQwNTlmZGE0MTIzMzJkNjkzZjM3N2Q1YTVkYTlGSJpb: 00:22:33.693 11:53:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:MDQyODRiZDIwNjk3YzU5MGViNDg3NDkxYjEzYzUzZWNiOWRkMTg5NDBmMGU1NTQ2OTA3M2U1ZmE0MTE3NTE4MNwSXOU=: 00:22:33.693 11:53:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:22:33.693 11:53:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:22:33.693 11:53:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NzU1YmQwNTlmZGE0MTIzMzJkNjkzZjM3N2Q1YTVkYTlGSJpb: 00:22:33.693 11:53:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:MDQyODRiZDIwNjk3YzU5MGViNDg3NDkxYjEzYzUzZWNiOWRkMTg5NDBmMGU1NTQ2OTA3M2U1ZmE0MTE3NTE4MNwSXOU=: ]] 00:22:33.693 11:53:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@51 -- # echo DHHC-1:03:MDQyODRiZDIwNjk3YzU5MGViNDg3NDkxYjEzYzUzZWNiOWRkMTg5NDBmMGU1NTQ2OTA3M2U1ZmE0MTE3NTE4MNwSXOU=: 00:22:33.693 11:53:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe3072 0 00:22:33.693 11:53:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:22:33.693 11:53:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:22:33.693 11:53:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:22:33.693 11:53:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:22:33.693 11:53:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:22:33.693 11:53:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:22:33.693 11:53:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:33.693 11:53:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:33.693 11:53:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:33.693 11:53:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:22:33.693 11:53:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:22:33.693 11:53:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:22:33.693 11:53:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:22:33.693 11:53:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:22:33.693 11:53:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:22:33.693 11:53:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:22:33.693 11:53:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:22:33.693 11:53:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:22:33.693 11:53:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:22:33.693 11:53:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:22:33.693 11:53:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:22:33.693 11:53:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:33.693 11:53:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:33.693 nvme0n1 00:22:33.693 11:53:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:33.693 11:53:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:22:33.693 11:53:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:33.693 11:53:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:33.693 11:53:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:22:33.693 11:53:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:33.693 
11:53:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:33.693 11:53:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:22:33.693 11:53:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:33.693 11:53:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:33.693 11:53:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:33.693 11:53:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:22:33.693 11:53:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe3072 1 00:22:33.694 11:53:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:22:33.694 11:53:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:22:33.694 11:53:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:22:33.694 11:53:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:22:33.694 11:53:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NWE4MmY0OTIxMDRkYjA5OGFlZjQ0MjQwOWY4YTcwMjU0NTBiNWIyYjY2MjE4YWE25utBXg==: 00:22:33.694 11:53:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:YmIzMDNiZjE2NDFiMzJjZGVmNTQ3OTU0ZTEyZDJjMjBiYzEyZmRjMGU4Yzg0ZWExGGWPVA==: 00:22:33.694 11:53:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:22:33.694 11:53:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:22:33.694 11:53:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NWE4MmY0OTIxMDRkYjA5OGFlZjQ0MjQwOWY4YTcwMjU0NTBiNWIyYjY2MjE4YWE25utBXg==: 00:22:33.694 11:53:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:YmIzMDNiZjE2NDFiMzJjZGVmNTQ3OTU0ZTEyZDJjMjBiYzEyZmRjMGU4Yzg0ZWExGGWPVA==: ]] 00:22:33.694 11:53:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:YmIzMDNiZjE2NDFiMzJjZGVmNTQ3OTU0ZTEyZDJjMjBiYzEyZmRjMGU4Yzg0ZWExGGWPVA==: 00:22:33.694 11:53:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe3072 1 00:22:33.694 11:53:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:22:33.694 11:53:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:22:33.694 11:53:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:22:33.694 11:53:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:22:33.694 11:53:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:22:33.694 11:53:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:22:33.694 11:53:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:33.694 11:53:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:33.694 11:53:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:33.694 11:53:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:22:33.694 11:53:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:22:33.694 11:53:03 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:22:33.694 11:53:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:22:33.694 11:53:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:22:33.694 11:53:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:22:33.694 11:53:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:22:33.694 11:53:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:22:33.694 11:53:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:22:33.694 11:53:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:22:33.694 11:53:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:22:33.694 11:53:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:22:33.694 11:53:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:33.694 11:53:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:33.952 nvme0n1 00:22:33.952 11:53:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:33.952 11:53:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:22:33.952 11:53:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:33.952 11:53:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:22:33.952 11:53:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:33.952 11:53:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:33.952 11:53:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:33.952 11:53:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:22:33.952 11:53:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:33.952 11:53:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:33.952 11:53:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:33.952 11:53:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:22:33.952 11:53:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe3072 2 00:22:33.952 11:53:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:22:33.952 11:53:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:22:33.952 11:53:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:22:33.952 11:53:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:22:33.952 11:53:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:YjRjOGNmYjNlOWE5MjkwYzMyNGNhMjRiM2ZlMjk4Zjl7KAxS: 00:22:33.952 11:53:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:MTU1NmE1YzA3OTRhNWZlZGEyNjE4Njg3MWM3YTQzNThceavh: 00:22:33.952 11:53:03 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:22:33.952 11:53:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:22:33.952 11:53:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:YjRjOGNmYjNlOWE5MjkwYzMyNGNhMjRiM2ZlMjk4Zjl7KAxS: 00:22:33.952 11:53:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:MTU1NmE1YzA3OTRhNWZlZGEyNjE4Njg3MWM3YTQzNThceavh: ]] 00:22:33.952 11:53:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:MTU1NmE1YzA3OTRhNWZlZGEyNjE4Njg3MWM3YTQzNThceavh: 00:22:33.952 11:53:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe3072 2 00:22:33.953 11:53:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:22:33.953 11:53:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:22:33.953 11:53:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:22:33.953 11:53:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:22:33.953 11:53:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:22:33.953 11:53:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:22:33.953 11:53:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:33.953 11:53:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:33.953 11:53:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:33.953 11:53:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:22:33.953 11:53:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:22:33.953 11:53:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:22:33.953 11:53:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:22:33.953 11:53:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:22:33.953 11:53:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:22:33.953 11:53:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:22:33.953 11:53:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:22:33.953 11:53:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:22:33.953 11:53:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:22:33.953 11:53:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:22:33.953 11:53:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:22:33.953 11:53:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:33.953 11:53:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:34.211 nvme0n1 00:22:34.211 11:53:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:34.211 11:53:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:22:34.211 11:53:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:34.211 11:53:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:22:34.211 11:53:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:34.211 11:53:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:34.211 11:53:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:34.211 11:53:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:22:34.211 11:53:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:34.211 11:53:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:34.211 11:53:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:34.211 11:53:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:22:34.211 11:53:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe3072 3 00:22:34.211 11:53:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:22:34.211 11:53:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:22:34.211 11:53:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:22:34.211 11:53:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:22:34.211 11:53:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:MTI1ZTRmMTBiM2M3NWI4ZGY2YzI5NzVmMTY3NDhhM2MxMTgyZmZhNWM3MzEzNWZmmoyemg==: 00:22:34.211 11:53:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:NzMwY2FhZjZmZTlmYzU3ZjhiMGU4NDBlY2M2NjRlZTRrRaMi: 00:22:34.211 11:53:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:22:34.211 11:53:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:22:34.211 11:53:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:MTI1ZTRmMTBiM2M3NWI4ZGY2YzI5NzVmMTY3NDhhM2MxMTgyZmZhNWM3MzEzNWZmmoyemg==: 00:22:34.211 11:53:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:NzMwY2FhZjZmZTlmYzU3ZjhiMGU4NDBlY2M2NjRlZTRrRaMi: ]] 00:22:34.211 11:53:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:NzMwY2FhZjZmZTlmYzU3ZjhiMGU4NDBlY2M2NjRlZTRrRaMi: 00:22:34.212 11:53:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe3072 3 00:22:34.212 11:53:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:22:34.212 11:53:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:22:34.212 11:53:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:22:34.212 11:53:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:22:34.212 11:53:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:22:34.212 11:53:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:22:34.212 11:53:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:34.212 11:53:04 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:34.212 11:53:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:34.212 11:53:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:22:34.212 11:53:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:22:34.212 11:53:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:22:34.212 11:53:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:22:34.212 11:53:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:22:34.212 11:53:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:22:34.212 11:53:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:22:34.212 11:53:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:22:34.212 11:53:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:22:34.212 11:53:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:22:34.212 11:53:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:22:34.212 11:53:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:22:34.212 11:53:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:34.212 11:53:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:34.212 nvme0n1 00:22:34.212 11:53:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:34.212 11:53:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:22:34.212 11:53:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:22:34.212 11:53:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:34.212 11:53:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:34.212 11:53:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:34.470 11:53:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:34.470 11:53:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:22:34.470 11:53:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:34.470 11:53:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:34.470 11:53:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:34.470 11:53:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:22:34.470 11:53:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe3072 4 00:22:34.470 11:53:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:22:34.470 11:53:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:22:34.470 11:53:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:22:34.470 
11:53:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:22:34.470 11:53:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:ZGNlMmJjNmIxMzAwODU5NDQ4ZmU0MDM4ZDc2NDlhZTFmNjkxODgzY2M2YWQ2NDFkMTEzZGNmMTk1MWRlMDI3OPV7tT0=: 00:22:34.470 11:53:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:22:34.470 11:53:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:22:34.470 11:53:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:22:34.470 11:53:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:ZGNlMmJjNmIxMzAwODU5NDQ4ZmU0MDM4ZDc2NDlhZTFmNjkxODgzY2M2YWQ2NDFkMTEzZGNmMTk1MWRlMDI3OPV7tT0=: 00:22:34.470 11:53:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:22:34.470 11:53:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe3072 4 00:22:34.470 11:53:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:22:34.470 11:53:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:22:34.470 11:53:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:22:34.470 11:53:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:22:34.470 11:53:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:22:34.470 11:53:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:22:34.470 11:53:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:34.470 11:53:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:34.470 11:53:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:34.470 11:53:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:22:34.470 11:53:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:22:34.470 11:53:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:22:34.470 11:53:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:22:34.470 11:53:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:22:34.470 11:53:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:22:34.470 11:53:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:22:34.470 11:53:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:22:34.470 11:53:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:22:34.470 11:53:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:22:34.470 11:53:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:22:34.470 11:53:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:22:34.470 11:53:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:34.470 11:53:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 
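The cycle that just completed above is the same sequence the test repeats for every digest/DH-group/key combination: restrict the host's DH-CHAP options, attach to the kernel target with the matching key, confirm the controller name, and detach. A minimal host-side sketch of one such cycle, reconstructed only from the rpc_cmd calls visible in this trace (the enclosing loops and the target-side key programming are omitted, and the literal values are simply the ones from the ffdhe3072/key4 pass shown above), would be:

    # Sketch only: assumes rpc_cmd (the SPDK rpc.py wrapper used by these test
    # scripts) and a running kernel nvmet target at 10.0.0.1:4420, as in the trace.
    digest=sha384; dhgroup=ffdhe3072; keyid=4

    # Limit the host to the digest and DH group under test.
    rpc_cmd bdev_nvme_set_options --dhchap-digests "$digest" --dhchap-dhgroups "$dhgroup"

    # Attach to the target with the DH-CHAP key for this keyid; the trace adds
    # --dhchap-ctrlr-key ckey$keyid only when a controller key is defined for it.
    rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 \
            -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 \
            --dhchap-key "key$keyid"

    # Verify the authenticated controller came up, then detach before the next key.
    [[ $(rpc_cmd bdev_nvme_get_controllers | jq -r '.[].name') == nvme0 ]]
    rpc_cmd bdev_nvme_detach_controller nvme0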
00:22:34.470 nvme0n1 00:22:34.470 11:53:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:34.470 11:53:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:22:34.470 11:53:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:34.471 11:53:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:34.471 11:53:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:22:34.471 11:53:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:34.471 11:53:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:34.471 11:53:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:22:34.471 11:53:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:34.471 11:53:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:34.471 11:53:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:34.471 11:53:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:22:34.471 11:53:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:22:34.471 11:53:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe4096 0 00:22:34.471 11:53:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:22:34.471 11:53:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:22:34.471 11:53:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:22:34.471 11:53:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:22:34.471 11:53:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NzU1YmQwNTlmZGE0MTIzMzJkNjkzZjM3N2Q1YTVkYTlGSJpb: 00:22:34.471 11:53:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:MDQyODRiZDIwNjk3YzU5MGViNDg3NDkxYjEzYzUzZWNiOWRkMTg5NDBmMGU1NTQ2OTA3M2U1ZmE0MTE3NTE4MNwSXOU=: 00:22:34.471 11:53:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:22:34.471 11:53:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:22:34.471 11:53:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NzU1YmQwNTlmZGE0MTIzMzJkNjkzZjM3N2Q1YTVkYTlGSJpb: 00:22:34.471 11:53:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:MDQyODRiZDIwNjk3YzU5MGViNDg3NDkxYjEzYzUzZWNiOWRkMTg5NDBmMGU1NTQ2OTA3M2U1ZmE0MTE3NTE4MNwSXOU=: ]] 00:22:34.471 11:53:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:MDQyODRiZDIwNjk3YzU5MGViNDg3NDkxYjEzYzUzZWNiOWRkMTg5NDBmMGU1NTQ2OTA3M2U1ZmE0MTE3NTE4MNwSXOU=: 00:22:34.471 11:53:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe4096 0 00:22:34.471 11:53:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:22:34.471 11:53:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:22:34.471 11:53:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:22:34.471 11:53:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:22:34.471 11:53:04 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:22:34.471 11:53:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:22:34.471 11:53:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:34.471 11:53:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:34.471 11:53:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:34.471 11:53:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:22:34.471 11:53:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:22:34.471 11:53:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:22:34.471 11:53:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:22:34.471 11:53:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:22:34.471 11:53:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:22:34.471 11:53:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:22:34.471 11:53:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:22:34.471 11:53:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:22:34.471 11:53:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:22:34.471 11:53:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:22:34.471 11:53:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:22:34.471 11:53:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:34.471 11:53:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:34.730 nvme0n1 00:22:34.730 11:53:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:34.730 11:53:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:22:34.730 11:53:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:22:34.730 11:53:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:34.730 11:53:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:34.730 11:53:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:34.730 11:53:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:34.730 11:53:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:22:34.730 11:53:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:34.730 11:53:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:34.730 11:53:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:34.730 11:53:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:22:34.730 11:53:04 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe4096 1 00:22:34.730 11:53:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:22:34.730 11:53:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:22:34.730 11:53:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:22:34.730 11:53:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:22:34.730 11:53:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NWE4MmY0OTIxMDRkYjA5OGFlZjQ0MjQwOWY4YTcwMjU0NTBiNWIyYjY2MjE4YWE25utBXg==: 00:22:34.730 11:53:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:YmIzMDNiZjE2NDFiMzJjZGVmNTQ3OTU0ZTEyZDJjMjBiYzEyZmRjMGU4Yzg0ZWExGGWPVA==: 00:22:34.730 11:53:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:22:34.730 11:53:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:22:34.730 11:53:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NWE4MmY0OTIxMDRkYjA5OGFlZjQ0MjQwOWY4YTcwMjU0NTBiNWIyYjY2MjE4YWE25utBXg==: 00:22:34.730 11:53:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:YmIzMDNiZjE2NDFiMzJjZGVmNTQ3OTU0ZTEyZDJjMjBiYzEyZmRjMGU4Yzg0ZWExGGWPVA==: ]] 00:22:34.730 11:53:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:YmIzMDNiZjE2NDFiMzJjZGVmNTQ3OTU0ZTEyZDJjMjBiYzEyZmRjMGU4Yzg0ZWExGGWPVA==: 00:22:34.730 11:53:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe4096 1 00:22:34.730 11:53:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:22:34.730 11:53:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:22:34.731 11:53:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:22:34.731 11:53:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:22:34.731 11:53:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:22:34.731 11:53:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:22:34.731 11:53:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:34.731 11:53:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:34.731 11:53:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:34.731 11:53:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:22:34.731 11:53:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:22:34.731 11:53:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:22:34.731 11:53:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:22:34.731 11:53:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:22:34.731 11:53:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:22:34.731 11:53:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:22:34.731 11:53:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:22:34.731 11:53:04 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:22:34.731 11:53:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:22:34.731 11:53:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:22:34.731 11:53:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:22:34.731 11:53:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:34.731 11:53:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:34.989 nvme0n1 00:22:34.990 11:53:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:34.990 11:53:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:22:34.990 11:53:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:34.990 11:53:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:34.990 11:53:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:22:34.990 11:53:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:34.990 11:53:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:34.990 11:53:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:22:34.990 11:53:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:34.990 11:53:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:34.990 11:53:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:34.990 11:53:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:22:34.990 11:53:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe4096 2 00:22:34.990 11:53:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:22:34.990 11:53:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:22:34.990 11:53:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:22:34.990 11:53:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:22:34.990 11:53:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:YjRjOGNmYjNlOWE5MjkwYzMyNGNhMjRiM2ZlMjk4Zjl7KAxS: 00:22:34.990 11:53:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:MTU1NmE1YzA3OTRhNWZlZGEyNjE4Njg3MWM3YTQzNThceavh: 00:22:34.990 11:53:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:22:34.990 11:53:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:22:34.990 11:53:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:YjRjOGNmYjNlOWE5MjkwYzMyNGNhMjRiM2ZlMjk4Zjl7KAxS: 00:22:34.990 11:53:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:MTU1NmE1YzA3OTRhNWZlZGEyNjE4Njg3MWM3YTQzNThceavh: ]] 00:22:34.990 11:53:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:MTU1NmE1YzA3OTRhNWZlZGEyNjE4Njg3MWM3YTQzNThceavh: 00:22:34.990 11:53:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@104 -- # connect_authenticate sha384 ffdhe4096 2 00:22:34.990 11:53:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:22:34.990 11:53:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:22:34.990 11:53:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:22:34.990 11:53:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:22:34.990 11:53:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:22:34.990 11:53:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:22:34.990 11:53:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:34.990 11:53:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:34.990 11:53:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:34.990 11:53:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:22:34.990 11:53:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:22:34.990 11:53:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:22:34.990 11:53:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:22:34.990 11:53:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:22:34.990 11:53:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:22:34.990 11:53:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:22:34.990 11:53:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:22:34.990 11:53:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:22:34.990 11:53:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:22:34.990 11:53:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:22:34.990 11:53:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:22:34.990 11:53:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:34.990 11:53:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:35.248 nvme0n1 00:22:35.248 11:53:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:35.248 11:53:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:22:35.248 11:53:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:22:35.248 11:53:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:35.248 11:53:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:35.248 11:53:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:35.248 11:53:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:35.248 11:53:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd 
bdev_nvme_detach_controller nvme0 00:22:35.248 11:53:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:35.248 11:53:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:35.248 11:53:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:35.248 11:53:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:22:35.248 11:53:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe4096 3 00:22:35.248 11:53:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:22:35.248 11:53:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:22:35.248 11:53:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:22:35.248 11:53:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:22:35.248 11:53:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:MTI1ZTRmMTBiM2M3NWI4ZGY2YzI5NzVmMTY3NDhhM2MxMTgyZmZhNWM3MzEzNWZmmoyemg==: 00:22:35.248 11:53:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:NzMwY2FhZjZmZTlmYzU3ZjhiMGU4NDBlY2M2NjRlZTRrRaMi: 00:22:35.248 11:53:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:22:35.248 11:53:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:22:35.248 11:53:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:MTI1ZTRmMTBiM2M3NWI4ZGY2YzI5NzVmMTY3NDhhM2MxMTgyZmZhNWM3MzEzNWZmmoyemg==: 00:22:35.248 11:53:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:NzMwY2FhZjZmZTlmYzU3ZjhiMGU4NDBlY2M2NjRlZTRrRaMi: ]] 00:22:35.248 11:53:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:NzMwY2FhZjZmZTlmYzU3ZjhiMGU4NDBlY2M2NjRlZTRrRaMi: 00:22:35.248 11:53:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe4096 3 00:22:35.248 11:53:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:22:35.248 11:53:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:22:35.248 11:53:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:22:35.248 11:53:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:22:35.248 11:53:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:22:35.248 11:53:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:22:35.248 11:53:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:35.248 11:53:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:35.248 11:53:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:35.248 11:53:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:22:35.248 11:53:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:22:35.248 11:53:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:22:35.248 11:53:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:22:35.248 11:53:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # 
ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:22:35.248 11:53:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:22:35.248 11:53:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:22:35.248 11:53:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:22:35.248 11:53:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:22:35.248 11:53:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:22:35.248 11:53:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:22:35.248 11:53:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:22:35.248 11:53:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:35.248 11:53:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:35.506 nvme0n1 00:22:35.506 11:53:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:35.506 11:53:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:22:35.506 11:53:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:22:35.506 11:53:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:35.506 11:53:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:35.506 11:53:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:35.506 11:53:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:35.506 11:53:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:22:35.506 11:53:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:35.506 11:53:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:35.506 11:53:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:35.506 11:53:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:22:35.506 11:53:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe4096 4 00:22:35.506 11:53:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:22:35.506 11:53:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:22:35.506 11:53:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:22:35.506 11:53:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:22:35.506 11:53:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:ZGNlMmJjNmIxMzAwODU5NDQ4ZmU0MDM4ZDc2NDlhZTFmNjkxODgzY2M2YWQ2NDFkMTEzZGNmMTk1MWRlMDI3OPV7tT0=: 00:22:35.506 11:53:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:22:35.506 11:53:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:22:35.506 11:53:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:22:35.506 11:53:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo 
DHHC-1:03:ZGNlMmJjNmIxMzAwODU5NDQ4ZmU0MDM4ZDc2NDlhZTFmNjkxODgzY2M2YWQ2NDFkMTEzZGNmMTk1MWRlMDI3OPV7tT0=: 00:22:35.506 11:53:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:22:35.506 11:53:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe4096 4 00:22:35.506 11:53:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:22:35.506 11:53:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:22:35.506 11:53:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:22:35.506 11:53:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:22:35.506 11:53:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:22:35.506 11:53:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:22:35.506 11:53:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:35.506 11:53:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:35.506 11:53:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:35.506 11:53:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:22:35.506 11:53:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:22:35.507 11:53:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:22:35.507 11:53:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:22:35.507 11:53:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:22:35.507 11:53:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:22:35.507 11:53:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:22:35.507 11:53:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:22:35.507 11:53:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:22:35.507 11:53:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:22:35.507 11:53:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:22:35.507 11:53:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:22:35.507 11:53:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:35.507 11:53:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:35.765 nvme0n1 00:22:35.765 11:53:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:35.765 11:53:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:22:35.765 11:53:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:35.765 11:53:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:22:35.765 11:53:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:35.765 11:53:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:35.765 11:53:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:35.765 11:53:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:22:35.765 11:53:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:35.765 11:53:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:35.765 11:53:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:35.765 11:53:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:22:35.765 11:53:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:22:35.765 11:53:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe6144 0 00:22:35.765 11:53:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:22:35.765 11:53:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:22:35.765 11:53:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:22:35.765 11:53:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:22:35.765 11:53:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NzU1YmQwNTlmZGE0MTIzMzJkNjkzZjM3N2Q1YTVkYTlGSJpb: 00:22:35.765 11:53:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:MDQyODRiZDIwNjk3YzU5MGViNDg3NDkxYjEzYzUzZWNiOWRkMTg5NDBmMGU1NTQ2OTA3M2U1ZmE0MTE3NTE4MNwSXOU=: 00:22:35.765 11:53:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:22:35.765 11:53:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:22:35.765 11:53:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NzU1YmQwNTlmZGE0MTIzMzJkNjkzZjM3N2Q1YTVkYTlGSJpb: 00:22:35.765 11:53:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:MDQyODRiZDIwNjk3YzU5MGViNDg3NDkxYjEzYzUzZWNiOWRkMTg5NDBmMGU1NTQ2OTA3M2U1ZmE0MTE3NTE4MNwSXOU=: ]] 00:22:35.765 11:53:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:MDQyODRiZDIwNjk3YzU5MGViNDg3NDkxYjEzYzUzZWNiOWRkMTg5NDBmMGU1NTQ2OTA3M2U1ZmE0MTE3NTE4MNwSXOU=: 00:22:35.765 11:53:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe6144 0 00:22:35.765 11:53:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:22:35.765 11:53:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:22:35.765 11:53:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:22:35.765 11:53:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:22:35.765 11:53:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:22:35.765 11:53:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:22:35.765 11:53:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:35.765 11:53:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:35.765 11:53:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:35.766 11:53:05 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:22:35.766 11:53:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:22:35.766 11:53:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:22:35.766 11:53:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:22:35.766 11:53:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:22:35.766 11:53:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:22:35.766 11:53:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:22:35.766 11:53:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:22:35.766 11:53:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:22:35.766 11:53:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:22:35.766 11:53:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:22:35.766 11:53:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:22:35.766 11:53:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:35.766 11:53:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:36.024 nvme0n1 00:22:36.024 11:53:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:36.024 11:53:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:22:36.024 11:53:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:22:36.024 11:53:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:36.024 11:53:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:36.024 11:53:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:36.024 11:53:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:36.024 11:53:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:22:36.024 11:53:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:36.024 11:53:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:36.283 11:53:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:36.283 11:53:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:22:36.283 11:53:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe6144 1 00:22:36.283 11:53:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:22:36.283 11:53:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:22:36.283 11:53:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:22:36.283 11:53:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:22:36.283 11:53:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # 
key=DHHC-1:00:NWE4MmY0OTIxMDRkYjA5OGFlZjQ0MjQwOWY4YTcwMjU0NTBiNWIyYjY2MjE4YWE25utBXg==: 00:22:36.283 11:53:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:YmIzMDNiZjE2NDFiMzJjZGVmNTQ3OTU0ZTEyZDJjMjBiYzEyZmRjMGU4Yzg0ZWExGGWPVA==: 00:22:36.283 11:53:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:22:36.283 11:53:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:22:36.283 11:53:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NWE4MmY0OTIxMDRkYjA5OGFlZjQ0MjQwOWY4YTcwMjU0NTBiNWIyYjY2MjE4YWE25utBXg==: 00:22:36.283 11:53:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:YmIzMDNiZjE2NDFiMzJjZGVmNTQ3OTU0ZTEyZDJjMjBiYzEyZmRjMGU4Yzg0ZWExGGWPVA==: ]] 00:22:36.283 11:53:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:YmIzMDNiZjE2NDFiMzJjZGVmNTQ3OTU0ZTEyZDJjMjBiYzEyZmRjMGU4Yzg0ZWExGGWPVA==: 00:22:36.283 11:53:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe6144 1 00:22:36.283 11:53:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:22:36.283 11:53:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:22:36.283 11:53:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:22:36.283 11:53:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:22:36.283 11:53:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:22:36.283 11:53:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:22:36.283 11:53:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:36.283 11:53:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:36.283 11:53:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:36.283 11:53:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:22:36.283 11:53:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:22:36.283 11:53:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:22:36.283 11:53:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:22:36.283 11:53:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:22:36.283 11:53:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:22:36.283 11:53:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:22:36.283 11:53:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:22:36.283 11:53:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:22:36.283 11:53:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:22:36.283 11:53:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:22:36.283 11:53:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:22:36.283 11:53:06 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:36.283 11:53:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:36.543 nvme0n1 00:22:36.543 11:53:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:36.543 11:53:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:22:36.543 11:53:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:36.543 11:53:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:22:36.543 11:53:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:36.543 11:53:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:36.543 11:53:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:36.543 11:53:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:22:36.543 11:53:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:36.543 11:53:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:36.543 11:53:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:36.543 11:53:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:22:36.543 11:53:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe6144 2 00:22:36.543 11:53:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:22:36.543 11:53:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:22:36.543 11:53:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:22:36.543 11:53:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:22:36.543 11:53:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:YjRjOGNmYjNlOWE5MjkwYzMyNGNhMjRiM2ZlMjk4Zjl7KAxS: 00:22:36.543 11:53:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:MTU1NmE1YzA3OTRhNWZlZGEyNjE4Njg3MWM3YTQzNThceavh: 00:22:36.543 11:53:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:22:36.543 11:53:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:22:36.543 11:53:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:YjRjOGNmYjNlOWE5MjkwYzMyNGNhMjRiM2ZlMjk4Zjl7KAxS: 00:22:36.543 11:53:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:MTU1NmE1YzA3OTRhNWZlZGEyNjE4Njg3MWM3YTQzNThceavh: ]] 00:22:36.543 11:53:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:MTU1NmE1YzA3OTRhNWZlZGEyNjE4Njg3MWM3YTQzNThceavh: 00:22:36.543 11:53:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe6144 2 00:22:36.543 11:53:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:22:36.543 11:53:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:22:36.543 11:53:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:22:36.543 11:53:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:22:36.543 11:53:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # 
ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:22:36.543 11:53:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:22:36.543 11:53:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:36.543 11:53:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:36.543 11:53:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:36.543 11:53:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:22:36.543 11:53:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:22:36.543 11:53:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:22:36.543 11:53:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:22:36.543 11:53:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:22:36.543 11:53:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:22:36.543 11:53:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:22:36.543 11:53:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:22:36.543 11:53:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:22:36.543 11:53:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:22:36.543 11:53:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:22:36.543 11:53:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:22:36.543 11:53:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:36.543 11:53:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:36.803 nvme0n1 00:22:36.803 11:53:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:36.803 11:53:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:22:36.803 11:53:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:36.803 11:53:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:22:36.803 11:53:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:36.803 11:53:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:36.803 11:53:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:36.803 11:53:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:22:36.803 11:53:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:36.803 11:53:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:36.803 11:53:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:36.803 11:53:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:22:36.803 11:53:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # 
nvmet_auth_set_key sha384 ffdhe6144 3 00:22:36.803 11:53:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:22:36.803 11:53:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:22:36.803 11:53:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:22:36.803 11:53:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:22:36.803 11:53:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:MTI1ZTRmMTBiM2M3NWI4ZGY2YzI5NzVmMTY3NDhhM2MxMTgyZmZhNWM3MzEzNWZmmoyemg==: 00:22:36.803 11:53:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:NzMwY2FhZjZmZTlmYzU3ZjhiMGU4NDBlY2M2NjRlZTRrRaMi: 00:22:36.803 11:53:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:22:36.803 11:53:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:22:36.803 11:53:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:MTI1ZTRmMTBiM2M3NWI4ZGY2YzI5NzVmMTY3NDhhM2MxMTgyZmZhNWM3MzEzNWZmmoyemg==: 00:22:36.803 11:53:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:NzMwY2FhZjZmZTlmYzU3ZjhiMGU4NDBlY2M2NjRlZTRrRaMi: ]] 00:22:36.803 11:53:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:NzMwY2FhZjZmZTlmYzU3ZjhiMGU4NDBlY2M2NjRlZTRrRaMi: 00:22:36.803 11:53:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe6144 3 00:22:36.803 11:53:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:22:36.803 11:53:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:22:36.803 11:53:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:22:36.803 11:53:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:22:36.803 11:53:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:22:36.803 11:53:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:22:36.803 11:53:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:36.803 11:53:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:36.803 11:53:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:36.803 11:53:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:22:36.803 11:53:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:22:36.803 11:53:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:22:36.803 11:53:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:22:36.803 11:53:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:22:36.803 11:53:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:22:36.803 11:53:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:22:36.803 11:53:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:22:36.803 11:53:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:22:36.803 11:53:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:22:36.803 11:53:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:22:36.803 11:53:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:22:36.803 11:53:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:36.803 11:53:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:37.370 nvme0n1 00:22:37.370 11:53:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:37.370 11:53:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:22:37.370 11:53:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:22:37.370 11:53:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:37.370 11:53:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:37.370 11:53:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:37.370 11:53:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:37.370 11:53:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:22:37.370 11:53:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:37.370 11:53:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:37.370 11:53:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:37.370 11:53:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:22:37.370 11:53:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe6144 4 00:22:37.370 11:53:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:22:37.370 11:53:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:22:37.370 11:53:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:22:37.370 11:53:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:22:37.370 11:53:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:ZGNlMmJjNmIxMzAwODU5NDQ4ZmU0MDM4ZDc2NDlhZTFmNjkxODgzY2M2YWQ2NDFkMTEzZGNmMTk1MWRlMDI3OPV7tT0=: 00:22:37.370 11:53:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:22:37.370 11:53:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:22:37.370 11:53:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:22:37.370 11:53:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:ZGNlMmJjNmIxMzAwODU5NDQ4ZmU0MDM4ZDc2NDlhZTFmNjkxODgzY2M2YWQ2NDFkMTEzZGNmMTk1MWRlMDI3OPV7tT0=: 00:22:37.370 11:53:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:22:37.370 11:53:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe6144 4 00:22:37.370 11:53:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:22:37.370 11:53:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:22:37.370 11:53:07 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:22:37.370 11:53:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:22:37.370 11:53:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:22:37.370 11:53:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:22:37.370 11:53:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:37.370 11:53:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:37.370 11:53:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:37.370 11:53:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:22:37.370 11:53:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:22:37.370 11:53:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:22:37.370 11:53:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:22:37.370 11:53:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:22:37.370 11:53:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:22:37.370 11:53:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:22:37.370 11:53:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:22:37.370 11:53:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:22:37.370 11:53:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:22:37.370 11:53:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:22:37.370 11:53:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:22:37.370 11:53:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:37.370 11:53:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:37.629 nvme0n1 00:22:37.630 11:53:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:37.630 11:53:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:22:37.630 11:53:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:22:37.630 11:53:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:37.630 11:53:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:37.630 11:53:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:37.630 11:53:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:37.630 11:53:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:22:37.630 11:53:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:37.630 11:53:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:37.630 11:53:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:37.630 11:53:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:22:37.630 11:53:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:22:37.630 11:53:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe8192 0 00:22:37.630 11:53:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:22:37.630 11:53:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:22:37.630 11:53:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:22:37.630 11:53:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:22:37.630 11:53:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NzU1YmQwNTlmZGE0MTIzMzJkNjkzZjM3N2Q1YTVkYTlGSJpb: 00:22:37.630 11:53:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:MDQyODRiZDIwNjk3YzU5MGViNDg3NDkxYjEzYzUzZWNiOWRkMTg5NDBmMGU1NTQ2OTA3M2U1ZmE0MTE3NTE4MNwSXOU=: 00:22:37.630 11:53:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:22:37.630 11:53:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:22:37.630 11:53:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NzU1YmQwNTlmZGE0MTIzMzJkNjkzZjM3N2Q1YTVkYTlGSJpb: 00:22:37.630 11:53:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:MDQyODRiZDIwNjk3YzU5MGViNDg3NDkxYjEzYzUzZWNiOWRkMTg5NDBmMGU1NTQ2OTA3M2U1ZmE0MTE3NTE4MNwSXOU=: ]] 00:22:37.630 11:53:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:MDQyODRiZDIwNjk3YzU5MGViNDg3NDkxYjEzYzUzZWNiOWRkMTg5NDBmMGU1NTQ2OTA3M2U1ZmE0MTE3NTE4MNwSXOU=: 00:22:37.630 11:53:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe8192 0 00:22:37.630 11:53:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:22:37.630 11:53:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:22:37.630 11:53:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:22:37.630 11:53:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:22:37.630 11:53:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:22:37.630 11:53:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:22:37.630 11:53:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:37.630 11:53:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:37.630 11:53:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:37.630 11:53:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:22:37.630 11:53:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:22:37.630 11:53:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:22:37.630 11:53:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:22:37.630 11:53:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:22:37.630 11:53:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:22:37.630 11:53:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:22:37.630 11:53:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:22:37.630 11:53:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:22:37.630 11:53:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:22:37.630 11:53:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:22:37.630 11:53:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:22:37.630 11:53:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:37.630 11:53:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:38.196 nvme0n1 00:22:38.196 11:53:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:38.196 11:53:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:22:38.196 11:53:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:22:38.196 11:53:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:38.196 11:53:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:38.196 11:53:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:38.196 11:53:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:38.196 11:53:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:22:38.196 11:53:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:38.196 11:53:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:38.196 11:53:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:38.196 11:53:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:22:38.196 11:53:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe8192 1 00:22:38.196 11:53:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:22:38.196 11:53:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:22:38.196 11:53:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:22:38.196 11:53:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:22:38.196 11:53:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NWE4MmY0OTIxMDRkYjA5OGFlZjQ0MjQwOWY4YTcwMjU0NTBiNWIyYjY2MjE4YWE25utBXg==: 00:22:38.196 11:53:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:YmIzMDNiZjE2NDFiMzJjZGVmNTQ3OTU0ZTEyZDJjMjBiYzEyZmRjMGU4Yzg0ZWExGGWPVA==: 00:22:38.196 11:53:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:22:38.196 11:53:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:22:38.196 11:53:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo 
DHHC-1:00:NWE4MmY0OTIxMDRkYjA5OGFlZjQ0MjQwOWY4YTcwMjU0NTBiNWIyYjY2MjE4YWE25utBXg==: 00:22:38.196 11:53:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:YmIzMDNiZjE2NDFiMzJjZGVmNTQ3OTU0ZTEyZDJjMjBiYzEyZmRjMGU4Yzg0ZWExGGWPVA==: ]] 00:22:38.196 11:53:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:YmIzMDNiZjE2NDFiMzJjZGVmNTQ3OTU0ZTEyZDJjMjBiYzEyZmRjMGU4Yzg0ZWExGGWPVA==: 00:22:38.196 11:53:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe8192 1 00:22:38.196 11:53:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:22:38.196 11:53:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:22:38.196 11:53:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:22:38.196 11:53:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:22:38.196 11:53:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:22:38.196 11:53:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:22:38.196 11:53:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:38.196 11:53:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:38.196 11:53:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:38.196 11:53:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:22:38.196 11:53:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:22:38.196 11:53:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:22:38.196 11:53:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:22:38.196 11:53:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:22:38.196 11:53:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:22:38.196 11:53:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:22:38.196 11:53:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:22:38.196 11:53:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:22:38.196 11:53:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:22:38.196 11:53:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:22:38.196 11:53:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:22:38.196 11:53:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:38.196 11:53:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:38.762 nvme0n1 00:22:38.762 11:53:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:38.762 11:53:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:22:38.762 11:53:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:38.762 11:53:08 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:22:38.762 11:53:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:38.762 11:53:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:38.762 11:53:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:38.762 11:53:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:22:38.762 11:53:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:38.762 11:53:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:38.762 11:53:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:38.762 11:53:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:22:38.762 11:53:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe8192 2 00:22:38.762 11:53:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:22:38.762 11:53:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:22:38.762 11:53:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:22:38.762 11:53:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:22:38.762 11:53:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:YjRjOGNmYjNlOWE5MjkwYzMyNGNhMjRiM2ZlMjk4Zjl7KAxS: 00:22:38.762 11:53:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:MTU1NmE1YzA3OTRhNWZlZGEyNjE4Njg3MWM3YTQzNThceavh: 00:22:38.762 11:53:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:22:38.763 11:53:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:22:38.763 11:53:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:YjRjOGNmYjNlOWE5MjkwYzMyNGNhMjRiM2ZlMjk4Zjl7KAxS: 00:22:38.763 11:53:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:MTU1NmE1YzA3OTRhNWZlZGEyNjE4Njg3MWM3YTQzNThceavh: ]] 00:22:38.763 11:53:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:MTU1NmE1YzA3OTRhNWZlZGEyNjE4Njg3MWM3YTQzNThceavh: 00:22:38.763 11:53:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe8192 2 00:22:38.763 11:53:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:22:38.763 11:53:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:22:38.763 11:53:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:22:38.763 11:53:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:22:38.763 11:53:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:22:38.763 11:53:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:22:38.763 11:53:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:38.763 11:53:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:38.763 11:53:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:38.763 11:53:08 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:22:38.763 11:53:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:22:38.763 11:53:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:22:38.763 11:53:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:22:38.763 11:53:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:22:38.763 11:53:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:22:38.763 11:53:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:22:38.763 11:53:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:22:38.763 11:53:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:22:38.763 11:53:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:22:38.763 11:53:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:22:38.763 11:53:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:22:38.763 11:53:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:38.763 11:53:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:39.330 nvme0n1 00:22:39.330 11:53:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:39.330 11:53:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:22:39.330 11:53:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:39.330 11:53:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:39.330 11:53:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:22:39.330 11:53:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:39.589 11:53:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:39.589 11:53:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:22:39.589 11:53:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:39.589 11:53:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:39.589 11:53:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:39.589 11:53:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:22:39.589 11:53:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe8192 3 00:22:39.589 11:53:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:22:39.589 11:53:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:22:39.589 11:53:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:22:39.589 11:53:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:22:39.589 11:53:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # 
key=DHHC-1:02:MTI1ZTRmMTBiM2M3NWI4ZGY2YzI5NzVmMTY3NDhhM2MxMTgyZmZhNWM3MzEzNWZmmoyemg==: 00:22:39.589 11:53:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:NzMwY2FhZjZmZTlmYzU3ZjhiMGU4NDBlY2M2NjRlZTRrRaMi: 00:22:39.589 11:53:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:22:39.589 11:53:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:22:39.589 11:53:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:MTI1ZTRmMTBiM2M3NWI4ZGY2YzI5NzVmMTY3NDhhM2MxMTgyZmZhNWM3MzEzNWZmmoyemg==: 00:22:39.589 11:53:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:NzMwY2FhZjZmZTlmYzU3ZjhiMGU4NDBlY2M2NjRlZTRrRaMi: ]] 00:22:39.589 11:53:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:NzMwY2FhZjZmZTlmYzU3ZjhiMGU4NDBlY2M2NjRlZTRrRaMi: 00:22:39.589 11:53:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe8192 3 00:22:39.589 11:53:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:22:39.589 11:53:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:22:39.589 11:53:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:22:39.589 11:53:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:22:39.589 11:53:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:22:39.589 11:53:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:22:39.589 11:53:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:39.589 11:53:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:39.589 11:53:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:39.589 11:53:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:22:39.589 11:53:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:22:39.589 11:53:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:22:39.589 11:53:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:22:39.589 11:53:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:22:39.589 11:53:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:22:39.590 11:53:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:22:39.590 11:53:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:22:39.590 11:53:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:22:39.590 11:53:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:22:39.590 11:53:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:22:39.590 11:53:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:22:39.590 11:53:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:39.590 
11:53:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:40.156 nvme0n1 00:22:40.156 11:53:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:40.156 11:53:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:22:40.156 11:53:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:22:40.156 11:53:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:40.156 11:53:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:40.156 11:53:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:40.156 11:53:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:40.156 11:53:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:22:40.156 11:53:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:40.156 11:53:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:40.156 11:53:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:40.156 11:53:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:22:40.156 11:53:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe8192 4 00:22:40.156 11:53:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:22:40.156 11:53:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:22:40.156 11:53:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:22:40.156 11:53:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:22:40.156 11:53:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:ZGNlMmJjNmIxMzAwODU5NDQ4ZmU0MDM4ZDc2NDlhZTFmNjkxODgzY2M2YWQ2NDFkMTEzZGNmMTk1MWRlMDI3OPV7tT0=: 00:22:40.156 11:53:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:22:40.156 11:53:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:22:40.156 11:53:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:22:40.156 11:53:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:ZGNlMmJjNmIxMzAwODU5NDQ4ZmU0MDM4ZDc2NDlhZTFmNjkxODgzY2M2YWQ2NDFkMTEzZGNmMTk1MWRlMDI3OPV7tT0=: 00:22:40.156 11:53:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:22:40.156 11:53:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe8192 4 00:22:40.156 11:53:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:22:40.156 11:53:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:22:40.156 11:53:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:22:40.156 11:53:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:22:40.156 11:53:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:22:40.156 11:53:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:22:40.156 11:53:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:22:40.156 11:53:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:40.156 11:53:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:40.156 11:53:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:22:40.156 11:53:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:22:40.156 11:53:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:22:40.156 11:53:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:22:40.157 11:53:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:22:40.157 11:53:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:22:40.157 11:53:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:22:40.157 11:53:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:22:40.157 11:53:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:22:40.157 11:53:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:22:40.157 11:53:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:22:40.157 11:53:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:22:40.157 11:53:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:40.157 11:53:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:40.723 nvme0n1 00:22:40.723 11:53:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:40.723 11:53:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:22:40.723 11:53:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:40.723 11:53:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:40.723 11:53:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:22:40.723 11:53:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:40.723 11:53:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:40.723 11:53:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:22:40.723 11:53:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:40.723 11:53:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:40.723 11:53:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:40.723 11:53:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@100 -- # for digest in "${digests[@]}" 00:22:40.723 11:53:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:22:40.723 11:53:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:22:40.723 11:53:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe2048 0 00:22:40.723 11:53:10 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:22:40.723 11:53:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:22:40.723 11:53:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:22:40.723 11:53:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:22:40.723 11:53:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NzU1YmQwNTlmZGE0MTIzMzJkNjkzZjM3N2Q1YTVkYTlGSJpb: 00:22:40.723 11:53:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:MDQyODRiZDIwNjk3YzU5MGViNDg3NDkxYjEzYzUzZWNiOWRkMTg5NDBmMGU1NTQ2OTA3M2U1ZmE0MTE3NTE4MNwSXOU=: 00:22:40.723 11:53:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:22:40.723 11:53:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:22:40.723 11:53:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NzU1YmQwNTlmZGE0MTIzMzJkNjkzZjM3N2Q1YTVkYTlGSJpb: 00:22:40.723 11:53:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:MDQyODRiZDIwNjk3YzU5MGViNDg3NDkxYjEzYzUzZWNiOWRkMTg5NDBmMGU1NTQ2OTA3M2U1ZmE0MTE3NTE4MNwSXOU=: ]] 00:22:40.723 11:53:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:MDQyODRiZDIwNjk3YzU5MGViNDg3NDkxYjEzYzUzZWNiOWRkMTg5NDBmMGU1NTQ2OTA3M2U1ZmE0MTE3NTE4MNwSXOU=: 00:22:40.723 11:53:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe2048 0 00:22:40.723 11:53:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:22:40.723 11:53:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:22:40.723 11:53:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:22:40.723 11:53:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:22:40.723 11:53:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:22:40.723 11:53:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:22:40.723 11:53:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:40.723 11:53:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:40.723 11:53:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:40.723 11:53:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:22:40.723 11:53:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:22:40.723 11:53:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:22:40.723 11:53:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:22:40.723 11:53:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:22:40.723 11:53:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:22:40.723 11:53:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:22:40.723 11:53:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:22:40.723 11:53:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:22:40.723 11:53:10 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:22:40.723 11:53:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:22:40.723 11:53:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:22:40.723 11:53:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:40.723 11:53:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:40.723 nvme0n1 00:22:40.723 11:53:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:40.723 11:53:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:22:40.723 11:53:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:22:40.724 11:53:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:40.724 11:53:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:40.724 11:53:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:40.724 11:53:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:40.724 11:53:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:22:40.724 11:53:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:40.724 11:53:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:40.982 11:53:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:40.982 11:53:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:22:40.982 11:53:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe2048 1 00:22:40.982 11:53:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:22:40.982 11:53:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:22:40.982 11:53:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:22:40.982 11:53:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:22:40.982 11:53:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NWE4MmY0OTIxMDRkYjA5OGFlZjQ0MjQwOWY4YTcwMjU0NTBiNWIyYjY2MjE4YWE25utBXg==: 00:22:40.982 11:53:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:YmIzMDNiZjE2NDFiMzJjZGVmNTQ3OTU0ZTEyZDJjMjBiYzEyZmRjMGU4Yzg0ZWExGGWPVA==: 00:22:40.982 11:53:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:22:40.982 11:53:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:22:40.982 11:53:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NWE4MmY0OTIxMDRkYjA5OGFlZjQ0MjQwOWY4YTcwMjU0NTBiNWIyYjY2MjE4YWE25utBXg==: 00:22:40.982 11:53:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:YmIzMDNiZjE2NDFiMzJjZGVmNTQ3OTU0ZTEyZDJjMjBiYzEyZmRjMGU4Yzg0ZWExGGWPVA==: ]] 00:22:40.982 11:53:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:YmIzMDNiZjE2NDFiMzJjZGVmNTQ3OTU0ZTEyZDJjMjBiYzEyZmRjMGU4Yzg0ZWExGGWPVA==: 00:22:40.982 11:53:10 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe2048 1 00:22:40.982 11:53:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:22:40.982 11:53:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:22:40.982 11:53:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:22:40.982 11:53:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:22:40.983 11:53:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:22:40.983 11:53:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:22:40.983 11:53:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:40.983 11:53:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:40.983 11:53:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:40.983 11:53:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:22:40.983 11:53:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:22:40.983 11:53:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:22:40.983 11:53:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:22:40.983 11:53:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:22:40.983 11:53:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:22:40.983 11:53:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:22:40.983 11:53:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:22:40.983 11:53:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:22:40.983 11:53:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:22:40.983 11:53:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:22:40.983 11:53:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:22:40.983 11:53:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:40.983 11:53:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:40.983 nvme0n1 00:22:40.983 11:53:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:40.983 11:53:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:22:40.983 11:53:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:22:40.983 11:53:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:40.983 11:53:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:40.983 11:53:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:40.983 11:53:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:40.983 11:53:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:22:40.983 11:53:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:40.983 11:53:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:40.983 11:53:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:40.983 11:53:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:22:40.983 11:53:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe2048 2 00:22:40.983 11:53:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:22:40.983 11:53:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:22:40.983 11:53:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:22:40.983 11:53:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:22:40.983 11:53:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:YjRjOGNmYjNlOWE5MjkwYzMyNGNhMjRiM2ZlMjk4Zjl7KAxS: 00:22:40.983 11:53:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:MTU1NmE1YzA3OTRhNWZlZGEyNjE4Njg3MWM3YTQzNThceavh: 00:22:40.983 11:53:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:22:40.983 11:53:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:22:40.983 11:53:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:YjRjOGNmYjNlOWE5MjkwYzMyNGNhMjRiM2ZlMjk4Zjl7KAxS: 00:22:40.983 11:53:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:MTU1NmE1YzA3OTRhNWZlZGEyNjE4Njg3MWM3YTQzNThceavh: ]] 00:22:40.983 11:53:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:MTU1NmE1YzA3OTRhNWZlZGEyNjE4Njg3MWM3YTQzNThceavh: 00:22:40.983 11:53:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe2048 2 00:22:40.983 11:53:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:22:40.983 11:53:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:22:40.983 11:53:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:22:40.983 11:53:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:22:40.983 11:53:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:22:40.983 11:53:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:22:40.983 11:53:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:40.983 11:53:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:40.983 11:53:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:40.983 11:53:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:22:40.983 11:53:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:22:40.983 11:53:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:22:40.983 11:53:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:22:40.983 11:53:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # 
ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:22:40.983 11:53:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:22:40.983 11:53:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:22:40.983 11:53:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:22:40.983 11:53:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:22:40.983 11:53:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:22:40.983 11:53:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:22:40.983 11:53:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:22:40.983 11:53:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:40.983 11:53:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:41.242 nvme0n1 00:22:41.242 11:53:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:41.242 11:53:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:22:41.242 11:53:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:22:41.242 11:53:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:41.242 11:53:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:41.242 11:53:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:41.242 11:53:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:41.242 11:53:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:22:41.242 11:53:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:41.242 11:53:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:41.242 11:53:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:41.242 11:53:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:22:41.242 11:53:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe2048 3 00:22:41.242 11:53:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:22:41.242 11:53:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:22:41.242 11:53:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:22:41.242 11:53:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:22:41.242 11:53:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:MTI1ZTRmMTBiM2M3NWI4ZGY2YzI5NzVmMTY3NDhhM2MxMTgyZmZhNWM3MzEzNWZmmoyemg==: 00:22:41.242 11:53:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:NzMwY2FhZjZmZTlmYzU3ZjhiMGU4NDBlY2M2NjRlZTRrRaMi: 00:22:41.242 11:53:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:22:41.242 11:53:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:22:41.242 11:53:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # 
echo DHHC-1:02:MTI1ZTRmMTBiM2M3NWI4ZGY2YzI5NzVmMTY3NDhhM2MxMTgyZmZhNWM3MzEzNWZmmoyemg==: 00:22:41.242 11:53:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:NzMwY2FhZjZmZTlmYzU3ZjhiMGU4NDBlY2M2NjRlZTRrRaMi: ]] 00:22:41.242 11:53:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:NzMwY2FhZjZmZTlmYzU3ZjhiMGU4NDBlY2M2NjRlZTRrRaMi: 00:22:41.242 11:53:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe2048 3 00:22:41.242 11:53:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:22:41.242 11:53:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:22:41.242 11:53:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:22:41.242 11:53:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:22:41.242 11:53:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:22:41.242 11:53:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:22:41.242 11:53:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:41.242 11:53:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:41.242 11:53:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:41.242 11:53:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:22:41.242 11:53:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:22:41.242 11:53:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:22:41.242 11:53:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:22:41.242 11:53:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:22:41.242 11:53:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:22:41.242 11:53:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:22:41.242 11:53:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:22:41.242 11:53:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:22:41.242 11:53:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:22:41.242 11:53:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:22:41.242 11:53:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:22:41.242 11:53:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:41.242 11:53:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:41.242 nvme0n1 00:22:41.242 11:53:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:41.242 11:53:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:22:41.242 11:53:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:22:41.242 11:53:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:22:41.242 11:53:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:41.242 11:53:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:41.502 11:53:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:41.502 11:53:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:22:41.502 11:53:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:41.502 11:53:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:41.502 11:53:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:41.502 11:53:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:22:41.502 11:53:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe2048 4 00:22:41.502 11:53:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:22:41.502 11:53:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:22:41.502 11:53:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:22:41.502 11:53:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:22:41.502 11:53:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:ZGNlMmJjNmIxMzAwODU5NDQ4ZmU0MDM4ZDc2NDlhZTFmNjkxODgzY2M2YWQ2NDFkMTEzZGNmMTk1MWRlMDI3OPV7tT0=: 00:22:41.502 11:53:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:22:41.502 11:53:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:22:41.502 11:53:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:22:41.502 11:53:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:ZGNlMmJjNmIxMzAwODU5NDQ4ZmU0MDM4ZDc2NDlhZTFmNjkxODgzY2M2YWQ2NDFkMTEzZGNmMTk1MWRlMDI3OPV7tT0=: 00:22:41.502 11:53:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:22:41.502 11:53:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe2048 4 00:22:41.502 11:53:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:22:41.502 11:53:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:22:41.502 11:53:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:22:41.502 11:53:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:22:41.502 11:53:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:22:41.502 11:53:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:22:41.502 11:53:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:41.502 11:53:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:41.502 11:53:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:41.502 11:53:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:22:41.502 11:53:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:22:41.502 11:53:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@770 -- # ip_candidates=() 00:22:41.502 11:53:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:22:41.502 11:53:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:22:41.502 11:53:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:22:41.502 11:53:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:22:41.502 11:53:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:22:41.502 11:53:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:22:41.502 11:53:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:22:41.502 11:53:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:22:41.502 11:53:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:22:41.502 11:53:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:41.502 11:53:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:41.502 nvme0n1 00:22:41.502 11:53:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:41.502 11:53:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:22:41.502 11:53:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:22:41.502 11:53:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:41.502 11:53:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:41.502 11:53:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:41.502 11:53:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:41.502 11:53:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:22:41.502 11:53:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:41.502 11:53:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:41.502 11:53:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:41.502 11:53:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:22:41.502 11:53:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:22:41.502 11:53:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe3072 0 00:22:41.502 11:53:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:22:41.502 11:53:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:22:41.502 11:53:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:22:41.502 11:53:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:22:41.502 11:53:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NzU1YmQwNTlmZGE0MTIzMzJkNjkzZjM3N2Q1YTVkYTlGSJpb: 00:22:41.502 11:53:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # 
ckey=DHHC-1:03:MDQyODRiZDIwNjk3YzU5MGViNDg3NDkxYjEzYzUzZWNiOWRkMTg5NDBmMGU1NTQ2OTA3M2U1ZmE0MTE3NTE4MNwSXOU=: 00:22:41.502 11:53:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:22:41.502 11:53:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:22:41.502 11:53:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NzU1YmQwNTlmZGE0MTIzMzJkNjkzZjM3N2Q1YTVkYTlGSJpb: 00:22:41.502 11:53:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:MDQyODRiZDIwNjk3YzU5MGViNDg3NDkxYjEzYzUzZWNiOWRkMTg5NDBmMGU1NTQ2OTA3M2U1ZmE0MTE3NTE4MNwSXOU=: ]] 00:22:41.502 11:53:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:MDQyODRiZDIwNjk3YzU5MGViNDg3NDkxYjEzYzUzZWNiOWRkMTg5NDBmMGU1NTQ2OTA3M2U1ZmE0MTE3NTE4MNwSXOU=: 00:22:41.502 11:53:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe3072 0 00:22:41.502 11:53:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:22:41.502 11:53:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:22:41.502 11:53:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:22:41.502 11:53:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:22:41.502 11:53:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:22:41.502 11:53:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:22:41.502 11:53:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:41.502 11:53:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:41.502 11:53:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:41.502 11:53:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:22:41.502 11:53:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:22:41.503 11:53:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:22:41.503 11:53:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:22:41.503 11:53:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:22:41.503 11:53:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:22:41.503 11:53:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:22:41.503 11:53:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:22:41.503 11:53:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:22:41.503 11:53:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:22:41.503 11:53:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:22:41.503 11:53:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:22:41.503 11:53:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:41.503 11:53:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@10 -- # set +x 00:22:41.761 nvme0n1 00:22:41.761 11:53:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:41.761 11:53:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:22:41.761 11:53:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:22:41.761 11:53:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:41.761 11:53:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:41.761 11:53:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:41.761 11:53:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:41.761 11:53:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:22:41.761 11:53:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:41.761 11:53:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:41.761 11:53:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:41.761 11:53:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:22:41.761 11:53:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe3072 1 00:22:41.761 11:53:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:22:41.761 11:53:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:22:41.761 11:53:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:22:41.761 11:53:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:22:41.761 11:53:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NWE4MmY0OTIxMDRkYjA5OGFlZjQ0MjQwOWY4YTcwMjU0NTBiNWIyYjY2MjE4YWE25utBXg==: 00:22:41.761 11:53:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:YmIzMDNiZjE2NDFiMzJjZGVmNTQ3OTU0ZTEyZDJjMjBiYzEyZmRjMGU4Yzg0ZWExGGWPVA==: 00:22:41.761 11:53:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:22:41.761 11:53:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:22:41.761 11:53:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NWE4MmY0OTIxMDRkYjA5OGFlZjQ0MjQwOWY4YTcwMjU0NTBiNWIyYjY2MjE4YWE25utBXg==: 00:22:41.761 11:53:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:YmIzMDNiZjE2NDFiMzJjZGVmNTQ3OTU0ZTEyZDJjMjBiYzEyZmRjMGU4Yzg0ZWExGGWPVA==: ]] 00:22:41.761 11:53:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:YmIzMDNiZjE2NDFiMzJjZGVmNTQ3OTU0ZTEyZDJjMjBiYzEyZmRjMGU4Yzg0ZWExGGWPVA==: 00:22:41.762 11:53:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe3072 1 00:22:41.762 11:53:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:22:41.762 11:53:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:22:41.762 11:53:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:22:41.762 11:53:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:22:41.762 11:53:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key 
"ckey${keyid}"}) 00:22:41.762 11:53:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:22:41.762 11:53:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:41.762 11:53:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:41.762 11:53:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:41.762 11:53:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:22:41.762 11:53:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:22:41.762 11:53:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:22:41.762 11:53:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:22:41.762 11:53:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:22:41.762 11:53:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:22:41.762 11:53:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:22:41.762 11:53:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:22:41.762 11:53:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:22:41.762 11:53:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:22:41.762 11:53:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:22:41.762 11:53:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:22:41.762 11:53:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:41.762 11:53:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:42.020 nvme0n1 00:22:42.020 11:53:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:42.020 11:53:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:22:42.020 11:53:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:42.020 11:53:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:22:42.020 11:53:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:42.020 11:53:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:42.020 11:53:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:42.020 11:53:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:22:42.020 11:53:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:42.020 11:53:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:42.020 11:53:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:42.020 11:53:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:22:42.020 11:53:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe3072 2 00:22:42.020 
11:53:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:22:42.020 11:53:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:22:42.020 11:53:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:22:42.021 11:53:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:22:42.021 11:53:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:YjRjOGNmYjNlOWE5MjkwYzMyNGNhMjRiM2ZlMjk4Zjl7KAxS: 00:22:42.021 11:53:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:MTU1NmE1YzA3OTRhNWZlZGEyNjE4Njg3MWM3YTQzNThceavh: 00:22:42.021 11:53:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:22:42.021 11:53:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:22:42.021 11:53:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:YjRjOGNmYjNlOWE5MjkwYzMyNGNhMjRiM2ZlMjk4Zjl7KAxS: 00:22:42.021 11:53:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:MTU1NmE1YzA3OTRhNWZlZGEyNjE4Njg3MWM3YTQzNThceavh: ]] 00:22:42.021 11:53:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:MTU1NmE1YzA3OTRhNWZlZGEyNjE4Njg3MWM3YTQzNThceavh: 00:22:42.021 11:53:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe3072 2 00:22:42.021 11:53:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:22:42.021 11:53:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:22:42.021 11:53:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:22:42.021 11:53:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:22:42.021 11:53:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:22:42.021 11:53:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:22:42.021 11:53:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:42.021 11:53:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:42.021 11:53:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:42.021 11:53:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:22:42.021 11:53:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:22:42.021 11:53:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:22:42.021 11:53:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:22:42.021 11:53:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:22:42.021 11:53:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:22:42.021 11:53:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:22:42.021 11:53:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:22:42.021 11:53:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:22:42.021 11:53:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:22:42.021 11:53:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@783 -- # echo 10.0.0.1 00:22:42.021 11:53:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:22:42.021 11:53:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:42.021 11:53:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:42.021 nvme0n1 00:22:42.021 11:53:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:42.021 11:53:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:22:42.021 11:53:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:22:42.021 11:53:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:42.021 11:53:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:42.021 11:53:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:42.280 11:53:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:42.280 11:53:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:22:42.280 11:53:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:42.280 11:53:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:42.280 11:53:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:42.280 11:53:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:22:42.280 11:53:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe3072 3 00:22:42.280 11:53:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:22:42.280 11:53:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:22:42.280 11:53:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:22:42.280 11:53:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:22:42.280 11:53:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:MTI1ZTRmMTBiM2M3NWI4ZGY2YzI5NzVmMTY3NDhhM2MxMTgyZmZhNWM3MzEzNWZmmoyemg==: 00:22:42.280 11:53:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:NzMwY2FhZjZmZTlmYzU3ZjhiMGU4NDBlY2M2NjRlZTRrRaMi: 00:22:42.280 11:53:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:22:42.280 11:53:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:22:42.280 11:53:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:MTI1ZTRmMTBiM2M3NWI4ZGY2YzI5NzVmMTY3NDhhM2MxMTgyZmZhNWM3MzEzNWZmmoyemg==: 00:22:42.280 11:53:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:NzMwY2FhZjZmZTlmYzU3ZjhiMGU4NDBlY2M2NjRlZTRrRaMi: ]] 00:22:42.280 11:53:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:NzMwY2FhZjZmZTlmYzU3ZjhiMGU4NDBlY2M2NjRlZTRrRaMi: 00:22:42.280 11:53:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe3072 3 00:22:42.280 11:53:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:22:42.280 
11:53:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:22:42.280 11:53:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:22:42.280 11:53:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:22:42.280 11:53:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:22:42.280 11:53:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:22:42.280 11:53:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:42.280 11:53:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:42.280 11:53:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:42.280 11:53:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:22:42.280 11:53:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:22:42.280 11:53:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:22:42.280 11:53:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:22:42.280 11:53:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:22:42.280 11:53:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:22:42.280 11:53:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:22:42.280 11:53:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:22:42.280 11:53:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:22:42.280 11:53:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:22:42.280 11:53:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:22:42.280 11:53:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:22:42.280 11:53:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:42.280 11:53:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:42.280 nvme0n1 00:22:42.280 11:53:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:42.280 11:53:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:22:42.280 11:53:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:42.280 11:53:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:42.280 11:53:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:22:42.280 11:53:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:42.280 11:53:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:42.280 11:53:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:22:42.280 11:53:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:42.280 11:53:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@10 -- # set +x 00:22:42.280 11:53:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:42.280 11:53:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:22:42.280 11:53:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe3072 4 00:22:42.280 11:53:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:22:42.280 11:53:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:22:42.280 11:53:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:22:42.280 11:53:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:22:42.280 11:53:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:ZGNlMmJjNmIxMzAwODU5NDQ4ZmU0MDM4ZDc2NDlhZTFmNjkxODgzY2M2YWQ2NDFkMTEzZGNmMTk1MWRlMDI3OPV7tT0=: 00:22:42.280 11:53:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:22:42.280 11:53:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:22:42.280 11:53:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:22:42.280 11:53:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:ZGNlMmJjNmIxMzAwODU5NDQ4ZmU0MDM4ZDc2NDlhZTFmNjkxODgzY2M2YWQ2NDFkMTEzZGNmMTk1MWRlMDI3OPV7tT0=: 00:22:42.280 11:53:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:22:42.280 11:53:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe3072 4 00:22:42.280 11:53:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:22:42.280 11:53:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:22:42.280 11:53:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:22:42.280 11:53:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:22:42.280 11:53:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:22:42.280 11:53:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:22:42.280 11:53:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:42.280 11:53:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:42.280 11:53:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:42.280 11:53:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:22:42.280 11:53:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:22:42.541 11:53:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:22:42.541 11:53:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:22:42.541 11:53:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:22:42.541 11:53:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:22:42.541 11:53:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:22:42.541 11:53:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:22:42.541 11:53:12 nvmf_tcp.nvmf_host.nvmf_auth_host 
-- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:22:42.541 11:53:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:22:42.541 11:53:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:22:42.541 11:53:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:22:42.541 11:53:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:42.541 11:53:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:42.541 nvme0n1 00:22:42.541 11:53:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:42.541 11:53:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:22:42.541 11:53:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:22:42.541 11:53:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:42.541 11:53:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:42.541 11:53:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:42.541 11:53:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:42.541 11:53:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:22:42.541 11:53:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:42.541 11:53:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:42.541 11:53:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:42.541 11:53:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:22:42.541 11:53:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:22:42.541 11:53:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe4096 0 00:22:42.541 11:53:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:22:42.541 11:53:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:22:42.541 11:53:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:22:42.541 11:53:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:22:42.541 11:53:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NzU1YmQwNTlmZGE0MTIzMzJkNjkzZjM3N2Q1YTVkYTlGSJpb: 00:22:42.541 11:53:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:MDQyODRiZDIwNjk3YzU5MGViNDg3NDkxYjEzYzUzZWNiOWRkMTg5NDBmMGU1NTQ2OTA3M2U1ZmE0MTE3NTE4MNwSXOU=: 00:22:42.541 11:53:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:22:42.541 11:53:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:22:42.541 11:53:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NzU1YmQwNTlmZGE0MTIzMzJkNjkzZjM3N2Q1YTVkYTlGSJpb: 00:22:42.542 11:53:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:MDQyODRiZDIwNjk3YzU5MGViNDg3NDkxYjEzYzUzZWNiOWRkMTg5NDBmMGU1NTQ2OTA3M2U1ZmE0MTE3NTE4MNwSXOU=: ]] 00:22:42.542 11:53:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@51 -- # echo DHHC-1:03:MDQyODRiZDIwNjk3YzU5MGViNDg3NDkxYjEzYzUzZWNiOWRkMTg5NDBmMGU1NTQ2OTA3M2U1ZmE0MTE3NTE4MNwSXOU=: 00:22:42.542 11:53:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe4096 0 00:22:42.542 11:53:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:22:42.542 11:53:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:22:42.542 11:53:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:22:42.542 11:53:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:22:42.542 11:53:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:22:42.542 11:53:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:22:42.542 11:53:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:42.542 11:53:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:42.542 11:53:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:42.542 11:53:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:22:42.542 11:53:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:22:42.542 11:53:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:22:42.542 11:53:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:22:42.542 11:53:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:22:42.542 11:53:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:22:42.542 11:53:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:22:42.542 11:53:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:22:42.542 11:53:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:22:42.542 11:53:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:22:42.542 11:53:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:22:42.542 11:53:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:22:42.542 11:53:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:42.542 11:53:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:42.820 nvme0n1 00:22:42.820 11:53:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:42.820 11:53:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:22:42.820 11:53:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:22:42.820 11:53:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:42.820 11:53:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:42.820 11:53:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:42.820 
11:53:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:42.820 11:53:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:22:42.820 11:53:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:42.820 11:53:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:42.820 11:53:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:42.820 11:53:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:22:42.820 11:53:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe4096 1 00:22:42.820 11:53:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:22:42.820 11:53:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:22:42.820 11:53:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:22:42.820 11:53:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:22:42.820 11:53:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NWE4MmY0OTIxMDRkYjA5OGFlZjQ0MjQwOWY4YTcwMjU0NTBiNWIyYjY2MjE4YWE25utBXg==: 00:22:42.820 11:53:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:YmIzMDNiZjE2NDFiMzJjZGVmNTQ3OTU0ZTEyZDJjMjBiYzEyZmRjMGU4Yzg0ZWExGGWPVA==: 00:22:42.820 11:53:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:22:42.820 11:53:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:22:42.820 11:53:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NWE4MmY0OTIxMDRkYjA5OGFlZjQ0MjQwOWY4YTcwMjU0NTBiNWIyYjY2MjE4YWE25utBXg==: 00:22:42.820 11:53:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:YmIzMDNiZjE2NDFiMzJjZGVmNTQ3OTU0ZTEyZDJjMjBiYzEyZmRjMGU4Yzg0ZWExGGWPVA==: ]] 00:22:42.820 11:53:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:YmIzMDNiZjE2NDFiMzJjZGVmNTQ3OTU0ZTEyZDJjMjBiYzEyZmRjMGU4Yzg0ZWExGGWPVA==: 00:22:42.820 11:53:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe4096 1 00:22:42.820 11:53:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:22:42.820 11:53:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:22:42.820 11:53:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:22:42.820 11:53:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:22:42.820 11:53:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:22:42.820 11:53:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:22:42.820 11:53:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:42.820 11:53:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:42.820 11:53:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:42.820 11:53:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:22:42.820 11:53:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:22:42.820 11:53:12 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:22:42.820 11:53:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:22:42.820 11:53:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:22:42.820 11:53:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:22:42.820 11:53:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:22:42.820 11:53:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:22:42.820 11:53:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:22:42.820 11:53:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:22:42.820 11:53:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:22:42.820 11:53:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:22:42.820 11:53:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:42.820 11:53:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:43.103 nvme0n1 00:22:43.103 11:53:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:43.103 11:53:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:22:43.103 11:53:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:43.103 11:53:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:43.103 11:53:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:22:43.103 11:53:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:43.103 11:53:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:43.103 11:53:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:22:43.103 11:53:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:43.103 11:53:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:43.103 11:53:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:43.103 11:53:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:22:43.103 11:53:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe4096 2 00:22:43.103 11:53:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:22:43.103 11:53:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:22:43.103 11:53:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:22:43.103 11:53:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:22:43.103 11:53:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:YjRjOGNmYjNlOWE5MjkwYzMyNGNhMjRiM2ZlMjk4Zjl7KAxS: 00:22:43.103 11:53:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:MTU1NmE1YzA3OTRhNWZlZGEyNjE4Njg3MWM3YTQzNThceavh: 00:22:43.103 11:53:13 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:22:43.103 11:53:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:22:43.103 11:53:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:YjRjOGNmYjNlOWE5MjkwYzMyNGNhMjRiM2ZlMjk4Zjl7KAxS: 00:22:43.103 11:53:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:MTU1NmE1YzA3OTRhNWZlZGEyNjE4Njg3MWM3YTQzNThceavh: ]] 00:22:43.103 11:53:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:MTU1NmE1YzA3OTRhNWZlZGEyNjE4Njg3MWM3YTQzNThceavh: 00:22:43.103 11:53:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe4096 2 00:22:43.103 11:53:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:22:43.103 11:53:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:22:43.103 11:53:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:22:43.103 11:53:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:22:43.103 11:53:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:22:43.103 11:53:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:22:43.103 11:53:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:43.103 11:53:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:43.103 11:53:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:43.103 11:53:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:22:43.103 11:53:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:22:43.103 11:53:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:22:43.103 11:53:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:22:43.103 11:53:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:22:43.103 11:53:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:22:43.103 11:53:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:22:43.103 11:53:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:22:43.103 11:53:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:22:43.103 11:53:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:22:43.103 11:53:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:22:43.103 11:53:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:22:43.103 11:53:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:43.103 11:53:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:43.374 nvme0n1 00:22:43.374 11:53:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:43.374 11:53:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:22:43.374 11:53:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:43.374 11:53:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:22:43.374 11:53:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:43.374 11:53:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:43.374 11:53:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:43.374 11:53:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:22:43.374 11:53:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:43.374 11:53:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:43.374 11:53:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:43.374 11:53:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:22:43.374 11:53:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe4096 3 00:22:43.374 11:53:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:22:43.374 11:53:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:22:43.374 11:53:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:22:43.374 11:53:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:22:43.374 11:53:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:MTI1ZTRmMTBiM2M3NWI4ZGY2YzI5NzVmMTY3NDhhM2MxMTgyZmZhNWM3MzEzNWZmmoyemg==: 00:22:43.374 11:53:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:NzMwY2FhZjZmZTlmYzU3ZjhiMGU4NDBlY2M2NjRlZTRrRaMi: 00:22:43.374 11:53:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:22:43.374 11:53:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:22:43.374 11:53:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:MTI1ZTRmMTBiM2M3NWI4ZGY2YzI5NzVmMTY3NDhhM2MxMTgyZmZhNWM3MzEzNWZmmoyemg==: 00:22:43.374 11:53:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:NzMwY2FhZjZmZTlmYzU3ZjhiMGU4NDBlY2M2NjRlZTRrRaMi: ]] 00:22:43.374 11:53:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:NzMwY2FhZjZmZTlmYzU3ZjhiMGU4NDBlY2M2NjRlZTRrRaMi: 00:22:43.374 11:53:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe4096 3 00:22:43.374 11:53:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:22:43.374 11:53:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:22:43.374 11:53:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:22:43.374 11:53:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:22:43.374 11:53:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:22:43.374 11:53:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:22:43.374 11:53:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:43.374 11:53:13 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:43.374 11:53:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:43.374 11:53:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:22:43.374 11:53:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:22:43.374 11:53:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:22:43.374 11:53:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:22:43.374 11:53:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:22:43.374 11:53:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:22:43.374 11:53:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:22:43.374 11:53:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:22:43.374 11:53:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:22:43.374 11:53:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:22:43.374 11:53:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:22:43.374 11:53:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:22:43.374 11:53:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:43.374 11:53:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:43.633 nvme0n1 00:22:43.633 11:53:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:43.633 11:53:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:22:43.633 11:53:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:43.633 11:53:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:22:43.633 11:53:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:43.633 11:53:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:43.633 11:53:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:43.633 11:53:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:22:43.633 11:53:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:43.633 11:53:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:43.633 11:53:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:43.633 11:53:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:22:43.633 11:53:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe4096 4 00:22:43.633 11:53:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:22:43.633 11:53:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:22:43.633 11:53:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:22:43.633 
11:53:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:22:43.633 11:53:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:ZGNlMmJjNmIxMzAwODU5NDQ4ZmU0MDM4ZDc2NDlhZTFmNjkxODgzY2M2YWQ2NDFkMTEzZGNmMTk1MWRlMDI3OPV7tT0=: 00:22:43.633 11:53:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:22:43.634 11:53:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:22:43.634 11:53:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:22:43.634 11:53:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:ZGNlMmJjNmIxMzAwODU5NDQ4ZmU0MDM4ZDc2NDlhZTFmNjkxODgzY2M2YWQ2NDFkMTEzZGNmMTk1MWRlMDI3OPV7tT0=: 00:22:43.634 11:53:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:22:43.634 11:53:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe4096 4 00:22:43.634 11:53:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:22:43.634 11:53:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:22:43.634 11:53:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:22:43.634 11:53:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:22:43.634 11:53:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:22:43.634 11:53:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:22:43.634 11:53:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:43.634 11:53:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:43.634 11:53:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:43.634 11:53:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:22:43.634 11:53:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:22:43.634 11:53:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:22:43.634 11:53:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:22:43.634 11:53:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:22:43.634 11:53:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:22:43.634 11:53:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:22:43.634 11:53:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:22:43.634 11:53:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:22:43.634 11:53:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:22:43.634 11:53:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:22:43.634 11:53:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:22:43.634 11:53:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:43.634 11:53:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 
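Each block of the trace above is one iteration of the same cycle: host/auth.sh@42-51 (nvmet_auth_set_key) programs the target with the digest, DH group and DHHC-1 secret for one keyid, and host/auth.sh@55-65 (connect_authenticate) reconfigures the SPDK host, attaches nvme0 over TCP with DH-HMAC-CHAP, checks the controller name and detaches it. Condensed into a minimal sketch (assumptions: rpc_cmd is the autotest wrapper around scripts/rpc.py, and the nvmet_auth_set_key echoes, whose destinations the xtrace does not show, land in the kernel nvmet configfs host attributes):

  # --- target side: nvmet_auth_set_key sha512 ffdhe4096 <keyid> ---------------
  # The xtrace only shows the echo payloads; the configfs destinations below are
  # the usual kernel-nvmet attributes and are an assumption of this sketch.
  host_cfg=/sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0
  echo 'hmac(sha512)'     > "$host_cfg/dhchap_hash"        # host/auth.sh@48
  echo ffdhe4096          > "$host_cfg/dhchap_dhgroup"     # host/auth.sh@49
  echo "${keys[keyid]}"   > "$host_cfg/dhchap_key"         # host/auth.sh@50, DHHC-1 secret
  [[ -n ${ckeys[keyid]} ]] &&
      echo "${ckeys[keyid]}" > "$host_cfg/dhchap_ctrl_key" # host/auth.sh@51 (skipped if empty)

  # --- host side: connect_authenticate sha512 ffdhe4096 <keyid> ---------------
  # key${keyid}/ckey${keyid} are key names, presumably registered with SPDK's
  # keyring earlier in host/auth.sh (not shown in this part of the trace).
  ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"})   # empty when no ctrlr key
  rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096
  rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 \
      -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 \
      --dhchap-key "key${keyid}" "${ckey[@]}"
  [[ $(rpc_cmd bdev_nvme_get_controllers | jq -r '.[].name') == nvme0 ]]
  rpc_cmd bdev_nvme_detach_controller nvme0

The ${ckeys[keyid]:+...} expansion is why the attach calls for keyid 4 above carry no --dhchap-ctrlr-key: that entry has no controller secret, so both the target-side ctrl key write and the host-side flag are simply omitted.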
00:22:43.634 nvme0n1 00:22:43.634 11:53:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:43.893 11:53:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:22:43.893 11:53:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:43.893 11:53:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:43.893 11:53:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:22:43.893 11:53:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:43.893 11:53:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:43.893 11:53:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:22:43.893 11:53:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:43.893 11:53:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:43.893 11:53:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:43.893 11:53:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:22:43.893 11:53:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:22:43.893 11:53:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe6144 0 00:22:43.893 11:53:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:22:43.893 11:53:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:22:43.893 11:53:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:22:43.893 11:53:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:22:43.893 11:53:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NzU1YmQwNTlmZGE0MTIzMzJkNjkzZjM3N2Q1YTVkYTlGSJpb: 00:22:43.893 11:53:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:MDQyODRiZDIwNjk3YzU5MGViNDg3NDkxYjEzYzUzZWNiOWRkMTg5NDBmMGU1NTQ2OTA3M2U1ZmE0MTE3NTE4MNwSXOU=: 00:22:43.893 11:53:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:22:43.893 11:53:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:22:43.893 11:53:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NzU1YmQwNTlmZGE0MTIzMzJkNjkzZjM3N2Q1YTVkYTlGSJpb: 00:22:43.893 11:53:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:MDQyODRiZDIwNjk3YzU5MGViNDg3NDkxYjEzYzUzZWNiOWRkMTg5NDBmMGU1NTQ2OTA3M2U1ZmE0MTE3NTE4MNwSXOU=: ]] 00:22:43.893 11:53:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:MDQyODRiZDIwNjk3YzU5MGViNDg3NDkxYjEzYzUzZWNiOWRkMTg5NDBmMGU1NTQ2OTA3M2U1ZmE0MTE3NTE4MNwSXOU=: 00:22:43.893 11:53:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe6144 0 00:22:43.893 11:53:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:22:43.893 11:53:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:22:43.893 11:53:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:22:43.893 11:53:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:22:43.893 11:53:13 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:22:43.893 11:53:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:22:43.893 11:53:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:43.893 11:53:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:43.893 11:53:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:43.893 11:53:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:22:43.893 11:53:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:22:43.893 11:53:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:22:43.893 11:53:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:22:43.893 11:53:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:22:43.893 11:53:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:22:43.893 11:53:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:22:43.893 11:53:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:22:43.893 11:53:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:22:43.893 11:53:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:22:43.893 11:53:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:22:43.893 11:53:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:22:43.893 11:53:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:43.893 11:53:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:44.152 nvme0n1 00:22:44.152 11:53:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:44.152 11:53:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:22:44.152 11:53:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:22:44.152 11:53:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:44.153 11:53:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:44.153 11:53:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:44.153 11:53:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:44.153 11:53:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:22:44.153 11:53:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:44.153 11:53:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:44.153 11:53:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:44.153 11:53:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:22:44.153 11:53:14 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe6144 1 00:22:44.153 11:53:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:22:44.153 11:53:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:22:44.153 11:53:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:22:44.153 11:53:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:22:44.153 11:53:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NWE4MmY0OTIxMDRkYjA5OGFlZjQ0MjQwOWY4YTcwMjU0NTBiNWIyYjY2MjE4YWE25utBXg==: 00:22:44.153 11:53:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:YmIzMDNiZjE2NDFiMzJjZGVmNTQ3OTU0ZTEyZDJjMjBiYzEyZmRjMGU4Yzg0ZWExGGWPVA==: 00:22:44.153 11:53:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:22:44.153 11:53:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:22:44.153 11:53:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NWE4MmY0OTIxMDRkYjA5OGFlZjQ0MjQwOWY4YTcwMjU0NTBiNWIyYjY2MjE4YWE25utBXg==: 00:22:44.153 11:53:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:YmIzMDNiZjE2NDFiMzJjZGVmNTQ3OTU0ZTEyZDJjMjBiYzEyZmRjMGU4Yzg0ZWExGGWPVA==: ]] 00:22:44.153 11:53:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:YmIzMDNiZjE2NDFiMzJjZGVmNTQ3OTU0ZTEyZDJjMjBiYzEyZmRjMGU4Yzg0ZWExGGWPVA==: 00:22:44.153 11:53:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe6144 1 00:22:44.153 11:53:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:22:44.153 11:53:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:22:44.153 11:53:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:22:44.153 11:53:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:22:44.153 11:53:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:22:44.153 11:53:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:22:44.153 11:53:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:44.153 11:53:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:44.153 11:53:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:44.153 11:53:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:22:44.153 11:53:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:22:44.153 11:53:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:22:44.153 11:53:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:22:44.153 11:53:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:22:44.153 11:53:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:22:44.153 11:53:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:22:44.153 11:53:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:22:44.153 11:53:14 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:22:44.153 11:53:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:22:44.153 11:53:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:22:44.153 11:53:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:22:44.153 11:53:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:44.153 11:53:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:44.412 nvme0n1 00:22:44.412 11:53:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:44.412 11:53:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:22:44.412 11:53:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:22:44.412 11:53:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:44.412 11:53:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:44.412 11:53:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:44.671 11:53:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:44.671 11:53:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:22:44.671 11:53:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:44.671 11:53:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:44.671 11:53:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:44.671 11:53:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:22:44.671 11:53:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe6144 2 00:22:44.671 11:53:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:22:44.671 11:53:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:22:44.671 11:53:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:22:44.671 11:53:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:22:44.671 11:53:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:YjRjOGNmYjNlOWE5MjkwYzMyNGNhMjRiM2ZlMjk4Zjl7KAxS: 00:22:44.671 11:53:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:MTU1NmE1YzA3OTRhNWZlZGEyNjE4Njg3MWM3YTQzNThceavh: 00:22:44.671 11:53:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:22:44.671 11:53:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:22:44.671 11:53:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:YjRjOGNmYjNlOWE5MjkwYzMyNGNhMjRiM2ZlMjk4Zjl7KAxS: 00:22:44.671 11:53:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:MTU1NmE1YzA3OTRhNWZlZGEyNjE4Njg3MWM3YTQzNThceavh: ]] 00:22:44.671 11:53:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:MTU1NmE1YzA3OTRhNWZlZGEyNjE4Njg3MWM3YTQzNThceavh: 00:22:44.671 11:53:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@104 -- # connect_authenticate sha512 ffdhe6144 2 00:22:44.671 11:53:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:22:44.671 11:53:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:22:44.671 11:53:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:22:44.671 11:53:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:22:44.671 11:53:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:22:44.671 11:53:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:22:44.671 11:53:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:44.671 11:53:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:44.671 11:53:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:44.671 11:53:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:22:44.671 11:53:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:22:44.671 11:53:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:22:44.671 11:53:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:22:44.671 11:53:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:22:44.671 11:53:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:22:44.671 11:53:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:22:44.671 11:53:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:22:44.671 11:53:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:22:44.671 11:53:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:22:44.671 11:53:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:22:44.671 11:53:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:22:44.671 11:53:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:44.671 11:53:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:44.930 nvme0n1 00:22:44.930 11:53:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:44.930 11:53:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:22:44.930 11:53:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:22:44.930 11:53:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:44.931 11:53:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:44.931 11:53:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:44.931 11:53:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:44.931 11:53:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd 
bdev_nvme_detach_controller nvme0 00:22:44.931 11:53:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:44.931 11:53:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:44.931 11:53:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:44.931 11:53:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:22:44.931 11:53:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe6144 3 00:22:44.931 11:53:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:22:44.931 11:53:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:22:44.931 11:53:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:22:44.931 11:53:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:22:44.931 11:53:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:MTI1ZTRmMTBiM2M3NWI4ZGY2YzI5NzVmMTY3NDhhM2MxMTgyZmZhNWM3MzEzNWZmmoyemg==: 00:22:44.931 11:53:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:NzMwY2FhZjZmZTlmYzU3ZjhiMGU4NDBlY2M2NjRlZTRrRaMi: 00:22:44.931 11:53:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:22:44.931 11:53:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:22:44.931 11:53:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:MTI1ZTRmMTBiM2M3NWI4ZGY2YzI5NzVmMTY3NDhhM2MxMTgyZmZhNWM3MzEzNWZmmoyemg==: 00:22:44.931 11:53:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:NzMwY2FhZjZmZTlmYzU3ZjhiMGU4NDBlY2M2NjRlZTRrRaMi: ]] 00:22:44.931 11:53:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:NzMwY2FhZjZmZTlmYzU3ZjhiMGU4NDBlY2M2NjRlZTRrRaMi: 00:22:44.931 11:53:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe6144 3 00:22:44.931 11:53:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:22:44.931 11:53:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:22:44.931 11:53:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:22:44.931 11:53:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:22:44.931 11:53:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:22:44.931 11:53:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:22:44.931 11:53:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:44.931 11:53:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:44.931 11:53:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:44.931 11:53:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:22:44.931 11:53:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:22:44.931 11:53:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:22:44.931 11:53:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:22:44.931 11:53:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # 
ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:22:44.931 11:53:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:22:44.931 11:53:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:22:44.931 11:53:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:22:44.931 11:53:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:22:44.931 11:53:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:22:44.931 11:53:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:22:44.931 11:53:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:22:44.931 11:53:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:44.931 11:53:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:45.190 nvme0n1 00:22:45.190 11:53:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:45.190 11:53:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:22:45.190 11:53:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:45.190 11:53:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:45.190 11:53:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:22:45.190 11:53:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:45.190 11:53:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:45.190 11:53:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:22:45.190 11:53:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:45.190 11:53:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:45.190 11:53:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:45.190 11:53:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:22:45.190 11:53:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe6144 4 00:22:45.190 11:53:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:22:45.190 11:53:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:22:45.190 11:53:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:22:45.190 11:53:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:22:45.190 11:53:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:ZGNlMmJjNmIxMzAwODU5NDQ4ZmU0MDM4ZDc2NDlhZTFmNjkxODgzY2M2YWQ2NDFkMTEzZGNmMTk1MWRlMDI3OPV7tT0=: 00:22:45.190 11:53:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:22:45.190 11:53:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:22:45.190 11:53:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:22:45.190 11:53:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo 
DHHC-1:03:ZGNlMmJjNmIxMzAwODU5NDQ4ZmU0MDM4ZDc2NDlhZTFmNjkxODgzY2M2YWQ2NDFkMTEzZGNmMTk1MWRlMDI3OPV7tT0=: 00:22:45.190 11:53:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:22:45.190 11:53:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe6144 4 00:22:45.190 11:53:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:22:45.190 11:53:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:22:45.190 11:53:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:22:45.190 11:53:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:22:45.190 11:53:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:22:45.190 11:53:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:22:45.190 11:53:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:45.190 11:53:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:45.449 11:53:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:45.449 11:53:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:22:45.449 11:53:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:22:45.449 11:53:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:22:45.449 11:53:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:22:45.449 11:53:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:22:45.449 11:53:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:22:45.449 11:53:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:22:45.449 11:53:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:22:45.449 11:53:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:22:45.449 11:53:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:22:45.449 11:53:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:22:45.449 11:53:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:22:45.449 11:53:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:45.449 11:53:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:45.709 nvme0n1 00:22:45.709 11:53:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:45.709 11:53:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:22:45.709 11:53:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:45.709 11:53:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:22:45.709 11:53:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:45.709 11:53:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:45.709 11:53:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:45.709 11:53:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:22:45.709 11:53:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:45.709 11:53:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:45.709 11:53:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:45.709 11:53:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:22:45.709 11:53:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:22:45.709 11:53:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe8192 0 00:22:45.709 11:53:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:22:45.709 11:53:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:22:45.709 11:53:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:22:45.709 11:53:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:22:45.709 11:53:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NzU1YmQwNTlmZGE0MTIzMzJkNjkzZjM3N2Q1YTVkYTlGSJpb: 00:22:45.709 11:53:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:MDQyODRiZDIwNjk3YzU5MGViNDg3NDkxYjEzYzUzZWNiOWRkMTg5NDBmMGU1NTQ2OTA3M2U1ZmE0MTE3NTE4MNwSXOU=: 00:22:45.709 11:53:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:22:45.709 11:53:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:22:45.709 11:53:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NzU1YmQwNTlmZGE0MTIzMzJkNjkzZjM3N2Q1YTVkYTlGSJpb: 00:22:45.709 11:53:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:MDQyODRiZDIwNjk3YzU5MGViNDg3NDkxYjEzYzUzZWNiOWRkMTg5NDBmMGU1NTQ2OTA3M2U1ZmE0MTE3NTE4MNwSXOU=: ]] 00:22:45.709 11:53:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:MDQyODRiZDIwNjk3YzU5MGViNDg3NDkxYjEzYzUzZWNiOWRkMTg5NDBmMGU1NTQ2OTA3M2U1ZmE0MTE3NTE4MNwSXOU=: 00:22:45.709 11:53:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe8192 0 00:22:45.709 11:53:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:22:45.709 11:53:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:22:45.709 11:53:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:22:45.709 11:53:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:22:45.709 11:53:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:22:45.709 11:53:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:22:45.709 11:53:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:45.709 11:53:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:45.709 11:53:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:45.709 11:53:15 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:22:45.709 11:53:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:22:45.709 11:53:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:22:45.709 11:53:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:22:45.709 11:53:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:22:45.709 11:53:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:22:45.709 11:53:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:22:45.709 11:53:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:22:45.709 11:53:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:22:45.709 11:53:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:22:45.709 11:53:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:22:45.709 11:53:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:22:45.709 11:53:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:45.709 11:53:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:46.277 nvme0n1 00:22:46.277 11:53:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:46.277 11:53:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:22:46.277 11:53:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:46.277 11:53:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:46.277 11:53:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:22:46.277 11:53:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:46.277 11:53:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:46.277 11:53:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:22:46.277 11:53:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:46.277 11:53:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:46.277 11:53:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:46.277 11:53:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:22:46.277 11:53:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe8192 1 00:22:46.277 11:53:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:22:46.277 11:53:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:22:46.277 11:53:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:22:46.278 11:53:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:22:46.278 11:53:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # 
key=DHHC-1:00:NWE4MmY0OTIxMDRkYjA5OGFlZjQ0MjQwOWY4YTcwMjU0NTBiNWIyYjY2MjE4YWE25utBXg==: 00:22:46.278 11:53:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:YmIzMDNiZjE2NDFiMzJjZGVmNTQ3OTU0ZTEyZDJjMjBiYzEyZmRjMGU4Yzg0ZWExGGWPVA==: 00:22:46.278 11:53:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:22:46.278 11:53:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:22:46.278 11:53:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NWE4MmY0OTIxMDRkYjA5OGFlZjQ0MjQwOWY4YTcwMjU0NTBiNWIyYjY2MjE4YWE25utBXg==: 00:22:46.278 11:53:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:YmIzMDNiZjE2NDFiMzJjZGVmNTQ3OTU0ZTEyZDJjMjBiYzEyZmRjMGU4Yzg0ZWExGGWPVA==: ]] 00:22:46.278 11:53:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:YmIzMDNiZjE2NDFiMzJjZGVmNTQ3OTU0ZTEyZDJjMjBiYzEyZmRjMGU4Yzg0ZWExGGWPVA==: 00:22:46.278 11:53:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe8192 1 00:22:46.278 11:53:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:22:46.278 11:53:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:22:46.278 11:53:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:22:46.278 11:53:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:22:46.278 11:53:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:22:46.278 11:53:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:22:46.278 11:53:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:46.278 11:53:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:46.278 11:53:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:46.278 11:53:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:22:46.278 11:53:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:22:46.278 11:53:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:22:46.278 11:53:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:22:46.278 11:53:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:22:46.278 11:53:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:22:46.278 11:53:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:22:46.278 11:53:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:22:46.278 11:53:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:22:46.278 11:53:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:22:46.278 11:53:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:22:46.278 11:53:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:22:46.278 11:53:16 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:46.278 11:53:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:46.846 nvme0n1 00:22:46.846 11:53:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:46.846 11:53:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:22:46.846 11:53:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:22:46.846 11:53:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:46.846 11:53:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:46.846 11:53:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:46.846 11:53:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:46.846 11:53:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:22:46.846 11:53:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:46.846 11:53:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:46.846 11:53:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:46.846 11:53:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:22:46.846 11:53:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe8192 2 00:22:46.846 11:53:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:22:46.846 11:53:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:22:46.846 11:53:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:22:46.846 11:53:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:22:46.846 11:53:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:YjRjOGNmYjNlOWE5MjkwYzMyNGNhMjRiM2ZlMjk4Zjl7KAxS: 00:22:46.846 11:53:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:MTU1NmE1YzA3OTRhNWZlZGEyNjE4Njg3MWM3YTQzNThceavh: 00:22:46.846 11:53:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:22:46.846 11:53:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:22:46.846 11:53:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:YjRjOGNmYjNlOWE5MjkwYzMyNGNhMjRiM2ZlMjk4Zjl7KAxS: 00:22:46.846 11:53:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:MTU1NmE1YzA3OTRhNWZlZGEyNjE4Njg3MWM3YTQzNThceavh: ]] 00:22:46.846 11:53:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:MTU1NmE1YzA3OTRhNWZlZGEyNjE4Njg3MWM3YTQzNThceavh: 00:22:46.846 11:53:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe8192 2 00:22:46.846 11:53:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:22:46.846 11:53:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:22:46.846 11:53:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:22:46.846 11:53:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:22:46.846 11:53:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # 
ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:22:46.846 11:53:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:22:46.846 11:53:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:46.846 11:53:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:46.846 11:53:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:46.846 11:53:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:22:46.846 11:53:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:22:46.846 11:53:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:22:46.846 11:53:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:22:46.846 11:53:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:22:46.846 11:53:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:22:46.846 11:53:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:22:46.846 11:53:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:22:46.846 11:53:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:22:46.846 11:53:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:22:46.846 11:53:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:22:46.846 11:53:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:22:46.846 11:53:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:46.846 11:53:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:47.414 nvme0n1 00:22:47.414 11:53:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:47.414 11:53:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:22:47.414 11:53:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:47.414 11:53:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:22:47.414 11:53:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:47.414 11:53:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:47.414 11:53:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:47.414 11:53:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:22:47.415 11:53:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:47.415 11:53:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:47.415 11:53:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:47.415 11:53:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:22:47.415 11:53:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # 
nvmet_auth_set_key sha512 ffdhe8192 3 00:22:47.415 11:53:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:22:47.415 11:53:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:22:47.415 11:53:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:22:47.415 11:53:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:22:47.415 11:53:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:MTI1ZTRmMTBiM2M3NWI4ZGY2YzI5NzVmMTY3NDhhM2MxMTgyZmZhNWM3MzEzNWZmmoyemg==: 00:22:47.415 11:53:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:NzMwY2FhZjZmZTlmYzU3ZjhiMGU4NDBlY2M2NjRlZTRrRaMi: 00:22:47.415 11:53:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:22:47.415 11:53:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:22:47.415 11:53:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:MTI1ZTRmMTBiM2M3NWI4ZGY2YzI5NzVmMTY3NDhhM2MxMTgyZmZhNWM3MzEzNWZmmoyemg==: 00:22:47.415 11:53:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:NzMwY2FhZjZmZTlmYzU3ZjhiMGU4NDBlY2M2NjRlZTRrRaMi: ]] 00:22:47.415 11:53:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:NzMwY2FhZjZmZTlmYzU3ZjhiMGU4NDBlY2M2NjRlZTRrRaMi: 00:22:47.415 11:53:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe8192 3 00:22:47.415 11:53:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:22:47.415 11:53:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:22:47.415 11:53:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:22:47.415 11:53:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:22:47.415 11:53:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:22:47.415 11:53:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:22:47.415 11:53:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:47.415 11:53:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:47.415 11:53:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:47.415 11:53:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:22:47.415 11:53:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:22:47.415 11:53:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:22:47.415 11:53:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:22:47.415 11:53:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:22:47.415 11:53:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:22:47.415 11:53:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:22:47.415 11:53:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:22:47.415 11:53:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:22:47.415 11:53:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:22:47.415 11:53:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:22:47.415 11:53:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:22:47.415 11:53:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:47.415 11:53:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:47.984 nvme0n1 00:22:47.984 11:53:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:47.984 11:53:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:22:47.984 11:53:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:22:47.984 11:53:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:47.984 11:53:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:47.984 11:53:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:47.984 11:53:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:47.984 11:53:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:22:47.984 11:53:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:47.984 11:53:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:47.984 11:53:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:47.984 11:53:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:22:47.984 11:53:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe8192 4 00:22:47.984 11:53:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:22:47.984 11:53:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:22:47.984 11:53:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:22:47.984 11:53:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:22:47.984 11:53:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:ZGNlMmJjNmIxMzAwODU5NDQ4ZmU0MDM4ZDc2NDlhZTFmNjkxODgzY2M2YWQ2NDFkMTEzZGNmMTk1MWRlMDI3OPV7tT0=: 00:22:47.984 11:53:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:22:47.984 11:53:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:22:47.984 11:53:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:22:47.984 11:53:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:ZGNlMmJjNmIxMzAwODU5NDQ4ZmU0MDM4ZDc2NDlhZTFmNjkxODgzY2M2YWQ2NDFkMTEzZGNmMTk1MWRlMDI3OPV7tT0=: 00:22:47.984 11:53:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:22:47.984 11:53:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe8192 4 00:22:47.984 11:53:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:22:47.984 11:53:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:22:47.984 11:53:18 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:22:47.984 11:53:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:22:47.984 11:53:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:22:47.984 11:53:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:22:47.984 11:53:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:47.984 11:53:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:47.984 11:53:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:47.984 11:53:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:22:47.984 11:53:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:22:47.984 11:53:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:22:47.984 11:53:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:22:47.984 11:53:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:22:47.984 11:53:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:22:47.984 11:53:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:22:47.984 11:53:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:22:47.984 11:53:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:22:47.984 11:53:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:22:47.984 11:53:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:22:47.984 11:53:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:22:47.984 11:53:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:47.984 11:53:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:48.553 nvme0n1 00:22:48.553 11:53:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:48.553 11:53:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:22:48.553 11:53:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:48.553 11:53:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:48.553 11:53:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:22:48.553 11:53:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:48.553 11:53:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:48.553 11:53:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:22:48.553 11:53:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:48.553 11:53:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:48.553 11:53:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:48.553 11:53:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@110 -- # nvmet_auth_set_key sha256 ffdhe2048 1 00:22:48.553 11:53:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:22:48.553 11:53:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:22:48.553 11:53:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:22:48.553 11:53:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:22:48.553 11:53:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NWE4MmY0OTIxMDRkYjA5OGFlZjQ0MjQwOWY4YTcwMjU0NTBiNWIyYjY2MjE4YWE25utBXg==: 00:22:48.553 11:53:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:YmIzMDNiZjE2NDFiMzJjZGVmNTQ3OTU0ZTEyZDJjMjBiYzEyZmRjMGU4Yzg0ZWExGGWPVA==: 00:22:48.553 11:53:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:22:48.553 11:53:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:22:48.553 11:53:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NWE4MmY0OTIxMDRkYjA5OGFlZjQ0MjQwOWY4YTcwMjU0NTBiNWIyYjY2MjE4YWE25utBXg==: 00:22:48.553 11:53:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:YmIzMDNiZjE2NDFiMzJjZGVmNTQ3OTU0ZTEyZDJjMjBiYzEyZmRjMGU4Yzg0ZWExGGWPVA==: ]] 00:22:48.553 11:53:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:YmIzMDNiZjE2NDFiMzJjZGVmNTQ3OTU0ZTEyZDJjMjBiYzEyZmRjMGU4Yzg0ZWExGGWPVA==: 00:22:48.553 11:53:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@111 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:22:48.553 11:53:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:48.553 11:53:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:48.553 11:53:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:48.553 11:53:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@112 -- # get_main_ns_ip 00:22:48.553 11:53:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:22:48.553 11:53:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:22:48.553 11:53:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:22:48.553 11:53:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:22:48.553 11:53:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:22:48.553 11:53:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:22:48.553 11:53:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:22:48.553 11:53:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:22:48.553 11:53:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:22:48.553 11:53:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:22:48.553 11:53:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@112 -- # NOT rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 00:22:48.553 11:53:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@652 -- # 
local es=0 00:22:48.553 11:53:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 00:22:48.553 11:53:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:22:48.553 11:53:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:22:48.553 11:53:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:22:48.553 11:53:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:22:48.553 11:53:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@655 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 00:22:48.553 11:53:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:48.553 11:53:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:48.553 request: 00:22:48.553 { 00:22:48.553 "name": "nvme0", 00:22:48.553 "trtype": "tcp", 00:22:48.553 "traddr": "10.0.0.1", 00:22:48.553 "adrfam": "ipv4", 00:22:48.553 "trsvcid": "4420", 00:22:48.553 "subnqn": "nqn.2024-02.io.spdk:cnode0", 00:22:48.553 "hostnqn": "nqn.2024-02.io.spdk:host0", 00:22:48.553 "prchk_reftag": false, 00:22:48.553 "prchk_guard": false, 00:22:48.553 "hdgst": false, 00:22:48.553 "ddgst": false, 00:22:48.553 "allow_unrecognized_csi": false, 00:22:48.553 "method": "bdev_nvme_attach_controller", 00:22:48.553 "req_id": 1 00:22:48.553 } 00:22:48.553 Got JSON-RPC error response 00:22:48.553 response: 00:22:48.553 { 00:22:48.553 "code": -5, 00:22:48.553 "message": "Input/output error" 00:22:48.553 } 00:22:48.553 11:53:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:22:48.553 11:53:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@655 -- # es=1 00:22:48.553 11:53:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:22:48.553 11:53:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:22:48.553 11:53:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:22:48.553 11:53:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@114 -- # rpc_cmd bdev_nvme_get_controllers 00:22:48.553 11:53:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@114 -- # jq length 00:22:48.553 11:53:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:48.553 11:53:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:48.553 11:53:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:48.813 11:53:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@114 -- # (( 0 == 0 )) 00:22:48.813 11:53:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@117 -- # get_main_ns_ip 00:22:48.813 11:53:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:22:48.813 11:53:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:22:48.813 11:53:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:22:48.813 11:53:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # 
ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:22:48.813 11:53:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:22:48.813 11:53:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:22:48.813 11:53:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:22:48.813 11:53:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:22:48.813 11:53:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:22:48.813 11:53:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:22:48.813 11:53:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@117 -- # NOT rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 00:22:48.813 11:53:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@652 -- # local es=0 00:22:48.813 11:53:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 00:22:48.813 11:53:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:22:48.813 11:53:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:22:48.813 11:53:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:22:48.813 11:53:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:22:48.813 11:53:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@655 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 00:22:48.813 11:53:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:48.813 11:53:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:48.813 request: 00:22:48.813 { 00:22:48.813 "name": "nvme0", 00:22:48.813 "trtype": "tcp", 00:22:48.813 "traddr": "10.0.0.1", 00:22:48.813 "adrfam": "ipv4", 00:22:48.813 "trsvcid": "4420", 00:22:48.813 "subnqn": "nqn.2024-02.io.spdk:cnode0", 00:22:48.813 "hostnqn": "nqn.2024-02.io.spdk:host0", 00:22:48.813 "prchk_reftag": false, 00:22:48.813 "prchk_guard": false, 00:22:48.813 "hdgst": false, 00:22:48.813 "ddgst": false, 00:22:48.813 "dhchap_key": "key2", 00:22:48.813 "allow_unrecognized_csi": false, 00:22:48.813 "method": "bdev_nvme_attach_controller", 00:22:48.813 "req_id": 1 00:22:48.813 } 00:22:48.813 Got JSON-RPC error response 00:22:48.813 response: 00:22:48.813 { 00:22:48.813 "code": -5, 00:22:48.813 "message": "Input/output error" 00:22:48.813 } 00:22:48.813 11:53:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:22:48.813 11:53:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@655 -- # es=1 00:22:48.813 11:53:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:22:48.813 11:53:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:22:48.813 11:53:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:22:48.813 11:53:18 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@120 -- # rpc_cmd bdev_nvme_get_controllers 00:22:48.813 11:53:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:48.813 11:53:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@120 -- # jq length 00:22:48.813 11:53:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:48.813 11:53:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:48.813 11:53:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@120 -- # (( 0 == 0 )) 00:22:48.813 11:53:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@123 -- # get_main_ns_ip 00:22:48.813 11:53:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:22:48.813 11:53:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:22:48.813 11:53:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:22:48.813 11:53:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:22:48.813 11:53:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:22:48.813 11:53:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:22:48.813 11:53:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:22:48.814 11:53:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:22:48.814 11:53:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:22:48.814 11:53:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:22:48.814 11:53:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@123 -- # NOT rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:22:48.814 11:53:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@652 -- # local es=0 00:22:48.814 11:53:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:22:48.814 11:53:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:22:48.814 11:53:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:22:48.814 11:53:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:22:48.814 11:53:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:22:48.814 11:53:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@655 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:22:48.814 11:53:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:48.814 11:53:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:48.814 request: 00:22:48.814 { 00:22:48.814 "name": "nvme0", 00:22:48.814 "trtype": "tcp", 00:22:48.814 "traddr": "10.0.0.1", 00:22:48.814 "adrfam": "ipv4", 00:22:48.814 "trsvcid": "4420", 
00:22:48.814 "subnqn": "nqn.2024-02.io.spdk:cnode0", 00:22:48.814 "hostnqn": "nqn.2024-02.io.spdk:host0", 00:22:48.814 "prchk_reftag": false, 00:22:48.814 "prchk_guard": false, 00:22:48.814 "hdgst": false, 00:22:48.814 "ddgst": false, 00:22:48.814 "dhchap_key": "key1", 00:22:48.814 "dhchap_ctrlr_key": "ckey2", 00:22:48.814 "allow_unrecognized_csi": false, 00:22:48.814 "method": "bdev_nvme_attach_controller", 00:22:48.814 "req_id": 1 00:22:48.814 } 00:22:48.814 Got JSON-RPC error response 00:22:48.814 response: 00:22:48.814 { 00:22:48.814 "code": -5, 00:22:48.814 "message": "Input/output error" 00:22:48.814 } 00:22:48.814 11:53:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:22:48.814 11:53:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@655 -- # es=1 00:22:48.814 11:53:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:22:48.814 11:53:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:22:48.814 11:53:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:22:48.814 11:53:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@128 -- # get_main_ns_ip 00:22:48.814 11:53:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:22:48.814 11:53:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:22:48.814 11:53:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:22:48.814 11:53:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:22:48.814 11:53:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:22:48.814 11:53:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:22:48.814 11:53:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:22:48.814 11:53:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:22:48.814 11:53:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:22:48.814 11:53:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:22:48.814 11:53:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@128 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 --ctrlr-loss-timeout-sec 1 --reconnect-delay-sec 1 00:22:48.814 11:53:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:48.814 11:53:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:49.073 nvme0n1 00:22:49.073 11:53:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:49.073 11:53:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@132 -- # nvmet_auth_set_key sha256 ffdhe2048 2 00:22:49.073 11:53:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:22:49.073 11:53:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:22:49.073 11:53:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:22:49.073 11:53:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:22:49.074 11:53:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # 
key=DHHC-1:01:YjRjOGNmYjNlOWE5MjkwYzMyNGNhMjRiM2ZlMjk4Zjl7KAxS: 00:22:49.074 11:53:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:MTU1NmE1YzA3OTRhNWZlZGEyNjE4Njg3MWM3YTQzNThceavh: 00:22:49.074 11:53:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:22:49.074 11:53:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:22:49.074 11:53:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:YjRjOGNmYjNlOWE5MjkwYzMyNGNhMjRiM2ZlMjk4Zjl7KAxS: 00:22:49.074 11:53:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:MTU1NmE1YzA3OTRhNWZlZGEyNjE4Njg3MWM3YTQzNThceavh: ]] 00:22:49.074 11:53:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:MTU1NmE1YzA3OTRhNWZlZGEyNjE4Njg3MWM3YTQzNThceavh: 00:22:49.074 11:53:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@133 -- # rpc_cmd bdev_nvme_set_keys nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:22:49.074 11:53:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:49.074 11:53:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:49.074 11:53:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:49.074 11:53:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@134 -- # rpc_cmd bdev_nvme_get_controllers 00:22:49.074 11:53:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@134 -- # jq -r '.[].name' 00:22:49.074 11:53:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:49.074 11:53:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:49.074 11:53:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:49.074 11:53:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@134 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:49.074 11:53:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@136 -- # NOT rpc_cmd bdev_nvme_set_keys nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:22:49.074 11:53:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@652 -- # local es=0 00:22:49.074 11:53:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd bdev_nvme_set_keys nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:22:49.074 11:53:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:22:49.074 11:53:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:22:49.074 11:53:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:22:49.074 11:53:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:22:49.074 11:53:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@655 -- # rpc_cmd bdev_nvme_set_keys nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:22:49.074 11:53:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:49.074 11:53:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:49.074 request: 00:22:49.074 { 00:22:49.074 "name": "nvme0", 00:22:49.074 "dhchap_key": "key1", 00:22:49.074 "dhchap_ctrlr_key": "ckey2", 00:22:49.074 "method": "bdev_nvme_set_keys", 00:22:49.074 "req_id": 1 00:22:49.074 } 00:22:49.074 Got JSON-RPC error response 00:22:49.074 response: 00:22:49.074 
{ 00:22:49.074 "code": -13, 00:22:49.074 "message": "Permission denied" 00:22:49.074 } 00:22:49.074 11:53:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:22:49.074 11:53:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@655 -- # es=1 00:22:49.074 11:53:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:22:49.074 11:53:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:22:49.074 11:53:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:22:49.074 11:53:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@137 -- # rpc_cmd bdev_nvme_get_controllers 00:22:49.074 11:53:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@137 -- # jq length 00:22:49.074 11:53:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:49.074 11:53:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:49.074 11:53:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:49.074 11:53:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@137 -- # (( 1 != 0 )) 00:22:49.074 11:53:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@138 -- # sleep 1s 00:22:50.011 11:53:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@137 -- # rpc_cmd bdev_nvme_get_controllers 00:22:50.011 11:53:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@137 -- # jq length 00:22:50.011 11:53:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:50.011 11:53:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:50.011 11:53:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:50.270 11:53:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@137 -- # (( 0 != 0 )) 00:22:50.270 11:53:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@141 -- # nvmet_auth_set_key sha256 ffdhe2048 1 00:22:50.270 11:53:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:22:50.270 11:53:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:22:50.270 11:53:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:22:50.270 11:53:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:22:50.270 11:53:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NWE4MmY0OTIxMDRkYjA5OGFlZjQ0MjQwOWY4YTcwMjU0NTBiNWIyYjY2MjE4YWE25utBXg==: 00:22:50.270 11:53:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:YmIzMDNiZjE2NDFiMzJjZGVmNTQ3OTU0ZTEyZDJjMjBiYzEyZmRjMGU4Yzg0ZWExGGWPVA==: 00:22:50.270 11:53:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:22:50.270 11:53:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:22:50.270 11:53:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NWE4MmY0OTIxMDRkYjA5OGFlZjQ0MjQwOWY4YTcwMjU0NTBiNWIyYjY2MjE4YWE25utBXg==: 00:22:50.270 11:53:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:YmIzMDNiZjE2NDFiMzJjZGVmNTQ3OTU0ZTEyZDJjMjBiYzEyZmRjMGU4Yzg0ZWExGGWPVA==: ]] 00:22:50.270 11:53:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:YmIzMDNiZjE2NDFiMzJjZGVmNTQ3OTU0ZTEyZDJjMjBiYzEyZmRjMGU4Yzg0ZWExGGWPVA==: 00:22:50.270 11:53:20 nvmf_tcp.nvmf_host.nvmf_auth_host 
-- host/auth.sh@142 -- # get_main_ns_ip 00:22:50.270 11:53:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:22:50.270 11:53:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:22:50.270 11:53:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:22:50.270 11:53:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:22:50.270 11:53:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:22:50.270 11:53:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:22:50.270 11:53:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:22:50.270 11:53:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:22:50.270 11:53:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:22:50.270 11:53:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:22:50.270 11:53:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@142 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 --ctrlr-loss-timeout-sec 1 --reconnect-delay-sec 1 00:22:50.270 11:53:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:50.270 11:53:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:50.270 nvme0n1 00:22:50.270 11:53:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:50.270 11:53:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@146 -- # nvmet_auth_set_key sha256 ffdhe2048 2 00:22:50.270 11:53:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:22:50.270 11:53:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:22:50.270 11:53:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:22:50.270 11:53:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:22:50.270 11:53:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:YjRjOGNmYjNlOWE5MjkwYzMyNGNhMjRiM2ZlMjk4Zjl7KAxS: 00:22:50.270 11:53:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:MTU1NmE1YzA3OTRhNWZlZGEyNjE4Njg3MWM3YTQzNThceavh: 00:22:50.270 11:53:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:22:50.270 11:53:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:22:50.270 11:53:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:YjRjOGNmYjNlOWE5MjkwYzMyNGNhMjRiM2ZlMjk4Zjl7KAxS: 00:22:50.270 11:53:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:MTU1NmE1YzA3OTRhNWZlZGEyNjE4Njg3MWM3YTQzNThceavh: ]] 00:22:50.270 11:53:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:MTU1NmE1YzA3OTRhNWZlZGEyNjE4Njg3MWM3YTQzNThceavh: 00:22:50.270 11:53:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@147 -- # NOT rpc_cmd bdev_nvme_set_keys nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey1 00:22:50.270 11:53:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@652 -- # local es=0 00:22:50.270 11:53:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@654 -- # 
valid_exec_arg rpc_cmd bdev_nvme_set_keys nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey1 00:22:50.270 11:53:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:22:50.270 11:53:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:22:50.270 11:53:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:22:50.270 11:53:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:22:50.270 11:53:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@655 -- # rpc_cmd bdev_nvme_set_keys nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey1 00:22:50.270 11:53:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:50.270 11:53:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:50.270 request: 00:22:50.270 { 00:22:50.270 "name": "nvme0", 00:22:50.270 "dhchap_key": "key2", 00:22:50.270 "dhchap_ctrlr_key": "ckey1", 00:22:50.270 "method": "bdev_nvme_set_keys", 00:22:50.270 "req_id": 1 00:22:50.270 } 00:22:50.270 Got JSON-RPC error response 00:22:50.270 response: 00:22:50.270 { 00:22:50.270 "code": -13, 00:22:50.270 "message": "Permission denied" 00:22:50.270 } 00:22:50.270 11:53:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:22:50.270 11:53:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@655 -- # es=1 00:22:50.270 11:53:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:22:50.270 11:53:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:22:50.270 11:53:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:22:50.270 11:53:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@148 -- # jq length 00:22:50.270 11:53:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@148 -- # rpc_cmd bdev_nvme_get_controllers 00:22:50.270 11:53:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:50.270 11:53:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:50.270 11:53:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:50.270 11:53:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@148 -- # (( 1 != 0 )) 00:22:50.270 11:53:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@149 -- # sleep 1s 00:22:51.648 11:53:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@148 -- # rpc_cmd bdev_nvme_get_controllers 00:22:51.648 11:53:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@148 -- # jq length 00:22:51.648 11:53:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:51.648 11:53:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:51.648 11:53:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:51.648 11:53:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@148 -- # (( 0 != 0 )) 00:22:51.648 11:53:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@152 -- # trap - SIGINT SIGTERM EXIT 00:22:51.648 11:53:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@153 -- # cleanup 00:22:51.648 11:53:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@24 -- # nvmftestfini 00:22:51.648 11:53:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@516 -- # 
nvmfcleanup 00:22:51.648 11:53:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@121 -- # sync 00:22:51.648 11:53:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:22:51.648 11:53:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@124 -- # set +e 00:22:51.648 11:53:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@125 -- # for i in {1..20} 00:22:51.648 11:53:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:22:51.648 rmmod nvme_tcp 00:22:51.648 rmmod nvme_fabrics 00:22:51.648 11:53:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:22:51.648 11:53:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@128 -- # set -e 00:22:51.648 11:53:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@129 -- # return 0 00:22:51.648 11:53:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@517 -- # '[' -n 95360 ']' 00:22:51.648 11:53:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@518 -- # killprocess 95360 00:22:51.648 11:53:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@954 -- # '[' -z 95360 ']' 00:22:51.648 11:53:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@958 -- # kill -0 95360 00:22:51.648 11:53:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@959 -- # uname 00:22:51.648 11:53:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:22:51.648 11:53:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 95360 00:22:51.648 killing process with pid 95360 00:22:51.648 11:53:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:22:51.648 11:53:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:22:51.648 11:53:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@972 -- # echo 'killing process with pid 95360' 00:22:51.648 11:53:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@973 -- # kill 95360 00:22:51.648 11:53:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@978 -- # wait 95360 00:22:51.648 11:53:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:22:51.648 11:53:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:22:51.648 11:53:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:22:51.648 11:53:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@297 -- # iptr 00:22:51.648 11:53:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@791 -- # iptables-save 00:22:51.648 11:53:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:22:51.648 11:53:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@791 -- # iptables-restore 00:22:51.648 11:53:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@298 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:22:51.648 11:53:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@299 -- # nvmf_veth_fini 00:22:51.648 11:53:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@233 -- # ip link set nvmf_init_br nomaster 00:22:51.648 11:53:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@234 -- # ip link set nvmf_init_br2 nomaster 00:22:51.648 11:53:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@235 -- # ip link set nvmf_tgt_br nomaster 00:22:51.648 11:53:21 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@236 -- # ip link set nvmf_tgt_br2 nomaster 00:22:51.908 11:53:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@237 -- # ip link set nvmf_init_br down 00:22:51.908 11:53:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@238 -- # ip link set nvmf_init_br2 down 00:22:51.908 11:53:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@239 -- # ip link set nvmf_tgt_br down 00:22:51.909 11:53:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@240 -- # ip link set nvmf_tgt_br2 down 00:22:51.909 11:53:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@241 -- # ip link delete nvmf_br type bridge 00:22:51.909 11:53:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@242 -- # ip link delete nvmf_init_if 00:22:51.909 11:53:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@243 -- # ip link delete nvmf_init_if2 00:22:51.909 11:53:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@244 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:22:51.909 11:53:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@245 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:22:51.909 11:53:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@246 -- # remove_spdk_ns 00:22:51.909 11:53:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:22:51.909 11:53:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:22:51.909 11:53:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:22:51.909 11:53:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@300 -- # return 0 00:22:51.909 11:53:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@25 -- # rm /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0/allowed_hosts/nqn.2024-02.io.spdk:host0 00:22:51.909 11:53:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@26 -- # rmdir /sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0 00:22:51.909 11:53:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@27 -- # clean_kernel_target 00:22:51.909 11:53:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@712 -- # [[ -e /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 ]] 00:22:51.909 11:53:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@714 -- # echo 0 00:22:51.909 11:53:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@716 -- # rm -f /sys/kernel/config/nvmet/ports/1/subsystems/nqn.2024-02.io.spdk:cnode0 00:22:51.909 11:53:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@717 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0/namespaces/1 00:22:51.909 11:53:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@718 -- # rmdir /sys/kernel/config/nvmet/ports/1 00:22:51.909 11:53:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@719 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 00:22:51.909 11:53:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@721 -- # modules=(/sys/module/nvmet/holders/*) 00:22:51.909 11:53:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@723 -- # modprobe -r nvmet_tcp nvmet 00:22:51.909 11:53:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@726 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:22:52.846 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:22:52.846 0000:00:10.0 (1b36 0010): nvme -> uio_pci_generic 
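[note] The cleanup traced above removes the kernel nvmet target that host/auth.sh had configured: the host NQN is unlinked from the subsystem's allowed_hosts and deleted from /sys/kernel/config/nvmet/hosts, then clean_kernel_target walks the configfs tree in reverse order of creation before unloading the kernel modules. A minimal standalone sketch of that ordering, mirroring the commands shown in the trace (this is not the nvmf/common.sh implementation itself, and it assumes the same single subsystem with namespace 1 and port 1):

sub=/sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0
rm -f /sys/kernel/config/nvmet/ports/1/subsystems/nqn.2024-02.io.spdk:cnode0  # unlink the port from the subsystem first
rmdir "$sub"/namespaces/1                                                     # then drop the namespace
rmdir /sys/kernel/config/nvmet/ports/1                                        # then the port
rmdir "$sub"                                                                  # and finally the subsystem itself
modprobe -r nvmet_tcp nvmet                                                   # unload the kernel target modules last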
00:22:52.846 0000:00:11.0 (1b36 0010): nvme -> uio_pci_generic 00:22:52.846 11:53:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@28 -- # rm -f /tmp/spdk.key-null.KBM /tmp/spdk.key-null.tnJ /tmp/spdk.key-sha256.glx /tmp/spdk.key-sha384.uYf /tmp/spdk.key-sha512.hRE /home/vagrant/spdk_repo/spdk/../output/nvme-auth.log 00:22:52.846 11:53:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:22:53.415 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:22:53.415 0000:00:11.0 (1b36 0010): Already using the uio_pci_generic driver 00:22:53.415 0000:00:10.0 (1b36 0010): Already using the uio_pci_generic driver 00:22:53.415 ************************************ 00:22:53.415 END TEST nvmf_auth_host 00:22:53.415 ************************************ 00:22:53.415 00:22:53.415 real 0m34.712s 00:22:53.415 user 0m32.080s 00:22:53.415 sys 0m3.979s 00:22:53.415 11:53:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1130 -- # xtrace_disable 00:22:53.415 11:53:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:53.415 11:53:23 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@32 -- # [[ tcp == \t\c\p ]] 00:22:53.415 11:53:23 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@33 -- # run_test nvmf_digest /home/vagrant/spdk_repo/spdk/test/nvmf/host/digest.sh --transport=tcp 00:22:53.415 11:53:23 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:22:53.415 11:53:23 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1111 -- # xtrace_disable 00:22:53.415 11:53:23 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:22:53.415 ************************************ 00:22:53.415 START TEST nvmf_digest 00:22:53.415 ************************************ 00:22:53.415 11:53:23 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/host/digest.sh --transport=tcp 00:22:53.676 * Looking for test storage... 
00:22:53.676 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/host 00:22:53.676 11:53:23 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:22:53.676 11:53:23 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1693 -- # lcov --version 00:22:53.676 11:53:23 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:22:53.676 11:53:23 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:22:53.676 11:53:23 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:22:53.676 11:53:23 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@333 -- # local ver1 ver1_l 00:22:53.676 11:53:23 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@334 -- # local ver2 ver2_l 00:22:53.676 11:53:23 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@336 -- # IFS=.-: 00:22:53.676 11:53:23 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@336 -- # read -ra ver1 00:22:53.676 11:53:23 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@337 -- # IFS=.-: 00:22:53.676 11:53:23 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@337 -- # read -ra ver2 00:22:53.676 11:53:23 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@338 -- # local 'op=<' 00:22:53.676 11:53:23 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@340 -- # ver1_l=2 00:22:53.676 11:53:23 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@341 -- # ver2_l=1 00:22:53.676 11:53:23 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:22:53.676 11:53:23 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@344 -- # case "$op" in 00:22:53.676 11:53:23 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@345 -- # : 1 00:22:53.676 11:53:23 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@364 -- # (( v = 0 )) 00:22:53.676 11:53:23 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:22:53.676 11:53:23 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@365 -- # decimal 1 00:22:53.676 11:53:23 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@353 -- # local d=1 00:22:53.676 11:53:23 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:22:53.676 11:53:23 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@355 -- # echo 1 00:22:53.676 11:53:23 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@365 -- # ver1[v]=1 00:22:53.676 11:53:23 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@366 -- # decimal 2 00:22:53.676 11:53:23 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@353 -- # local d=2 00:22:53.676 11:53:23 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:22:53.676 11:53:23 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@355 -- # echo 2 00:22:53.676 11:53:23 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@366 -- # ver2[v]=2 00:22:53.676 11:53:23 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:22:53.676 11:53:23 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:22:53.676 11:53:23 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@368 -- # return 0 00:22:53.676 11:53:23 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:22:53.676 11:53:23 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:22:53.676 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:22:53.676 --rc genhtml_branch_coverage=1 00:22:53.676 --rc genhtml_function_coverage=1 00:22:53.676 --rc genhtml_legend=1 00:22:53.676 --rc geninfo_all_blocks=1 00:22:53.676 --rc geninfo_unexecuted_blocks=1 00:22:53.676 00:22:53.676 ' 00:22:53.676 11:53:23 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:22:53.676 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:22:53.676 --rc genhtml_branch_coverage=1 00:22:53.676 --rc genhtml_function_coverage=1 00:22:53.676 --rc genhtml_legend=1 00:22:53.676 --rc geninfo_all_blocks=1 00:22:53.676 --rc geninfo_unexecuted_blocks=1 00:22:53.676 00:22:53.676 ' 00:22:53.676 11:53:23 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:22:53.676 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:22:53.676 --rc genhtml_branch_coverage=1 00:22:53.676 --rc genhtml_function_coverage=1 00:22:53.676 --rc genhtml_legend=1 00:22:53.676 --rc geninfo_all_blocks=1 00:22:53.676 --rc geninfo_unexecuted_blocks=1 00:22:53.676 00:22:53.676 ' 00:22:53.676 11:53:23 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:22:53.676 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:22:53.676 --rc genhtml_branch_coverage=1 00:22:53.676 --rc genhtml_function_coverage=1 00:22:53.676 --rc genhtml_legend=1 00:22:53.676 --rc geninfo_all_blocks=1 00:22:53.676 --rc geninfo_unexecuted_blocks=1 00:22:53.676 00:22:53.676 ' 00:22:53.676 11:53:23 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@12 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:22:53.676 11:53:23 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@7 -- # uname -s 00:22:53.676 11:53:23 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:22:53.676 11:53:23 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:22:53.676 11:53:23 
nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:22:53.676 11:53:23 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:22:53.676 11:53:23 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:22:53.676 11:53:23 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:22:53.676 11:53:23 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:22:53.676 11:53:23 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:22:53.676 11:53:23 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:22:53.676 11:53:23 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:22:53.676 11:53:23 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:f820f793-c892-4aa4-a8a4-5ed3fda41d6c 00:22:53.676 11:53:23 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@18 -- # NVME_HOSTID=f820f793-c892-4aa4-a8a4-5ed3fda41d6c 00:22:53.676 11:53:23 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:22:53.676 11:53:23 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:22:53.676 11:53:23 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:22:53.676 11:53:23 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:22:53.676 11:53:23 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:22:53.676 11:53:23 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@15 -- # shopt -s extglob 00:22:53.676 11:53:23 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:22:53.676 11:53:23 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:22:53.676 11:53:23 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:22:53.676 11:53:23 nvmf_tcp.nvmf_host.nvmf_digest -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:53.676 11:53:23 nvmf_tcp.nvmf_host.nvmf_digest -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:53.676 11:53:23 nvmf_tcp.nvmf_host.nvmf_digest -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:53.676 11:53:23 nvmf_tcp.nvmf_host.nvmf_digest -- paths/export.sh@5 -- # export PATH 00:22:53.677 11:53:23 nvmf_tcp.nvmf_host.nvmf_digest -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:53.677 11:53:23 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@51 -- # : 0 00:22:53.677 11:53:23 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:22:53.677 11:53:23 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:22:53.677 11:53:23 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:22:53.677 11:53:23 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:22:53.677 11:53:23 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:22:53.677 11:53:23 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:22:53.677 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:22:53.677 11:53:23 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:22:53.677 11:53:23 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:22:53.677 11:53:23 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@55 -- # have_pci_nics=0 00:22:53.677 11:53:23 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@14 -- # nqn=nqn.2016-06.io.spdk:cnode1 00:22:53.677 11:53:23 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@15 -- # bperfsock=/var/tmp/bperf.sock 00:22:53.677 11:53:23 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@16 -- # runtime=2 00:22:53.677 11:53:23 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@136 -- # [[ tcp != \t\c\p ]] 00:22:53.677 11:53:23 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@138 -- # nvmftestinit 00:22:53.677 11:53:23 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:22:53.677 11:53:23 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:22:53.677 11:53:23 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@476 -- # prepare_net_devs 00:22:53.677 11:53:23 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@438 -- # local -g is_hw=no 00:22:53.677 11:53:23 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@440 -- # remove_spdk_ns 00:22:53.677 11:53:23 nvmf_tcp.nvmf_host.nvmf_digest -- 
nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:22:53.677 11:53:23 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:22:53.677 11:53:23 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:22:53.677 11:53:23 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@442 -- # [[ virt != virt ]] 00:22:53.677 11:53:23 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@444 -- # [[ no == yes ]] 00:22:53.677 11:53:23 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@451 -- # [[ virt == phy ]] 00:22:53.677 11:53:23 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@454 -- # [[ virt == phy-fallback ]] 00:22:53.677 11:53:23 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@459 -- # [[ tcp == tcp ]] 00:22:53.677 11:53:23 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@460 -- # nvmf_veth_init 00:22:53.677 11:53:23 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@145 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:22:53.677 11:53:23 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@146 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.2 00:22:53.677 11:53:23 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@147 -- # NVMF_FIRST_TARGET_IP=10.0.0.3 00:22:53.677 11:53:23 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@148 -- # NVMF_SECOND_TARGET_IP=10.0.0.4 00:22:53.677 11:53:23 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@149 -- # NVMF_INITIATOR_IP=10.0.0.1 00:22:53.677 11:53:23 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@150 -- # NVMF_BRIDGE=nvmf_br 00:22:53.677 11:53:23 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@151 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:22:53.677 11:53:23 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@152 -- # NVMF_INITIATOR_INTERFACE2=nvmf_init_if2 00:22:53.677 11:53:23 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@153 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:22:53.677 11:53:23 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@154 -- # NVMF_INITIATOR_BRIDGE2=nvmf_init_br2 00:22:53.677 11:53:23 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@155 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:22:53.677 11:53:23 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@156 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:22:53.677 11:53:23 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@157 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:22:53.677 11:53:23 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@158 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:22:53.677 11:53:23 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@159 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:22:53.677 11:53:23 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@160 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:22:53.677 11:53:23 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@162 -- # ip link set nvmf_init_br nomaster 00:22:53.677 Cannot find device "nvmf_init_br" 00:22:53.677 11:53:23 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@162 -- # true 00:22:53.677 11:53:23 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@163 -- # ip link set nvmf_init_br2 nomaster 00:22:53.677 Cannot find device "nvmf_init_br2" 00:22:53.677 11:53:23 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@163 -- # true 00:22:53.677 11:53:23 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@164 -- # ip link set nvmf_tgt_br nomaster 00:22:53.677 Cannot find device "nvmf_tgt_br" 00:22:53.677 11:53:23 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@164 -- # true 00:22:53.677 11:53:23 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@165 -- # ip link 
set nvmf_tgt_br2 nomaster 00:22:53.677 Cannot find device "nvmf_tgt_br2" 00:22:53.677 11:53:23 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@165 -- # true 00:22:53.677 11:53:23 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@166 -- # ip link set nvmf_init_br down 00:22:53.677 Cannot find device "nvmf_init_br" 00:22:53.677 11:53:23 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@166 -- # true 00:22:53.677 11:53:23 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@167 -- # ip link set nvmf_init_br2 down 00:22:53.677 Cannot find device "nvmf_init_br2" 00:22:53.677 11:53:23 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@167 -- # true 00:22:53.677 11:53:23 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@168 -- # ip link set nvmf_tgt_br down 00:22:53.677 Cannot find device "nvmf_tgt_br" 00:22:53.677 11:53:23 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@168 -- # true 00:22:53.677 11:53:23 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@169 -- # ip link set nvmf_tgt_br2 down 00:22:53.936 Cannot find device "nvmf_tgt_br2" 00:22:53.936 11:53:23 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@169 -- # true 00:22:53.936 11:53:23 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@170 -- # ip link delete nvmf_br type bridge 00:22:53.936 Cannot find device "nvmf_br" 00:22:53.936 11:53:23 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@170 -- # true 00:22:53.936 11:53:23 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@171 -- # ip link delete nvmf_init_if 00:22:53.936 Cannot find device "nvmf_init_if" 00:22:53.936 11:53:23 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@171 -- # true 00:22:53.936 11:53:23 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@172 -- # ip link delete nvmf_init_if2 00:22:53.936 Cannot find device "nvmf_init_if2" 00:22:53.936 11:53:23 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@172 -- # true 00:22:53.936 11:53:23 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@173 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:22:53.936 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:22:53.936 11:53:23 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@173 -- # true 00:22:53.936 11:53:23 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@174 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:22:53.936 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:22:53.936 11:53:23 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@174 -- # true 00:22:53.936 11:53:23 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@177 -- # ip netns add nvmf_tgt_ns_spdk 00:22:53.936 11:53:23 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@180 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:22:53.936 11:53:23 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@181 -- # ip link add nvmf_init_if2 type veth peer name nvmf_init_br2 00:22:53.936 11:53:23 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@182 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:22:53.936 11:53:23 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@183 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:22:53.937 11:53:23 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:22:53.937 11:53:23 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@187 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:22:53.937 11:53:23 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@190 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:22:53.937 11:53:23 
nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@191 -- # ip addr add 10.0.0.2/24 dev nvmf_init_if2 00:22:53.937 11:53:23 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@192 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if 00:22:53.937 11:53:23 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@193 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2 00:22:53.937 11:53:23 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@196 -- # ip link set nvmf_init_if up 00:22:53.937 11:53:23 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@197 -- # ip link set nvmf_init_if2 up 00:22:53.937 11:53:23 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@198 -- # ip link set nvmf_init_br up 00:22:53.937 11:53:23 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@199 -- # ip link set nvmf_init_br2 up 00:22:53.937 11:53:23 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@200 -- # ip link set nvmf_tgt_br up 00:22:53.937 11:53:23 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@201 -- # ip link set nvmf_tgt_br2 up 00:22:53.937 11:53:23 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@202 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:22:53.937 11:53:23 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@203 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:22:53.937 11:53:23 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@204 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:22:53.937 11:53:23 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@207 -- # ip link add nvmf_br type bridge 00:22:53.937 11:53:23 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@208 -- # ip link set nvmf_br up 00:22:53.937 11:53:23 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@211 -- # ip link set nvmf_init_br master nvmf_br 00:22:53.937 11:53:23 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@212 -- # ip link set nvmf_init_br2 master nvmf_br 00:22:53.937 11:53:24 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@213 -- # ip link set nvmf_tgt_br master nvmf_br 00:22:53.937 11:53:24 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@214 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:22:53.937 11:53:24 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@217 -- # ipts -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:22:53.937 11:53:24 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT' 00:22:53.937 11:53:24 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@218 -- # ipts -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT 00:22:53.937 11:53:24 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT' 00:22:53.937 11:53:24 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@219 -- # ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:22:53.937 11:53:24 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@790 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT' 00:22:53.937 11:53:24 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@222 -- # ping -c 1 10.0.0.3 00:22:54.196 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 
00:22:54.196 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.112 ms 00:22:54.196 00:22:54.196 --- 10.0.0.3 ping statistics --- 00:22:54.196 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:22:54.196 rtt min/avg/max/mdev = 0.112/0.112/0.112/0.000 ms 00:22:54.196 11:53:24 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@223 -- # ping -c 1 10.0.0.4 00:22:54.196 PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data. 00:22:54.196 64 bytes from 10.0.0.4: icmp_seq=1 ttl=64 time=0.082 ms 00:22:54.196 00:22:54.196 --- 10.0.0.4 ping statistics --- 00:22:54.196 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:22:54.196 rtt min/avg/max/mdev = 0.082/0.082/0.082/0.000 ms 00:22:54.196 11:53:24 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@224 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:22:54.196 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:22:54.196 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.036 ms 00:22:54.196 00:22:54.196 --- 10.0.0.1 ping statistics --- 00:22:54.196 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:22:54.196 rtt min/avg/max/mdev = 0.036/0.036/0.036/0.000 ms 00:22:54.196 11:53:24 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@225 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.2 00:22:54.196 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:22:54.196 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.075 ms 00:22:54.196 00:22:54.196 --- 10.0.0.2 ping statistics --- 00:22:54.196 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:22:54.196 rtt min/avg/max/mdev = 0.075/0.075/0.075/0.000 ms 00:22:54.196 11:53:24 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@227 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:22:54.196 11:53:24 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@461 -- # return 0 00:22:54.196 11:53:24 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:22:54.196 11:53:24 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:22:54.196 11:53:24 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:22:54.196 11:53:24 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:22:54.196 11:53:24 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:22:54.196 11:53:24 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:22:54.196 11:53:24 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:22:54.196 11:53:24 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@140 -- # trap cleanup SIGINT SIGTERM EXIT 00:22:54.196 11:53:24 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@141 -- # [[ 0 -eq 1 ]] 00:22:54.196 11:53:24 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@145 -- # run_test nvmf_digest_clean run_digest 00:22:54.196 11:53:24 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:22:54.196 11:53:24 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1111 -- # xtrace_disable 00:22:54.196 11:53:24 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@10 -- # set +x 00:22:54.196 ************************************ 00:22:54.196 START TEST nvmf_digest_clean 00:22:54.196 ************************************ 00:22:54.196 11:53:24 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@1129 -- # run_digest 00:22:54.196 11:53:24 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@120 -- # local dsa_initiator 
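[note] Before the target starts, nvmftestinit has built a private network out of veth pairs: nvmf_tgt_if/nvmf_tgt_if2 are moved into the nvmf_tgt_ns_spdk namespace and addressed 10.0.0.3/10.0.0.4, the initiator-side nvmf_init_if/nvmf_init_if2 stay in the root namespace as 10.0.0.1/10.0.0.2, and both sides are joined through the nvmf_br bridge with iptables ACCEPT rules for port 4420. The four pings above verify the path in both directions before any NVMe/TCP traffic flows. The core of that check, reduced to a sketch that reuses the interface names and addresses from the trace:

ip addr add 10.0.0.1/24 dev nvmf_init_if                                 # initiator side, root namespace
ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if   # target side, inside the namespace
ping -c 1 10.0.0.3                                                       # root namespace -> target namespace
ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1                        # target namespace -> root namespace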
00:22:54.196 11:53:24 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@121 -- # [[ '' == \d\s\a\_\i\n\i\t\i\a\t\o\r ]] 00:22:54.196 11:53:24 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@121 -- # dsa_initiator=false 00:22:54.196 11:53:24 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@123 -- # tgt_params=("--wait-for-rpc") 00:22:54.196 11:53:24 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@124 -- # nvmfappstart --wait-for-rpc 00:22:54.196 11:53:24 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:22:54.196 11:53:24 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@726 -- # xtrace_disable 00:22:54.196 11:53:24 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:22:54.196 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:22:54.196 11:53:24 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- nvmf/common.sh@509 -- # nvmfpid=96982 00:22:54.196 11:53:24 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- nvmf/common.sh@508 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --wait-for-rpc 00:22:54.196 11:53:24 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- nvmf/common.sh@510 -- # waitforlisten 96982 00:22:54.196 11:53:24 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@835 -- # '[' -z 96982 ']' 00:22:54.196 11:53:24 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:22:54.196 11:53:24 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@840 -- # local max_retries=100 00:22:54.196 11:53:24 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:22:54.196 11:53:24 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@844 -- # xtrace_disable 00:22:54.196 11:53:24 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:22:54.196 [2024-11-28 11:53:24.181269] Starting SPDK v25.01-pre git sha1 35cd3e84d / DPDK 24.11.0-rc4 initialization... 00:22:54.196 [2024-11-28 11:53:24.181866] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:22:54.196 [2024-11-28 11:53:24.310608] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.11.0-rc4 is used. There is no support for it in SPDK. Enabled only for validation. 00:22:54.455 [2024-11-28 11:53:24.342983] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:22:54.455 [2024-11-28 11:53:24.385449] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:22:54.455 [2024-11-28 11:53:24.385543] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:22:54.455 [2024-11-28 11:53:24.385559] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:22:54.455 [2024-11-28 11:53:24.385570] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 
00:22:54.455 [2024-11-28 11:53:24.385579] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:22:54.455 [2024-11-28 11:53:24.386055] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:22:55.391 11:53:25 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:22:55.391 11:53:25 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@868 -- # return 0 00:22:55.391 11:53:25 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:22:55.391 11:53:25 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@732 -- # xtrace_disable 00:22:55.391 11:53:25 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:22:55.391 11:53:25 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:22:55.391 11:53:25 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@125 -- # [[ '' == \d\s\a\_\t\a\r\g\e\t ]] 00:22:55.391 11:53:25 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@126 -- # common_target_config 00:22:55.391 11:53:25 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@43 -- # rpc_cmd 00:22:55.391 11:53:25 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:55.391 11:53:25 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:22:55.391 [2024-11-28 11:53:25.307602] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:22:55.391 null0 00:22:55.391 [2024-11-28 11:53:25.367139] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:22:55.391 [2024-11-28 11:53:25.391331] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:22:55.392 11:53:25 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:55.392 11:53:25 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@128 -- # run_bperf randread 4096 128 false 00:22:55.392 11:53:25 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@77 -- # local rw bs qd scan_dsa 00:22:55.392 11:53:25 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@78 -- # local acc_module acc_executed exp_module 00:22:55.392 11:53:25 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # rw=randread 00:22:55.392 11:53:25 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # bs=4096 00:22:55.392 11:53:25 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # qd=128 00:22:55.392 11:53:25 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # scan_dsa=false 00:22:55.392 11:53:25 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@83 -- # bperfpid=97014 00:22:55.392 11:53:25 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@82 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randread -o 4096 -t 2 -q 128 -z --wait-for-rpc 00:22:55.392 11:53:25 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@84 -- # waitforlisten 97014 /var/tmp/bperf.sock 00:22:55.392 11:53:25 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@835 -- # '[' -z 97014 
']' 00:22:55.392 11:53:25 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bperf.sock 00:22:55.392 11:53:25 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@840 -- # local max_retries=100 00:22:55.392 11:53:25 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:22:55.392 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:22:55.392 11:53:25 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@844 -- # xtrace_disable 00:22:55.392 11:53:25 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:22:55.392 [2024-11-28 11:53:25.460852] Starting SPDK v25.01-pre git sha1 35cd3e84d / DPDK 24.11.0-rc4 initialization... 00:22:55.392 [2024-11-28 11:53:25.461140] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid97014 ] 00:22:55.651 [2024-11-28 11:53:25.588214] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.11.0-rc4 is used. There is no support for it in SPDK. Enabled only for validation. 00:22:55.651 [2024-11-28 11:53:25.612136] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:22:55.651 [2024-11-28 11:53:25.643318] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:22:55.651 11:53:25 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:22:55.651 11:53:25 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@868 -- # return 0 00:22:55.651 11:53:25 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@86 -- # false 00:22:55.651 11:53:25 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@87 -- # bperf_rpc framework_start_init 00:22:55.651 11:53:25 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock framework_start_init 00:22:55.911 [2024-11-28 11:53:26.006875] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:22:56.171 11:53:26 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@89 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:22:56.171 11:53:26 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:22:56.430 nvme0n1 00:22:56.430 11:53:26 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@92 -- # bperf_py perform_tests 00:22:56.430 11:53:26 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@19 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:22:56.430 Running I/O for 2 seconds... 
00:22:58.746 18288.00 IOPS, 71.44 MiB/s [2024-11-28T11:53:28.872Z] 18478.50 IOPS, 72.18 MiB/s 00:22:58.746 Latency(us) 00:22:58.746 [2024-11-28T11:53:28.872Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:22:58.746 Job: nvme0n1 (Core Mask 0x2, workload: randread, depth: 128, IO size: 4096) 00:22:58.746 nvme0n1 : 2.01 18516.63 72.33 0.00 0.00 6907.74 6523.81 21567.30 00:22:58.746 [2024-11-28T11:53:28.872Z] =================================================================================================================== 00:22:58.746 [2024-11-28T11:53:28.872Z] Total : 18516.63 72.33 0.00 0.00 6907.74 6523.81 21567.30 00:22:58.746 { 00:22:58.746 "results": [ 00:22:58.746 { 00:22:58.746 "job": "nvme0n1", 00:22:58.746 "core_mask": "0x2", 00:22:58.746 "workload": "randread", 00:22:58.746 "status": "finished", 00:22:58.746 "queue_depth": 128, 00:22:58.746 "io_size": 4096, 00:22:58.746 "runtime": 2.009653, 00:22:58.746 "iops": 18516.62948777724, 00:22:58.746 "mibps": 72.33058393662985, 00:22:58.746 "io_failed": 0, 00:22:58.746 "io_timeout": 0, 00:22:58.746 "avg_latency_us": 6907.737310935866, 00:22:58.746 "min_latency_us": 6523.810909090909, 00:22:58.746 "max_latency_us": 21567.30181818182 00:22:58.746 } 00:22:58.746 ], 00:22:58.746 "core_count": 1 00:22:58.746 } 00:22:58.746 11:53:28 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # read -r acc_module acc_executed 00:22:58.746 11:53:28 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # get_accel_stats 00:22:58.746 11:53:28 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@37 -- # jq -rc '.operations[] 00:22:58.746 | select(.opcode=="crc32c") 00:22:58.746 | "\(.module_name) \(.executed)"' 00:22:58.746 11:53:28 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@36 -- # bperf_rpc accel_get_stats 00:22:58.746 11:53:28 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock accel_get_stats 00:22:58.746 11:53:28 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # false 00:22:58.746 11:53:28 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # exp_module=software 00:22:58.746 11:53:28 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@95 -- # (( acc_executed > 0 )) 00:22:58.746 11:53:28 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@96 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:22:58.746 11:53:28 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@98 -- # killprocess 97014 00:22:58.746 11:53:28 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@954 -- # '[' -z 97014 ']' 00:22:58.746 11:53:28 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@958 -- # kill -0 97014 00:22:58.746 11:53:28 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@959 -- # uname 00:22:58.746 11:53:28 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:22:58.746 11:53:28 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 97014 00:22:58.746 killing process with pid 97014 00:22:58.746 Received shutdown signal, test time was about 2.000000 seconds 00:22:58.746 00:22:58.746 Latency(us) 00:22:58.746 [2024-11-28T11:53:28.872Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 
00:22:58.746 [2024-11-28T11:53:28.872Z] =================================================================================================================== 00:22:58.746 [2024-11-28T11:53:28.872Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:22:58.746 11:53:28 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:22:58.746 11:53:28 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:22:58.746 11:53:28 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@972 -- # echo 'killing process with pid 97014' 00:22:58.746 11:53:28 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@973 -- # kill 97014 00:22:58.746 11:53:28 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@978 -- # wait 97014 00:22:59.006 11:53:28 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@129 -- # run_bperf randread 131072 16 false 00:22:59.006 11:53:28 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@77 -- # local rw bs qd scan_dsa 00:22:59.006 11:53:28 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@78 -- # local acc_module acc_executed exp_module 00:22:59.006 11:53:28 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # rw=randread 00:22:59.006 11:53:28 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # bs=131072 00:22:59.006 11:53:28 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # qd=16 00:22:59.006 11:53:28 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # scan_dsa=false 00:22:59.006 11:53:28 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@82 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randread -o 131072 -t 2 -q 16 -z --wait-for-rpc 00:22:59.006 11:53:28 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@83 -- # bperfpid=97068 00:22:59.006 11:53:28 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@84 -- # waitforlisten 97068 /var/tmp/bperf.sock 00:22:59.006 11:53:28 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@835 -- # '[' -z 97068 ']' 00:22:59.006 11:53:28 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bperf.sock 00:22:59.006 11:53:28 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@840 -- # local max_retries=100 00:22:59.006 11:53:28 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:22:59.006 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:22:59.006 11:53:28 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@844 -- # xtrace_disable 00:22:59.006 11:53:28 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:22:59.006 [2024-11-28 11:53:28.988530] Starting SPDK v25.01-pre git sha1 35cd3e84d / DPDK 24.11.0-rc4 initialization... 00:22:59.006 [2024-11-28 11:53:28.988857] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6I/O size of 131072 is greater than zero copy threshold (65536). 00:22:59.006 Zero copy mechanism will not be used. 
00:22:59.006 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid97068 ] 00:22:59.006 [2024-11-28 11:53:29.106829] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.11.0-rc4 is used. There is no support for it in SPDK. Enabled only for validation. 00:22:59.265 [2024-11-28 11:53:29.130425] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:22:59.265 [2024-11-28 11:53:29.161775] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:22:59.265 11:53:29 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:22:59.265 11:53:29 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@868 -- # return 0 00:22:59.266 11:53:29 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@86 -- # false 00:22:59.266 11:53:29 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@87 -- # bperf_rpc framework_start_init 00:22:59.266 11:53:29 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock framework_start_init 00:22:59.524 [2024-11-28 11:53:29.494394] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:22:59.524 11:53:29 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@89 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:22:59.524 11:53:29 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:22:59.781 nvme0n1 00:22:59.781 11:53:29 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@92 -- # bperf_py perform_tests 00:22:59.781 11:53:29 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@19 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:23:00.037 I/O size of 131072 is greater than zero copy threshold (65536). 00:23:00.037 Zero copy mechanism will not be used. 00:23:00.037 Running I/O for 2 seconds... 
00:23:01.903 7952.00 IOPS, 994.00 MiB/s [2024-11-28T11:53:32.029Z] 7936.00 IOPS, 992.00 MiB/s 00:23:01.903 Latency(us) 00:23:01.903 [2024-11-28T11:53:32.029Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:23:01.903 Job: nvme0n1 (Core Mask 0x2, workload: randread, depth: 16, IO size: 131072) 00:23:01.903 nvme0n1 : 2.00 7935.84 991.98 0.00 0.00 2013.45 1906.50 4081.11 00:23:01.903 [2024-11-28T11:53:32.029Z] =================================================================================================================== 00:23:01.903 [2024-11-28T11:53:32.029Z] Total : 7935.84 991.98 0.00 0.00 2013.45 1906.50 4081.11 00:23:01.903 { 00:23:01.903 "results": [ 00:23:01.903 { 00:23:01.903 "job": "nvme0n1", 00:23:01.903 "core_mask": "0x2", 00:23:01.903 "workload": "randread", 00:23:01.903 "status": "finished", 00:23:01.903 "queue_depth": 16, 00:23:01.903 "io_size": 131072, 00:23:01.903 "runtime": 2.002057, 00:23:01.903 "iops": 7935.837990626641, 00:23:01.903 "mibps": 991.9797488283301, 00:23:01.903 "io_failed": 0, 00:23:01.903 "io_timeout": 0, 00:23:01.903 "avg_latency_us": 2013.4485544264396, 00:23:01.903 "min_latency_us": 1906.5018181818182, 00:23:01.903 "max_latency_us": 4081.1054545454544 00:23:01.903 } 00:23:01.903 ], 00:23:01.903 "core_count": 1 00:23:01.903 } 00:23:01.903 11:53:31 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # read -r acc_module acc_executed 00:23:01.903 11:53:31 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # get_accel_stats 00:23:01.903 11:53:31 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@36 -- # bperf_rpc accel_get_stats 00:23:01.903 11:53:31 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock accel_get_stats 00:23:01.903 11:53:31 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@37 -- # jq -rc '.operations[] 00:23:01.903 | select(.opcode=="crc32c") 00:23:01.903 | "\(.module_name) \(.executed)"' 00:23:02.161 11:53:32 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # false 00:23:02.161 11:53:32 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # exp_module=software 00:23:02.161 11:53:32 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@95 -- # (( acc_executed > 0 )) 00:23:02.161 11:53:32 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@96 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:23:02.161 11:53:32 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@98 -- # killprocess 97068 00:23:02.161 11:53:32 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@954 -- # '[' -z 97068 ']' 00:23:02.161 11:53:32 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@958 -- # kill -0 97068 00:23:02.161 11:53:32 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@959 -- # uname 00:23:02.161 11:53:32 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:23:02.161 11:53:32 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 97068 00:23:02.161 killing process with pid 97068 00:23:02.161 Received shutdown signal, test time was about 2.000000 seconds 00:23:02.161 00:23:02.161 Latency(us) 00:23:02.161 [2024-11-28T11:53:32.287Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 
00:23:02.161 [2024-11-28T11:53:32.287Z] =================================================================================================================== 00:23:02.161 [2024-11-28T11:53:32.287Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:23:02.161 11:53:32 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:23:02.161 11:53:32 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:23:02.161 11:53:32 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@972 -- # echo 'killing process with pid 97068' 00:23:02.161 11:53:32 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@973 -- # kill 97068 00:23:02.161 11:53:32 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@978 -- # wait 97068 00:23:02.419 11:53:32 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@130 -- # run_bperf randwrite 4096 128 false 00:23:02.419 11:53:32 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@77 -- # local rw bs qd scan_dsa 00:23:02.419 11:53:32 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@78 -- # local acc_module acc_executed exp_module 00:23:02.419 11:53:32 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # rw=randwrite 00:23:02.419 11:53:32 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # bs=4096 00:23:02.419 11:53:32 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # qd=128 00:23:02.419 11:53:32 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # scan_dsa=false 00:23:02.419 11:53:32 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@83 -- # bperfpid=97115 00:23:02.419 11:53:32 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@82 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randwrite -o 4096 -t 2 -q 128 -z --wait-for-rpc 00:23:02.419 11:53:32 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@84 -- # waitforlisten 97115 /var/tmp/bperf.sock 00:23:02.419 11:53:32 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@835 -- # '[' -z 97115 ']' 00:23:02.419 11:53:32 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bperf.sock 00:23:02.419 11:53:32 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@840 -- # local max_retries=100 00:23:02.419 11:53:32 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:23:02.419 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:23:02.419 11:53:32 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@844 -- # xtrace_disable 00:23:02.420 11:53:32 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:23:02.420 [2024-11-28 11:53:32.461536] Starting SPDK v25.01-pre git sha1 35cd3e84d / DPDK 24.11.0-rc4 initialization... 
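[note] Each run_bperf pass in nvmf_digest_clean follows the same flow that has just started again for the randwrite case: launch bdevperf with --wait-for-rpc, bring the framework up over /var/tmp/bperf.sock, attach the NVMe-oF controller with --ddgst so every data transfer carries a crc32c data digest, drive I/O for two seconds via perform_tests, then read accel_get_stats and confirm crc32c was executed by the expected module (software here, since no DSA initiator or target is configured). A condensed sketch of that sequence, with repository-relative paths shortened from the absolute ones in the trace:

build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randwrite -o 4096 -t 2 -q 128 -z --wait-for-rpc &
scripts/rpc.py -s /var/tmp/bperf.sock framework_start_init
scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst \
    -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0
examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests
scripts/rpc.py -s /var/tmp/bperf.sock accel_get_stats \
    | jq -rc '.operations[] | select(.opcode=="crc32c") | "\(.module_name) \(.executed)"'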
00:23:02.420 [2024-11-28 11:53:32.461775] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid97115 ] 00:23:02.678 [2024-11-28 11:53:32.579874] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.11.0-rc4 is used. There is no support for it in SPDK. Enabled only for validation. 00:23:02.678 [2024-11-28 11:53:32.602941] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:23:02.678 [2024-11-28 11:53:32.633753] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:23:02.678 11:53:32 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:23:02.678 11:53:32 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@868 -- # return 0 00:23:02.678 11:53:32 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@86 -- # false 00:23:02.678 11:53:32 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@87 -- # bperf_rpc framework_start_init 00:23:02.678 11:53:32 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock framework_start_init 00:23:02.937 [2024-11-28 11:53:33.016346] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:23:03.196 11:53:33 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@89 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:23:03.196 11:53:33 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:23:03.455 nvme0n1 00:23:03.455 11:53:33 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@92 -- # bperf_py perform_tests 00:23:03.455 11:53:33 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@19 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:23:03.455 Running I/O for 2 seconds... 
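Note: one iteration like the run traced above can be reproduced on its own with the same bdevperf/RPC pairing the script uses. This is a minimal sketch, assuming the repo layout /home/vagrant/spdk_repo/spdk and the private RPC socket /var/tmp/bperf.sock seen in the trace, and that a target is already serving nqn.2016-06.io.spdk:cnode1 at 10.0.0.3:4420; only commands visible in the trace are used.
SPDK=/home/vagrant/spdk_repo/spdk
# start bdevperf in wait-for-RPC mode on its own socket (same flags as in the trace)
$SPDK/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randwrite -o 4096 -t 2 -q 128 -z --wait-for-rpc &
# finish framework init, attach the NVMe-oF namespace with data digest enabled, then trigger the run
$SPDK/scripts/rpc.py -s /var/tmp/bperf.sock framework_start_init
$SPDK/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0
$SPDK/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests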
00:23:05.769 19813.00 IOPS, 77.39 MiB/s [2024-11-28T11:53:35.895Z] 19876.00 IOPS, 77.64 MiB/s 00:23:05.769 Latency(us) 00:23:05.769 [2024-11-28T11:53:35.895Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:23:05.769 Job: nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:23:05.769 nvme0n1 : 2.00 19917.98 77.80 0.00 0.00 6420.82 4319.42 12690.15 00:23:05.769 [2024-11-28T11:53:35.895Z] =================================================================================================================== 00:23:05.769 [2024-11-28T11:53:35.895Z] Total : 19917.98 77.80 0.00 0.00 6420.82 4319.42 12690.15 00:23:05.769 { 00:23:05.769 "results": [ 00:23:05.769 { 00:23:05.769 "job": "nvme0n1", 00:23:05.769 "core_mask": "0x2", 00:23:05.769 "workload": "randwrite", 00:23:05.769 "status": "finished", 00:23:05.769 "queue_depth": 128, 00:23:05.769 "io_size": 4096, 00:23:05.769 "runtime": 2.002211, 00:23:05.769 "iops": 19917.9806723667, 00:23:05.769 "mibps": 77.80461200143242, 00:23:05.769 "io_failed": 0, 00:23:05.769 "io_timeout": 0, 00:23:05.769 "avg_latency_us": 6420.820430746785, 00:23:05.769 "min_latency_us": 4319.418181818181, 00:23:05.769 "max_latency_us": 12690.152727272727 00:23:05.769 } 00:23:05.769 ], 00:23:05.769 "core_count": 1 00:23:05.769 } 00:23:05.769 11:53:35 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # read -r acc_module acc_executed 00:23:05.769 11:53:35 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # get_accel_stats 00:23:05.769 11:53:35 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@36 -- # bperf_rpc accel_get_stats 00:23:05.769 11:53:35 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@37 -- # jq -rc '.operations[] 00:23:05.769 | select(.opcode=="crc32c") 00:23:05.769 | "\(.module_name) \(.executed)"' 00:23:05.769 11:53:35 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock accel_get_stats 00:23:05.769 11:53:35 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # false 00:23:05.769 11:53:35 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # exp_module=software 00:23:05.769 11:53:35 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@95 -- # (( acc_executed > 0 )) 00:23:05.769 11:53:35 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@96 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:23:05.769 11:53:35 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@98 -- # killprocess 97115 00:23:05.769 11:53:35 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@954 -- # '[' -z 97115 ']' 00:23:05.769 11:53:35 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@958 -- # kill -0 97115 00:23:05.769 11:53:35 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@959 -- # uname 00:23:05.769 11:53:35 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:23:05.769 11:53:35 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 97115 00:23:05.769 killing process with pid 97115 00:23:05.769 Received shutdown signal, test time was about 2.000000 seconds 00:23:05.769 00:23:05.769 Latency(us) 00:23:05.769 [2024-11-28T11:53:35.895Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 
00:23:05.769 [2024-11-28T11:53:35.895Z] =================================================================================================================== 00:23:05.769 [2024-11-28T11:53:35.896Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:23:05.770 11:53:35 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:23:05.770 11:53:35 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:23:05.770 11:53:35 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@972 -- # echo 'killing process with pid 97115' 00:23:05.770 11:53:35 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@973 -- # kill 97115 00:23:05.770 11:53:35 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@978 -- # wait 97115 00:23:06.029 11:53:36 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@131 -- # run_bperf randwrite 131072 16 false 00:23:06.029 11:53:36 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@77 -- # local rw bs qd scan_dsa 00:23:06.029 11:53:36 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@78 -- # local acc_module acc_executed exp_module 00:23:06.029 11:53:36 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # rw=randwrite 00:23:06.029 11:53:36 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # bs=131072 00:23:06.029 11:53:36 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # qd=16 00:23:06.029 11:53:36 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # scan_dsa=false 00:23:06.029 11:53:36 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@83 -- # bperfpid=97163 00:23:06.029 11:53:36 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@84 -- # waitforlisten 97163 /var/tmp/bperf.sock 00:23:06.029 11:53:36 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@82 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randwrite -o 131072 -t 2 -q 16 -z --wait-for-rpc 00:23:06.029 11:53:36 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@835 -- # '[' -z 97163 ']' 00:23:06.029 11:53:36 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bperf.sock 00:23:06.029 11:53:36 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@840 -- # local max_retries=100 00:23:06.029 11:53:36 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:23:06.029 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:23:06.029 11:53:36 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@844 -- # xtrace_disable 00:23:06.029 11:53:36 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:23:06.029 I/O size of 131072 is greater than zero copy threshold (65536). 00:23:06.029 Zero copy mechanism will not be used. 00:23:06.029 [2024-11-28 11:53:36.080080] Starting SPDK v25.01-pre git sha1 35cd3e84d / DPDK 24.11.0-rc4 initialization... 
00:23:06.029 [2024-11-28 11:53:36.080170] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid97163 ] 00:23:06.288 [2024-11-28 11:53:36.205362] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.11.0-rc4 is used. There is no support for it in SPDK. Enabled only for validation. 00:23:06.288 [2024-11-28 11:53:36.228827] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:23:06.288 [2024-11-28 11:53:36.260453] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:23:06.288 11:53:36 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:23:06.288 11:53:36 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@868 -- # return 0 00:23:06.288 11:53:36 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@86 -- # false 00:23:06.288 11:53:36 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@87 -- # bperf_rpc framework_start_init 00:23:06.288 11:53:36 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock framework_start_init 00:23:06.547 [2024-11-28 11:53:36.580444] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:23:06.547 11:53:36 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@89 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:23:06.548 11:53:36 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:23:06.806 nvme0n1 00:23:06.807 11:53:36 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@92 -- # bperf_py perform_tests 00:23:06.807 11:53:36 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@19 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:23:07.065 I/O size of 131072 is greater than zero copy threshold (65536). 00:23:07.065 Zero copy mechanism will not be used. 00:23:07.065 Running I/O for 2 seconds... 
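After each run completes, the script checks that crc32c was actually produced by the expected accel module. A minimal sketch of that check, using only the RPC call and jq filter that appear verbatim in this trace (the expected module is "software" here, since no hardware offload is in play):
# query accel statistics from the bperf instance and keep only the crc32c counters
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock accel_get_stats \
  | jq -rc '.operations[] | select(.opcode=="crc32c") | "\(.module_name) \(.executed)"'
# the test then asserts the reported module matches the expected one and that executed > 0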
00:23:08.940 6699.00 IOPS, 837.38 MiB/s [2024-11-28T11:53:39.066Z] 6715.50 IOPS, 839.44 MiB/s 00:23:08.940 Latency(us) 00:23:08.940 [2024-11-28T11:53:39.066Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:23:08.940 Job: nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 16, IO size: 131072) 00:23:08.940 nvme0n1 : 2.00 6713.29 839.16 0.00 0.00 2378.77 1846.92 6047.19 00:23:08.940 [2024-11-28T11:53:39.066Z] =================================================================================================================== 00:23:08.940 [2024-11-28T11:53:39.066Z] Total : 6713.29 839.16 0.00 0.00 2378.77 1846.92 6047.19 00:23:08.940 { 00:23:08.940 "results": [ 00:23:08.940 { 00:23:08.940 "job": "nvme0n1", 00:23:08.940 "core_mask": "0x2", 00:23:08.940 "workload": "randwrite", 00:23:08.940 "status": "finished", 00:23:08.940 "queue_depth": 16, 00:23:08.940 "io_size": 131072, 00:23:08.940 "runtime": 2.003042, 00:23:08.940 "iops": 6713.28908729822, 00:23:08.940 "mibps": 839.1611359122775, 00:23:08.940 "io_failed": 0, 00:23:08.940 "io_timeout": 0, 00:23:08.940 "avg_latency_us": 2378.771999972958, 00:23:08.940 "min_latency_us": 1846.9236363636364, 00:23:08.940 "max_latency_us": 6047.185454545454 00:23:08.940 } 00:23:08.940 ], 00:23:08.940 "core_count": 1 00:23:08.940 } 00:23:09.199 11:53:39 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # read -r acc_module acc_executed 00:23:09.199 11:53:39 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # get_accel_stats 00:23:09.199 11:53:39 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@36 -- # bperf_rpc accel_get_stats 00:23:09.199 11:53:39 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@37 -- # jq -rc '.operations[] 00:23:09.199 | select(.opcode=="crc32c") 00:23:09.199 | "\(.module_name) \(.executed)"' 00:23:09.199 11:53:39 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock accel_get_stats 00:23:09.458 11:53:39 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # false 00:23:09.458 11:53:39 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # exp_module=software 00:23:09.458 11:53:39 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@95 -- # (( acc_executed > 0 )) 00:23:09.458 11:53:39 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@96 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:23:09.458 11:53:39 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@98 -- # killprocess 97163 00:23:09.458 11:53:39 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@954 -- # '[' -z 97163 ']' 00:23:09.458 11:53:39 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@958 -- # kill -0 97163 00:23:09.458 11:53:39 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@959 -- # uname 00:23:09.458 11:53:39 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:23:09.458 11:53:39 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 97163 00:23:09.458 killing process with pid 97163 00:23:09.458 Received shutdown signal, test time was about 2.000000 seconds 00:23:09.458 00:23:09.458 Latency(us) 00:23:09.458 [2024-11-28T11:53:39.584Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 
00:23:09.458 [2024-11-28T11:53:39.584Z] =================================================================================================================== 00:23:09.458 [2024-11-28T11:53:39.584Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:23:09.458 11:53:39 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:23:09.458 11:53:39 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:23:09.458 11:53:39 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@972 -- # echo 'killing process with pid 97163' 00:23:09.458 11:53:39 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@973 -- # kill 97163 00:23:09.458 11:53:39 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@978 -- # wait 97163 00:23:09.458 11:53:39 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@132 -- # killprocess 96982 00:23:09.458 11:53:39 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@954 -- # '[' -z 96982 ']' 00:23:09.458 11:53:39 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@958 -- # kill -0 96982 00:23:09.458 11:53:39 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@959 -- # uname 00:23:09.458 11:53:39 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:23:09.458 11:53:39 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 96982 00:23:09.717 killing process with pid 96982 00:23:09.717 11:53:39 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:23:09.717 11:53:39 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:23:09.717 11:53:39 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@972 -- # echo 'killing process with pid 96982' 00:23:09.717 11:53:39 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@973 -- # kill 96982 00:23:09.717 11:53:39 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@978 -- # wait 96982 00:23:09.717 ************************************ 00:23:09.717 END TEST nvmf_digest_clean 00:23:09.717 ************************************ 00:23:09.717 00:23:09.717 real 0m15.705s 00:23:09.717 user 0m28.764s 00:23:09.717 sys 0m5.311s 00:23:09.717 11:53:39 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@1130 -- # xtrace_disable 00:23:09.717 11:53:39 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:23:09.976 11:53:39 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@147 -- # run_test nvmf_digest_error run_digest_error 00:23:09.976 11:53:39 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:23:09.976 11:53:39 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1111 -- # xtrace_disable 00:23:09.976 11:53:39 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@10 -- # set +x 00:23:09.976 ************************************ 00:23:09.976 START TEST nvmf_digest_error 00:23:09.976 ************************************ 00:23:09.976 11:53:39 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@1129 -- # run_digest_error 00:23:09.976 11:53:39 
nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@102 -- # nvmfappstart --wait-for-rpc 00:23:09.977 11:53:39 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:23:09.977 11:53:39 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@726 -- # xtrace_disable 00:23:09.977 11:53:39 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:23:09.977 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:23:09.977 11:53:39 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- nvmf/common.sh@509 -- # nvmfpid=97239 00:23:09.977 11:53:39 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- nvmf/common.sh@510 -- # waitforlisten 97239 00:23:09.977 11:53:39 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@835 -- # '[' -z 97239 ']' 00:23:09.977 11:53:39 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:23:09.977 11:53:39 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- nvmf/common.sh@508 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --wait-for-rpc 00:23:09.977 11:53:39 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@840 -- # local max_retries=100 00:23:09.977 11:53:39 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:23:09.977 11:53:39 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@844 -- # xtrace_disable 00:23:09.977 11:53:39 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:23:09.977 [2024-11-28 11:53:39.947906] Starting SPDK v25.01-pre git sha1 35cd3e84d / DPDK 24.11.0-rc4 initialization... 00:23:09.977 [2024-11-28 11:53:39.948006] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:23:09.977 [2024-11-28 11:53:40.075215] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.11.0-rc4 is used. There is no support for it in SPDK. Enabled only for validation. 00:23:09.977 [2024-11-28 11:53:40.095466] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:23:10.237 [2024-11-28 11:53:40.143908] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:23:10.237 [2024-11-28 11:53:40.143954] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:23:10.237 [2024-11-28 11:53:40.143965] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:23:10.237 [2024-11-28 11:53:40.143971] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:23:10.237 [2024-11-28 11:53:40.143978] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
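The target for this error-injection test is started with --wait-for-rpc so that crc32c can be rerouted to the "error" accel module before initialization completes. A rough sketch of an RPC sequence that would produce the notices recorded below; the NQN, address, and port are the ones used throughout this log, while the null bdev size, block size, and serial number are hypothetical placeholders:
RPC=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
$RPC accel_assign_opc -o crc32c -m error          # route crc32c through the error-injection module
$RPC framework_start_init
$RPC nvmf_create_transport -t tcp
$RPC bdev_null_create null0 100 4096              # hypothetical size (MB) and block size for the null backing bdev
$RPC nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
$RPC nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 null0
$RPC nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420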
00:23:10.237 [2024-11-28 11:53:40.144367] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:23:10.818 11:53:40 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:23:10.818 11:53:40 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@868 -- # return 0 00:23:10.818 11:53:40 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:23:10.818 11:53:40 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@732 -- # xtrace_disable 00:23:10.818 11:53:40 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:23:11.110 11:53:40 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:23:11.110 11:53:40 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@104 -- # rpc_cmd accel_assign_opc -o crc32c -m error 00:23:11.110 11:53:40 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:11.110 11:53:40 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:23:11.110 [2024-11-28 11:53:40.944847] accel_rpc.c: 167:rpc_accel_assign_opc: *NOTICE*: Operation crc32c will be assigned to module error 00:23:11.110 11:53:40 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:11.110 11:53:40 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@105 -- # common_target_config 00:23:11.110 11:53:40 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@43 -- # rpc_cmd 00:23:11.110 11:53:40 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:11.110 11:53:40 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:23:11.110 [2024-11-28 11:53:41.005159] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:23:11.110 null0 00:23:11.110 [2024-11-28 11:53:41.053878] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:23:11.110 [2024-11-28 11:53:41.078010] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:23:11.110 11:53:41 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:11.110 11:53:41 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@108 -- # run_bperf_err randread 4096 128 00:23:11.110 11:53:41 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@54 -- # local rw bs qd 00:23:11.110 11:53:41 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # rw=randread 00:23:11.110 11:53:41 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # bs=4096 00:23:11.110 11:53:41 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # qd=128 00:23:11.110 11:53:41 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@58 -- # bperfpid=97271 00:23:11.110 11:53:41 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@57 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randread -o 4096 -t 2 -q 128 -z 00:23:11.110 11:53:41 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@60 -- # waitforlisten 97271 /var/tmp/bperf.sock 00:23:11.110 11:53:41 
nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@835 -- # '[' -z 97271 ']' 00:23:11.110 11:53:41 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bperf.sock 00:23:11.110 11:53:41 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@840 -- # local max_retries=100 00:23:11.110 11:53:41 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:23:11.110 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:23:11.110 11:53:41 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@844 -- # xtrace_disable 00:23:11.110 11:53:41 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:23:11.110 [2024-11-28 11:53:41.133107] Starting SPDK v25.01-pre git sha1 35cd3e84d / DPDK 24.11.0-rc4 initialization... 00:23:11.110 [2024-11-28 11:53:41.133354] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid97271 ] 00:23:11.407 [2024-11-28 11:53:41.253943] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.11.0-rc4 is used. There is no support for it in SPDK. Enabled only for validation. 00:23:11.407 [2024-11-28 11:53:41.273675] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:23:11.407 [2024-11-28 11:53:41.306371] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:23:11.407 [2024-11-28 11:53:41.359142] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:23:11.407 11:53:41 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:23:11.407 11:53:41 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@868 -- # return 0 00:23:11.407 11:53:41 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@61 -- # bperf_rpc bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:23:11.407 11:53:41 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:23:11.670 11:53:41 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@63 -- # rpc_cmd accel_error_inject_error -o crc32c -t disable 00:23:11.670 11:53:41 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:11.670 11:53:41 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:23:11.670 11:53:41 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:11.670 11:53:41 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@64 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:23:11.670 11:53:41 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:23:11.929 nvme0n1 
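Once the controller is attached, the error path is exercised by telling the target's accel layer to corrupt crc32c results. A minimal sketch using the two accel_error_inject_error calls traced above and below, assuming the target answers on its default /var/tmp/spdk.sock RPC socket:
RPC=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
$RPC accel_error_inject_error -o crc32c -t disable         # start clean: no injection while the controller attaches
# ... attach nvme0 via the bperf socket as shown earlier ...
$RPC accel_error_inject_error -o crc32c -t corrupt -i 256  # inject crc32c corruption with interval 256, as traced below
# run the workload from the bperf side; the host then reports the data digest errors seen in the following output
/home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests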
00:23:11.929 11:53:41 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@67 -- # rpc_cmd accel_error_inject_error -o crc32c -t corrupt -i 256 00:23:11.929 11:53:41 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:11.929 11:53:41 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:23:11.929 11:53:41 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:11.929 11:53:41 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@69 -- # bperf_py perform_tests 00:23:11.929 11:53:41 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@19 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:23:11.929 Running I/O for 2 seconds... 00:23:12.188 [2024-11-28 11:53:42.076035] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xaed4b0) 00:23:12.188 [2024-11-28 11:53:42.076080] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:6386 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:12.188 [2024-11-28 11:53:42.076110] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:12.188 [2024-11-28 11:53:42.090348] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xaed4b0) 00:23:12.188 [2024-11-28 11:53:42.090384] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:21066 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:12.188 [2024-11-28 11:53:42.090411] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:12.188 [2024-11-28 11:53:42.104461] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xaed4b0) 00:23:12.188 [2024-11-28 11:53:42.104662] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:5178 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:12.188 [2024-11-28 11:53:42.104679] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:12.188 [2024-11-28 11:53:42.118974] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xaed4b0) 00:23:12.188 [2024-11-28 11:53:42.119163] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:996 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:12.188 [2024-11-28 11:53:42.119179] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:12.188 [2024-11-28 11:53:42.133429] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xaed4b0) 00:23:12.188 [2024-11-28 11:53:42.133617] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:1950 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:12.188 [2024-11-28 11:53:42.133634] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:12.188 [2024-11-28 11:53:42.148293] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data 
digest error on tqpair=(0xaed4b0) 00:23:12.188 [2024-11-28 11:53:42.148340] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:12107 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:12.188 [2024-11-28 11:53:42.148367] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:12.188 [2024-11-28 11:53:42.163011] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xaed4b0) 00:23:12.188 [2024-11-28 11:53:42.163200] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:16459 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:12.188 [2024-11-28 11:53:42.163216] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:12.188 [2024-11-28 11:53:42.179854] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xaed4b0) 00:23:12.188 [2024-11-28 11:53:42.180032] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:8425 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:12.188 [2024-11-28 11:53:42.180048] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:12.188 [2024-11-28 11:53:42.195638] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xaed4b0) 00:23:12.188 [2024-11-28 11:53:42.195674] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:25135 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:12.188 [2024-11-28 11:53:42.195717] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:17 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:12.188 [2024-11-28 11:53:42.210000] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xaed4b0) 00:23:12.188 [2024-11-28 11:53:42.210047] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:15843 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:12.188 [2024-11-28 11:53:42.210058] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:19 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:12.188 [2024-11-28 11:53:42.224286] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xaed4b0) 00:23:12.188 [2024-11-28 11:53:42.224328] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:18868 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:12.188 [2024-11-28 11:53:42.224355] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:21 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:12.188 [2024-11-28 11:53:42.238563] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xaed4b0) 00:23:12.188 [2024-11-28 11:53:42.238736] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:11088 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:12.188 [2024-11-28 11:53:42.238752] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:23 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:12.189 [2024-11-28 11:53:42.252922] 
nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xaed4b0) 00:23:12.189 [2024-11-28 11:53:42.252956] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:4200 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:12.189 [2024-11-28 11:53:42.252984] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:25 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:12.189 [2024-11-28 11:53:42.267187] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xaed4b0) 00:23:12.189 [2024-11-28 11:53:42.267220] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:11364 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:12.189 [2024-11-28 11:53:42.267247] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:27 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:12.189 [2024-11-28 11:53:42.281412] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xaed4b0) 00:23:12.189 [2024-11-28 11:53:42.281446] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:7347 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:12.189 [2024-11-28 11:53:42.281474] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:29 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:12.189 [2024-11-28 11:53:42.295582] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xaed4b0) 00:23:12.189 [2024-11-28 11:53:42.295616] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:9121 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:12.189 [2024-11-28 11:53:42.295643] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:31 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:12.189 [2024-11-28 11:53:42.309939] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xaed4b0) 00:23:12.189 [2024-11-28 11:53:42.309973] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:3200 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:12.189 [2024-11-28 11:53:42.310000] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:33 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:12.447 [2024-11-28 11:53:42.325077] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xaed4b0) 00:23:12.447 [2024-11-28 11:53:42.325110] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:7948 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:12.447 [2024-11-28 11:53:42.325137] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:35 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:12.447 [2024-11-28 11:53:42.339609] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xaed4b0) 00:23:12.447 [2024-11-28 11:53:42.339641] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:2805 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:12.447 [2024-11-28 11:53:42.339667] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:37 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 
00:23:12.447 [2024-11-28 11:53:42.353814] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xaed4b0) 00:23:12.447 [2024-11-28 11:53:42.353846] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:1878 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:12.447 [2024-11-28 11:53:42.353873] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:39 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:12.447 [2024-11-28 11:53:42.368203] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xaed4b0) 00:23:12.448 [2024-11-28 11:53:42.368235] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:8476 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:12.448 [2024-11-28 11:53:42.368262] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:41 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:12.448 [2024-11-28 11:53:42.382598] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xaed4b0) 00:23:12.448 [2024-11-28 11:53:42.382756] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:2577 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:12.448 [2024-11-28 11:53:42.382773] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:43 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:12.448 [2024-11-28 11:53:42.396932] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xaed4b0) 00:23:12.448 [2024-11-28 11:53:42.396967] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:6551 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:12.448 [2024-11-28 11:53:42.396993] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:45 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:12.448 [2024-11-28 11:53:42.411339] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xaed4b0) 00:23:12.448 [2024-11-28 11:53:42.411400] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:729 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:12.448 [2024-11-28 11:53:42.411428] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:47 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:12.448 [2024-11-28 11:53:42.425502] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xaed4b0) 00:23:12.448 [2024-11-28 11:53:42.425534] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:849 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:12.448 [2024-11-28 11:53:42.425561] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:49 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:12.448 [2024-11-28 11:53:42.439755] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xaed4b0) 00:23:12.448 [2024-11-28 11:53:42.439787] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:1937 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:12.448 [2024-11-28 11:53:42.439814] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 
cid:51 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:12.448 [2024-11-28 11:53:42.453906] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xaed4b0) 00:23:12.448 [2024-11-28 11:53:42.453939] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:1832 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:12.448 [2024-11-28 11:53:42.453967] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:53 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:12.448 [2024-11-28 11:53:42.470272] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xaed4b0) 00:23:12.448 [2024-11-28 11:53:42.470313] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:14211 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:12.448 [2024-11-28 11:53:42.470341] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:55 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:12.448 [2024-11-28 11:53:42.486155] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xaed4b0) 00:23:12.448 [2024-11-28 11:53:42.486191] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:16550 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:12.448 [2024-11-28 11:53:42.486217] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:57 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:12.448 [2024-11-28 11:53:42.501485] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xaed4b0) 00:23:12.448 [2024-11-28 11:53:42.501519] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:7583 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:12.448 [2024-11-28 11:53:42.501546] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:59 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:12.448 [2024-11-28 11:53:42.516736] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xaed4b0) 00:23:12.448 [2024-11-28 11:53:42.516769] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:16805 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:12.448 [2024-11-28 11:53:42.516795] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:61 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:12.448 [2024-11-28 11:53:42.531204] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xaed4b0) 00:23:12.448 [2024-11-28 11:53:42.531238] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:11678 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:12.448 [2024-11-28 11:53:42.531265] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:63 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:12.448 [2024-11-28 11:53:42.545770] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xaed4b0) 00:23:12.448 [2024-11-28 11:53:42.545817] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:65 nsid:1 lba:5390 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:12.448 [2024-11-28 11:53:42.545829] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND 
TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:65 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:12.448 [2024-11-28 11:53:42.560142] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xaed4b0) 00:23:12.448 [2024-11-28 11:53:42.560175] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:67 nsid:1 lba:23613 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:12.448 [2024-11-28 11:53:42.560203] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:67 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:12.707 [2024-11-28 11:53:42.575983] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xaed4b0) 00:23:12.707 [2024-11-28 11:53:42.576184] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:69 nsid:1 lba:14460 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:12.707 [2024-11-28 11:53:42.576200] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:69 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:12.707 [2024-11-28 11:53:42.590893] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xaed4b0) 00:23:12.707 [2024-11-28 11:53:42.591110] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:71 nsid:1 lba:19044 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:12.707 [2024-11-28 11:53:42.591127] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:71 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:12.707 [2024-11-28 11:53:42.605551] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xaed4b0) 00:23:12.707 [2024-11-28 11:53:42.605584] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:73 nsid:1 lba:7409 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:12.707 [2024-11-28 11:53:42.605596] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:73 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:12.707 [2024-11-28 11:53:42.620118] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xaed4b0) 00:23:12.707 [2024-11-28 11:53:42.620151] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:75 nsid:1 lba:1883 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:12.707 [2024-11-28 11:53:42.620179] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:75 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:12.707 [2024-11-28 11:53:42.634665] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xaed4b0) 00:23:12.707 [2024-11-28 11:53:42.634699] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:77 nsid:1 lba:14984 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:12.707 [2024-11-28 11:53:42.634726] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:77 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:12.707 [2024-11-28 11:53:42.649331] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xaed4b0) 00:23:12.707 [2024-11-28 11:53:42.649363] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:79 nsid:1 lba:20988 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:12.707 [2024-11-28 11:53:42.649390] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:79 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:12.707 [2024-11-28 11:53:42.664091] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xaed4b0) 00:23:12.707 [2024-11-28 11:53:42.664125] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:81 nsid:1 lba:18211 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:12.707 [2024-11-28 11:53:42.664152] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:81 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:12.707 [2024-11-28 11:53:42.679511] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xaed4b0) 00:23:12.707 [2024-11-28 11:53:42.679542] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:83 nsid:1 lba:23772 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:12.707 [2024-11-28 11:53:42.679569] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:83 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:12.707 [2024-11-28 11:53:42.693782] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xaed4b0) 00:23:12.707 [2024-11-28 11:53:42.693814] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:85 nsid:1 lba:5145 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:12.708 [2024-11-28 11:53:42.693840] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:85 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:12.708 [2024-11-28 11:53:42.708127] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xaed4b0) 00:23:12.708 [2024-11-28 11:53:42.708159] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:87 nsid:1 lba:17794 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:12.708 [2024-11-28 11:53:42.708186] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:87 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:12.708 [2024-11-28 11:53:42.722405] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xaed4b0) 00:23:12.708 [2024-11-28 11:53:42.722437] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:89 nsid:1 lba:12010 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:12.708 [2024-11-28 11:53:42.722464] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:89 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:12.708 [2024-11-28 11:53:42.736840] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xaed4b0) 00:23:12.708 [2024-11-28 11:53:42.737021] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:91 nsid:1 lba:21136 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:12.708 [2024-11-28 11:53:42.737036] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:91 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:12.708 [2024-11-28 11:53:42.751409] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xaed4b0) 00:23:12.708 [2024-11-28 11:53:42.751442] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:93 nsid:1 lba:8428 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:12.708 
[2024-11-28 11:53:42.751469] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:93 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:12.708 [2024-11-28 11:53:42.765976] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xaed4b0) 00:23:12.708 [2024-11-28 11:53:42.766008] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:95 nsid:1 lba:12541 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:12.708 [2024-11-28 11:53:42.766035] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:95 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:12.708 [2024-11-28 11:53:42.780223] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xaed4b0) 00:23:12.708 [2024-11-28 11:53:42.780256] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:97 nsid:1 lba:19964 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:12.708 [2024-11-28 11:53:42.780283] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:97 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:12.708 [2024-11-28 11:53:42.794729] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xaed4b0) 00:23:12.708 [2024-11-28 11:53:42.794950] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:99 nsid:1 lba:5179 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:12.708 [2024-11-28 11:53:42.794966] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:99 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:12.708 [2024-11-28 11:53:42.809355] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xaed4b0) 00:23:12.708 [2024-11-28 11:53:42.809390] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:101 nsid:1 lba:2606 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:12.708 [2024-11-28 11:53:42.809417] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:101 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:12.708 [2024-11-28 11:53:42.823708] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xaed4b0) 00:23:12.708 [2024-11-28 11:53:42.823741] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:103 nsid:1 lba:11626 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:12.708 [2024-11-28 11:53:42.823767] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:103 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:12.969 [2024-11-28 11:53:42.839753] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xaed4b0) 00:23:12.969 [2024-11-28 11:53:42.839785] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:105 nsid:1 lba:9854 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:12.969 [2024-11-28 11:53:42.839812] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:105 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:12.969 [2024-11-28 11:53:42.854137] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xaed4b0) 00:23:12.969 [2024-11-28 11:53:42.854173] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:107 nsid:1 lba:22923 len:1 SGL 
TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:12.969 [2024-11-28 11:53:42.854199] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:107 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:12.970 [2024-11-28 11:53:42.868570] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xaed4b0) 00:23:12.970 [2024-11-28 11:53:42.868604] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:109 nsid:1 lba:5245 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:12.970 [2024-11-28 11:53:42.868632] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:109 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:12.970 [2024-11-28 11:53:42.883064] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xaed4b0) 00:23:12.970 [2024-11-28 11:53:42.883245] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:111 nsid:1 lba:6832 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:12.970 [2024-11-28 11:53:42.883261] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:111 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:12.970 [2024-11-28 11:53:42.897578] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xaed4b0) 00:23:12.970 [2024-11-28 11:53:42.897624] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:113 nsid:1 lba:22789 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:12.970 [2024-11-28 11:53:42.897652] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:113 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:12.970 [2024-11-28 11:53:42.911889] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xaed4b0) 00:23:12.970 [2024-11-28 11:53:42.911922] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:115 nsid:1 lba:6181 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:12.970 [2024-11-28 11:53:42.911949] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:115 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:12.970 [2024-11-28 11:53:42.926181] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xaed4b0) 00:23:12.970 [2024-11-28 11:53:42.926215] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:117 nsid:1 lba:486 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:12.970 [2024-11-28 11:53:42.926242] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:117 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:12.970 [2024-11-28 11:53:42.940617] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xaed4b0) 00:23:12.970 [2024-11-28 11:53:42.940817] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:119 nsid:1 lba:17833 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:12.970 [2024-11-28 11:53:42.940834] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:119 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:12.970 [2024-11-28 11:53:42.955247] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xaed4b0) 00:23:12.970 [2024-11-28 11:53:42.955439] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: READ sqid:1 cid:121 nsid:1 lba:12951 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:12.970 [2024-11-28 11:53:42.955454] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:121 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:12.970 [2024-11-28 11:53:42.969697] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xaed4b0) 00:23:12.970 [2024-11-28 11:53:42.969731] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:123 nsid:1 lba:22936 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:12.970 [2024-11-28 11:53:42.969758] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:123 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:12.970 [2024-11-28 11:53:42.984370] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xaed4b0) 00:23:12.970 [2024-11-28 11:53:42.984403] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:125 nsid:1 lba:7142 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:12.970 [2024-11-28 11:53:42.984430] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:125 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:12.970 [2024-11-28 11:53:43.004798] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xaed4b0) 00:23:12.970 [2024-11-28 11:53:43.004833] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:126 nsid:1 lba:6686 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:12.970 [2024-11-28 11:53:43.004860] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:126 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:12.970 [2024-11-28 11:53:43.019156] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xaed4b0) 00:23:12.970 [2024-11-28 11:53:43.019352] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:124 nsid:1 lba:10921 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:12.970 [2024-11-28 11:53:43.019369] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:124 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:12.970 [2024-11-28 11:53:43.033578] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xaed4b0) 00:23:12.970 [2024-11-28 11:53:43.033612] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:122 nsid:1 lba:6502 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:12.970 [2024-11-28 11:53:43.033639] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:122 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:12.970 17079.00 IOPS, 66.71 MiB/s [2024-11-28T11:53:43.096Z] [2024-11-28 11:53:43.049020] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xaed4b0) 00:23:12.970 [2024-11-28 11:53:43.049049] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:120 nsid:1 lba:1388 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:12.970 [2024-11-28 11:53:43.049075] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:120 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:12.970 [2024-11-28 11:53:43.063485] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on 
tqpair=(0xaed4b0) 00:23:12.970 [2024-11-28 11:53:43.063519] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:118 nsid:1 lba:20682 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:12.970 [2024-11-28 11:53:43.063546] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:118 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:12.970 [2024-11-28 11:53:43.077890] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xaed4b0) 00:23:12.970 [2024-11-28 11:53:43.077923] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:116 nsid:1 lba:3363 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:12.970 [2024-11-28 11:53:43.077951] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:116 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:13.229 [2024-11-28 11:53:43.092892] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xaed4b0) 00:23:13.230 [2024-11-28 11:53:43.093073] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:114 nsid:1 lba:3842 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:13.230 [2024-11-28 11:53:43.093089] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:114 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:13.230 [2024-11-28 11:53:43.108184] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xaed4b0) 00:23:13.230 [2024-11-28 11:53:43.108218] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:112 nsid:1 lba:23765 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:13.230 [2024-11-28 11:53:43.108245] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:112 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:13.230 [2024-11-28 11:53:43.123098] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xaed4b0) 00:23:13.230 [2024-11-28 11:53:43.123279] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:110 nsid:1 lba:23893 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:13.230 [2024-11-28 11:53:43.123308] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:110 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:13.230 [2024-11-28 11:53:43.137576] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xaed4b0) 00:23:13.230 [2024-11-28 11:53:43.137610] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:108 nsid:1 lba:4182 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:13.230 [2024-11-28 11:53:43.137637] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:108 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:13.230 [2024-11-28 11:53:43.152140] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xaed4b0) 00:23:13.230 [2024-11-28 11:53:43.152173] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:106 nsid:1 lba:3020 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:13.230 [2024-11-28 11:53:43.152201] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:106 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:13.230 [2024-11-28 11:53:43.166725] 
nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xaed4b0) 00:23:13.230 [2024-11-28 11:53:43.166760] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:104 nsid:1 lba:16878 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:13.230 [2024-11-28 11:53:43.166789] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:104 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:13.230 [2024-11-28 11:53:43.180972] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xaed4b0) 00:23:13.230 [2024-11-28 11:53:43.181140] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:102 nsid:1 lba:24666 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:13.230 [2024-11-28 11:53:43.181157] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:102 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:13.230 [2024-11-28 11:53:43.196936] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xaed4b0) 00:23:13.230 [2024-11-28 11:53:43.196971] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:100 nsid:1 lba:25356 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:13.230 [2024-11-28 11:53:43.196999] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:100 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:13.230 [2024-11-28 11:53:43.213414] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xaed4b0) 00:23:13.230 [2024-11-28 11:53:43.213611] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:98 nsid:1 lba:22805 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:13.230 [2024-11-28 11:53:43.213628] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:98 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:13.230 [2024-11-28 11:53:43.229010] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xaed4b0) 00:23:13.230 [2024-11-28 11:53:43.229191] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:96 nsid:1 lba:11252 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:13.230 [2024-11-28 11:53:43.229206] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:96 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:13.230 [2024-11-28 11:53:43.243426] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xaed4b0) 00:23:13.230 [2024-11-28 11:53:43.243456] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:94 nsid:1 lba:19809 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:13.230 [2024-11-28 11:53:43.243483] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:94 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:13.230 [2024-11-28 11:53:43.257628] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xaed4b0) 00:23:13.230 [2024-11-28 11:53:43.257661] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:92 nsid:1 lba:18039 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:13.230 [2024-11-28 11:53:43.257688] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:92 cdw0:0 sqhd:0001 p:0 
m:0 dnr:0 00:23:13.230 [2024-11-28 11:53:43.272470] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xaed4b0) 00:23:13.230 [2024-11-28 11:53:43.272504] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:90 nsid:1 lba:10784 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:13.230 [2024-11-28 11:53:43.272531] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:90 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:13.230 [2024-11-28 11:53:43.287458] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xaed4b0) 00:23:13.230 [2024-11-28 11:53:43.287492] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:88 nsid:1 lba:1942 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:13.230 [2024-11-28 11:53:43.287519] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:88 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:13.230 [2024-11-28 11:53:43.302076] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xaed4b0) 00:23:13.230 [2024-11-28 11:53:43.302109] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:86 nsid:1 lba:228 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:13.230 [2024-11-28 11:53:43.302135] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:86 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:13.230 [2024-11-28 11:53:43.316373] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xaed4b0) 00:23:13.230 [2024-11-28 11:53:43.316406] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:84 nsid:1 lba:13375 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:13.230 [2024-11-28 11:53:43.316432] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:84 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:13.230 [2024-11-28 11:53:43.330463] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xaed4b0) 00:23:13.230 [2024-11-28 11:53:43.330656] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:82 nsid:1 lba:17950 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:13.230 [2024-11-28 11:53:43.330673] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:82 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:13.230 [2024-11-28 11:53:43.344989] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xaed4b0) 00:23:13.230 [2024-11-28 11:53:43.345024] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:80 nsid:1 lba:5361 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:13.230 [2024-11-28 11:53:43.345051] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:80 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:13.490 [2024-11-28 11:53:43.360601] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xaed4b0) 00:23:13.490 [2024-11-28 11:53:43.360634] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:78 nsid:1 lba:15949 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:13.490 [2024-11-28 11:53:43.360661] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR 
(00/22) qid:1 cid:78 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:13.490 [2024-11-28 11:53:43.374958] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xaed4b0) 00:23:13.490 [2024-11-28 11:53:43.375137] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:76 nsid:1 lba:12408 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:13.490 [2024-11-28 11:53:43.375152] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:76 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:13.490 [2024-11-28 11:53:43.389267] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xaed4b0) 00:23:13.490 [2024-11-28 11:53:43.389326] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:74 nsid:1 lba:21040 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:13.490 [2024-11-28 11:53:43.389339] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:74 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:13.490 [2024-11-28 11:53:43.403658] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xaed4b0) 00:23:13.490 [2024-11-28 11:53:43.403691] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:72 nsid:1 lba:3474 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:13.490 [2024-11-28 11:53:43.403718] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:72 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:13.490 [2024-11-28 11:53:43.417905] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xaed4b0) 00:23:13.490 [2024-11-28 11:53:43.417949] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:70 nsid:1 lba:1525 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:13.490 [2024-11-28 11:53:43.417976] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:70 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:13.490 [2024-11-28 11:53:43.432124] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xaed4b0) 00:23:13.490 [2024-11-28 11:53:43.432158] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:68 nsid:1 lba:1960 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:13.490 [2024-11-28 11:53:43.432184] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:68 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:13.490 [2024-11-28 11:53:43.446233] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xaed4b0) 00:23:13.490 [2024-11-28 11:53:43.446266] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:66 nsid:1 lba:872 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:13.490 [2024-11-28 11:53:43.446293] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:66 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:13.490 [2024-11-28 11:53:43.460506] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xaed4b0) 00:23:13.490 [2024-11-28 11:53:43.460687] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:64 nsid:1 lba:19063 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:13.490 [2024-11-28 11:53:43.460704] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:64 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:13.490 [2024-11-28 11:53:43.475041] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xaed4b0) 00:23:13.490 [2024-11-28 11:53:43.475218] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:13093 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:13.490 [2024-11-28 11:53:43.475233] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:62 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:13.490 [2024-11-28 11:53:43.489384] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xaed4b0) 00:23:13.490 [2024-11-28 11:53:43.489560] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:22724 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:13.490 [2024-11-28 11:53:43.489576] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:60 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:13.490 [2024-11-28 11:53:43.503688] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xaed4b0) 00:23:13.490 [2024-11-28 11:53:43.503721] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:13511 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:13.490 [2024-11-28 11:53:43.503748] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:58 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:13.490 [2024-11-28 11:53:43.517883] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xaed4b0) 00:23:13.490 [2024-11-28 11:53:43.517916] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:12812 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:13.490 [2024-11-28 11:53:43.517942] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:56 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:13.490 [2024-11-28 11:53:43.532088] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xaed4b0) 00:23:13.490 [2024-11-28 11:53:43.532122] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:24088 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:13.490 [2024-11-28 11:53:43.532148] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:54 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:13.490 [2024-11-28 11:53:43.546257] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xaed4b0) 00:23:13.490 [2024-11-28 11:53:43.546290] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:1842 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:13.490 [2024-11-28 11:53:43.546346] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:52 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:13.490 [2024-11-28 11:53:43.560542] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xaed4b0) 00:23:13.491 [2024-11-28 11:53:43.560574] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:5439 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:13.491 [2024-11-28 11:53:43.560601] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:50 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:13.491 [2024-11-28 11:53:43.574822] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xaed4b0) 00:23:13.491 [2024-11-28 11:53:43.575018] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:18992 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:13.491 [2024-11-28 11:53:43.575033] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:48 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:13.491 [2024-11-28 11:53:43.589127] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xaed4b0) 00:23:13.491 [2024-11-28 11:53:43.589161] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:4598 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:13.491 [2024-11-28 11:53:43.589189] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:46 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:13.491 [2024-11-28 11:53:43.603401] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xaed4b0) 00:23:13.491 [2024-11-28 11:53:43.603433] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:6345 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:13.491 [2024-11-28 11:53:43.603460] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:44 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:13.750 [2024-11-28 11:53:43.618968] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xaed4b0) 00:23:13.750 [2024-11-28 11:53:43.619001] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:10674 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:13.750 [2024-11-28 11:53:43.619028] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:42 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:13.750 [2024-11-28 11:53:43.633433] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xaed4b0) 00:23:13.750 [2024-11-28 11:53:43.633467] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:18993 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:13.750 [2024-11-28 11:53:43.633493] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:40 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:13.750 [2024-11-28 11:53:43.647776] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xaed4b0) 00:23:13.750 [2024-11-28 11:53:43.647808] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:2777 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:13.750 [2024-11-28 11:53:43.647835] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:38 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:13.750 [2024-11-28 11:53:43.661941] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xaed4b0) 00:23:13.750 [2024-11-28 11:53:43.661975] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:20428 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:23:13.750 [2024-11-28 11:53:43.662002] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:36 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:13.750 [2024-11-28 11:53:43.676808] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xaed4b0) 00:23:13.750 [2024-11-28 11:53:43.676843] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:2800 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:13.750 [2024-11-28 11:53:43.676869] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:34 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:13.750 [2024-11-28 11:53:43.692954] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xaed4b0) 00:23:13.750 [2024-11-28 11:53:43.692988] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:2252 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:13.750 [2024-11-28 11:53:43.693015] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:32 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:13.750 [2024-11-28 11:53:43.708018] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xaed4b0) 00:23:13.751 [2024-11-28 11:53:43.708051] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:8140 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:13.751 [2024-11-28 11:53:43.708077] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:30 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:13.751 [2024-11-28 11:53:43.722748] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xaed4b0) 00:23:13.751 [2024-11-28 11:53:43.722952] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:7409 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:13.751 [2024-11-28 11:53:43.722969] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:28 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:13.751 [2024-11-28 11:53:43.737885] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xaed4b0) 00:23:13.751 [2024-11-28 11:53:43.738064] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:4028 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:13.751 [2024-11-28 11:53:43.738081] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:26 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:13.751 [2024-11-28 11:53:43.752628] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xaed4b0) 00:23:13.751 [2024-11-28 11:53:43.752806] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:14894 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:13.751 [2024-11-28 11:53:43.752822] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:24 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:13.751 [2024-11-28 11:53:43.767511] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xaed4b0) 00:23:13.751 [2024-11-28 11:53:43.767545] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:8389 len:1 
SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:13.751 [2024-11-28 11:53:43.767572] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:22 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:13.751 [2024-11-28 11:53:43.782016] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xaed4b0) 00:23:13.751 [2024-11-28 11:53:43.782050] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:24926 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:13.751 [2024-11-28 11:53:43.782077] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:20 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:13.751 [2024-11-28 11:53:43.796940] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xaed4b0) 00:23:13.751 [2024-11-28 11:53:43.796972] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:21191 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:13.751 [2024-11-28 11:53:43.796999] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:18 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:13.751 [2024-11-28 11:53:43.811339] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xaed4b0) 00:23:13.751 [2024-11-28 11:53:43.811385] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:16884 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:13.751 [2024-11-28 11:53:43.811412] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:16 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:13.751 [2024-11-28 11:53:43.825946] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xaed4b0) 00:23:13.751 [2024-11-28 11:53:43.825974] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:24753 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:13.751 [2024-11-28 11:53:43.825985] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:13.751 [2024-11-28 11:53:43.840978] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xaed4b0) 00:23:13.751 [2024-11-28 11:53:43.841015] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:10409 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:13.751 [2024-11-28 11:53:43.841028] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:13.751 [2024-11-28 11:53:43.856002] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xaed4b0) 00:23:13.751 [2024-11-28 11:53:43.856035] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:4274 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:13.751 [2024-11-28 11:53:43.856062] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:13.751 [2024-11-28 11:53:43.870526] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xaed4b0) 00:23:13.751 [2024-11-28 11:53:43.870721] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: 
READ sqid:1 cid:8 nsid:1 lba:20781 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:13.751 [2024-11-28 11:53:43.870739] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:14.011 [2024-11-28 11:53:43.886126] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xaed4b0) 00:23:14.011 [2024-11-28 11:53:43.886160] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:20328 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:14.011 [2024-11-28 11:53:43.886187] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:14.011 [2024-11-28 11:53:43.900545] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xaed4b0) 00:23:14.011 [2024-11-28 11:53:43.900728] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:9943 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:14.011 [2024-11-28 11:53:43.900744] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:14.011 [2024-11-28 11:53:43.915085] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xaed4b0) 00:23:14.011 [2024-11-28 11:53:43.915264] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:12815 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:14.011 [2024-11-28 11:53:43.915280] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:14.011 [2024-11-28 11:53:43.929664] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xaed4b0) 00:23:14.011 [2024-11-28 11:53:43.929842] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:4181 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:14.011 [2024-11-28 11:53:43.929858] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:14.011 [2024-11-28 11:53:43.951047] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xaed4b0) 00:23:14.011 [2024-11-28 11:53:43.951229] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:1971 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:14.011 [2024-11-28 11:53:43.951246] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:14.011 [2024-11-28 11:53:43.965837] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xaed4b0) 00:23:14.011 [2024-11-28 11:53:43.966016] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:4701 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:14.011 [2024-11-28 11:53:43.966032] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:14.011 [2024-11-28 11:53:43.980399] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xaed4b0) 00:23:14.011 [2024-11-28 11:53:43.980579] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:20831 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:14.011 [2024-11-28 11:53:43.980595] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:14.011 [2024-11-28 11:53:43.994845] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xaed4b0) 00:23:14.011 [2024-11-28 11:53:43.995040] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:1371 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:14.011 [2024-11-28 11:53:43.995056] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:14.011 [2024-11-28 11:53:44.009322] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xaed4b0) 00:23:14.011 [2024-11-28 11:53:44.009356] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:24909 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:14.011 [2024-11-28 11:53:44.009383] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:14.011 [2024-11-28 11:53:44.023618] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xaed4b0) 00:23:14.011 [2024-11-28 11:53:44.023653] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:13858 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:14.011 [2024-11-28 11:53:44.023679] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:14.011 [2024-11-28 11:53:44.037881] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xaed4b0) 00:23:14.011 [2024-11-28 11:53:44.037917] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:4373 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:14.011 [2024-11-28 11:53:44.037943] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:14.011 17205.00 IOPS, 67.21 MiB/s [2024-11-28T11:53:44.137Z] [2024-11-28 11:53:44.053701] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xaed4b0) 00:23:14.011 [2024-11-28 11:53:44.053734] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18843 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:14.011 [2024-11-28 11:53:44.053763] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:14.011 00:23:14.011 Latency(us) 00:23:14.011 [2024-11-28T11:53:44.137Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:23:14.011 Job: nvme0n1 (Core Mask 0x2, workload: randread, depth: 128, IO size: 4096) 00:23:14.011 nvme0n1 : 2.01 17193.55 67.16 0.00 0.00 7439.16 6791.91 27644.28 00:23:14.011 [2024-11-28T11:53:44.137Z] =================================================================================================================== 00:23:14.011 [2024-11-28T11:53:44.137Z] Total : 17193.55 67.16 0.00 0.00 7439.16 6791.91 27644.28 00:23:14.011 { 00:23:14.011 "results": [ 00:23:14.011 { 
00:23:14.012 "job": "nvme0n1", 00:23:14.012 "core_mask": "0x2", 00:23:14.012 "workload": "randread", 00:23:14.012 "status": "finished", 00:23:14.012 "queue_depth": 128, 00:23:14.012 "io_size": 4096, 00:23:14.012 "runtime": 2.008776, 00:23:14.012 "iops": 17193.554682055143, 00:23:14.012 "mibps": 67.1623229767779, 00:23:14.012 "io_failed": 0, 00:23:14.012 "io_timeout": 0, 00:23:14.012 "avg_latency_us": 7439.1552905626995, 00:23:14.012 "min_latency_us": 6791.912727272727, 00:23:14.012 "max_latency_us": 27644.276363636363 00:23:14.012 } 00:23:14.012 ], 00:23:14.012 "core_count": 1 00:23:14.012 } 00:23:14.012 11:53:44 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # get_transient_errcount nvme0n1 00:23:14.012 11:53:44 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@27 -- # bperf_rpc bdev_get_iostat -b nvme0n1 00:23:14.012 11:53:44 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@28 -- # jq -r '.bdevs[0] 00:23:14.012 | .driver_specific 00:23:14.012 | .nvme_error 00:23:14.012 | .status_code 00:23:14.012 | .command_transient_transport_error' 00:23:14.012 11:53:44 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_get_iostat -b nvme0n1 00:23:14.271 11:53:44 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # (( 135 > 0 )) 00:23:14.271 11:53:44 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@73 -- # killprocess 97271 00:23:14.271 11:53:44 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@954 -- # '[' -z 97271 ']' 00:23:14.271 11:53:44 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@958 -- # kill -0 97271 00:23:14.271 11:53:44 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@959 -- # uname 00:23:14.271 11:53:44 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:23:14.271 11:53:44 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 97271 00:23:14.530 killing process with pid 97271 00:23:14.530 Received shutdown signal, test time was about 2.000000 seconds 00:23:14.530 00:23:14.530 Latency(us) 00:23:14.530 [2024-11-28T11:53:44.656Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:23:14.530 [2024-11-28T11:53:44.656Z] =================================================================================================================== 00:23:14.530 [2024-11-28T11:53:44.656Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:23:14.530 11:53:44 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:23:14.530 11:53:44 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:23:14.530 11:53:44 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@972 -- # echo 'killing process with pid 97271' 00:23:14.530 11:53:44 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@973 -- # kill 97271 00:23:14.530 11:53:44 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@978 -- # wait 97271 00:23:14.530 11:53:44 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@109 -- # run_bperf_err randread 131072 16 00:23:14.530 11:53:44 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@54 -- # 
local rw bs qd 00:23:14.530 11:53:44 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # rw=randread 00:23:14.530 11:53:44 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # bs=131072 00:23:14.530 11:53:44 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # qd=16 00:23:14.530 11:53:44 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@58 -- # bperfpid=97324 00:23:14.530 11:53:44 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@57 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randread -o 131072 -t 2 -q 16 -z 00:23:14.530 11:53:44 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@60 -- # waitforlisten 97324 /var/tmp/bperf.sock 00:23:14.530 11:53:44 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@835 -- # '[' -z 97324 ']' 00:23:14.530 11:53:44 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bperf.sock 00:23:14.530 11:53:44 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@840 -- # local max_retries=100 00:23:14.530 11:53:44 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:23:14.530 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:23:14.530 11:53:44 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@844 -- # xtrace_disable 00:23:14.530 11:53:44 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:23:14.530 [2024-11-28 11:53:44.608716] Starting SPDK v25.01-pre git sha1 35cd3e84d / DPDK 24.11.0-rc4 initialization... 00:23:14.530 [2024-11-28 11:53:44.608940] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid97324 ] 00:23:14.530 I/O size of 131072 is greater than zero copy threshold (65536). 00:23:14.530 Zero copy mechanism will not be used. 00:23:14.789 [2024-11-28 11:53:44.727012] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.11.0-rc4 is used. There is no support for it in SPDK. Enabled only for validation. 
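[Editorial note] The trace above first checks the outcome of the finished 4 KiB randread run and then launches the next bdevperf instance for the 128 KiB, queue-depth-16 pass. The check works because the controller was created with bdev_nvme_set_options --nvme-error-stat, so bdev_get_iostat exposes per-status-code NVMe error counters; digest.sh pulls the COMMAND TRANSIENT TRANSPORT ERROR counter out of that JSON with jq and requires it to be non-zero (here 135 > 0). A minimal stand-alone sketch of that check follows, using the socket path, bdev name and jq filter visible in this run; the variable names are mine, not part of digest.sh.

  #!/usr/bin/env bash
  # Sketch only: count the transient-transport-error completions seen by a
  # bdevperf instance, the same way get_transient_errcount does in this log.
  rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py   # rpc client used in this run
  sock=/var/tmp/bperf.sock                              # bdevperf RPC socket
  bdev=nvme0n1

  # bdev_get_iostat reports NVMe error counters per status code because the
  # controller was set up with: bdev_nvme_set_options --nvme-error-stat
  errs=$("$rpc_py" -s "$sock" bdev_get_iostat -b "$bdev" \
          | jq -r '.bdevs[0]
                   | .driver_specific
                   | .nvme_error
                   | .status_code
                   | .command_transient_transport_error')

  # The digest-error test only passes if at least one corrupted data digest
  # surfaced as a transient transport error.
  (( errs > 0 )) && echo "detected $errs transient transport errors"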
00:23:14.789 [2024-11-28 11:53:44.750870] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:23:14.789 [2024-11-28 11:53:44.781900] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:23:14.789 [2024-11-28 11:53:44.833316] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:23:14.789 11:53:44 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:23:14.790 11:53:44 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@868 -- # return 0 00:23:14.790 11:53:44 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@61 -- # bperf_rpc bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:23:14.790 11:53:44 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:23:15.049 11:53:45 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@63 -- # rpc_cmd accel_error_inject_error -o crc32c -t disable 00:23:15.049 11:53:45 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:15.049 11:53:45 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:23:15.049 11:53:45 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:15.049 11:53:45 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@64 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:23:15.049 11:53:45 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:23:15.310 nvme0n1 00:23:15.310 11:53:45 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@67 -- # rpc_cmd accel_error_inject_error -o crc32c -t corrupt -i 32 00:23:15.310 11:53:45 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:15.310 11:53:45 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:23:15.310 11:53:45 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:15.310 11:53:45 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@69 -- # bperf_py perform_tests 00:23:15.310 11:53:45 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@19 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:23:15.571 I/O size of 131072 is greater than zero copy threshold (65536). 00:23:15.571 Zero copy mechanism will not be used. 00:23:15.571 Running I/O for 2 seconds... 
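[Editorial note] The xtrace above is the per-run setup for this 128 KiB error-injection pass: enable NVMe error statistics and unlimited bdev retries on the bperf app, clear any previously armed crc32c error injection, attach the controller over TCP with data digest (--ddgst) enabled, re-arm the injection to corrupt every 32nd crc32c operation, and finally start the configured 2-second randread job with perform_tests. A condensed sketch of that sequence is below; it is not the full digest.sh logic, and the two accel_error_inject_error calls are shown through the autotest rpc_cmd wrapper exactly as traced, because this log does not show which RPC socket that wrapper targets.

  # Condensed from the trace above (nvmf_digest_error, 128 KiB randread pass).
  rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
  bperf_sock=/var/tmp/bperf.sock

  # Per-status-code NVMe error counters, and retry forever inside bdev_nvme so
  # injected digest errors are counted instead of failing the I/O job.
  "$rpc_py" -s "$bperf_sock" bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1

  # rpc_cmd is the wrapper sourced from test/common/autotest_common.sh in the
  # test scripts; start from a clean state with no crc32c corruption armed.
  rpc_cmd accel_error_inject_error -o crc32c -t disable

  # Data digest (--ddgst) is what the injected crc32c corruption will break.
  "$rpc_py" -s "$bperf_sock" bdev_nvme_attach_controller --ddgst -t tcp \
      -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0

  # Corrupt every 32nd crc32c operation, then run the bdevperf job that was
  # started with: bdevperf -m 2 -r /var/tmp/bperf.sock -w randread -o 131072 -t 2 -q 16 -z
  rpc_cmd accel_error_inject_error -o crc32c -t corrupt -i 32
  /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s "$bperf_sock" perform_tests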
00:23:15.571 [2024-11-28 11:53:45.500652] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1868bb0) 00:23:15.571 [2024-11-28 11:53:45.500743] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:17600 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:15.571 [2024-11-28 11:53:45.500757] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:23:15.571 [2024-11-28 11:53:45.505185] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1868bb0) 00:23:15.572 [2024-11-28 11:53:45.505220] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:21760 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:15.572 [2024-11-28 11:53:45.505232] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:23:15.572 [2024-11-28 11:53:45.509515] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1868bb0) 00:23:15.572 [2024-11-28 11:53:45.509548] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:23808 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:15.572 [2024-11-28 11:53:45.509560] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:23:15.572 [2024-11-28 11:53:45.513863] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1868bb0) 00:23:15.572 [2024-11-28 11:53:45.513896] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:12864 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:15.572 [2024-11-28 11:53:45.513907] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:23:15.572 [2024-11-28 11:53:45.518132] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1868bb0) 00:23:15.572 [2024-11-28 11:53:45.518166] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:16768 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:15.572 [2024-11-28 11:53:45.518179] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:23:15.572 [2024-11-28 11:53:45.522539] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1868bb0) 00:23:15.572 [2024-11-28 11:53:45.522574] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:6368 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:15.572 [2024-11-28 11:53:45.522600] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:23:15.572 [2024-11-28 11:53:45.526878] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1868bb0) 00:23:15.572 [2024-11-28 11:53:45.526911] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:15584 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:15.572 [2024-11-28 11:53:45.526923] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) 
qid:1 cid:6 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:23:15.572 [2024-11-28 11:53:45.531216] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1868bb0) 00:23:15.572 [2024-11-28 11:53:45.531251] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:16640 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:15.572 [2024-11-28 11:53:45.531262] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:23:15.572 [2024-11-28 11:53:45.535514] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1868bb0) 00:23:15.572 [2024-11-28 11:53:45.535547] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:24512 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:15.572 [2024-11-28 11:53:45.535573] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:23:15.572 [2024-11-28 11:53:45.539867] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1868bb0) 00:23:15.572 [2024-11-28 11:53:45.539901] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:608 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:15.572 [2024-11-28 11:53:45.539913] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:23:15.572 [2024-11-28 11:53:45.544148] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1868bb0) 00:23:15.572 [2024-11-28 11:53:45.544181] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:3168 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:15.572 [2024-11-28 11:53:45.544192] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:23:15.572 [2024-11-28 11:53:45.548443] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1868bb0) 00:23:15.572 [2024-11-28 11:53:45.548475] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:2560 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:15.572 [2024-11-28 11:53:45.548502] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:23:15.572 [2024-11-28 11:53:45.552728] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1868bb0) 00:23:15.572 [2024-11-28 11:53:45.552762] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:16928 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:15.572 [2024-11-28 11:53:45.552773] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:23:15.572 [2024-11-28 11:53:45.557044] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1868bb0) 00:23:15.572 [2024-11-28 11:53:45.557077] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:4192 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:15.572 [2024-11-28 11:53:45.557088] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:23:15.572 [2024-11-28 11:53:45.561395] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1868bb0) 00:23:15.572 [2024-11-28 11:53:45.561427] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:7712 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:15.572 [2024-11-28 11:53:45.561438] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:23:15.572 [2024-11-28 11:53:45.565687] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1868bb0) 00:23:15.572 [2024-11-28 11:53:45.565721] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:14144 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:15.572 [2024-11-28 11:53:45.565732] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:23:15.572 [2024-11-28 11:53:45.569984] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1868bb0) 00:23:15.572 [2024-11-28 11:53:45.570018] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:10784 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:15.572 [2024-11-28 11:53:45.570028] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:23:15.572 [2024-11-28 11:53:45.574198] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1868bb0) 00:23:15.572 [2024-11-28 11:53:45.574231] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:13248 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:15.572 [2024-11-28 11:53:45.574243] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:23:15.572 [2024-11-28 11:53:45.578496] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1868bb0) 00:23:15.572 [2024-11-28 11:53:45.578545] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:15968 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:15.572 [2024-11-28 11:53:45.578571] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:23:15.572 [2024-11-28 11:53:45.582862] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1868bb0) 00:23:15.572 [2024-11-28 11:53:45.582895] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:14464 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:15.572 [2024-11-28 11:53:45.582936] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:23:15.572 [2024-11-28 11:53:45.587209] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1868bb0) 00:23:15.572 [2024-11-28 11:53:45.587243] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:17216 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:15.572 [2024-11-28 
11:53:45.587254] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:23:15.572 [2024-11-28 11:53:45.591541] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1868bb0) 00:23:15.572 [2024-11-28 11:53:45.591573] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:3968 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:15.572 [2024-11-28 11:53:45.591584] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:23:15.572 [2024-11-28 11:53:45.595795] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1868bb0) 00:23:15.572 [2024-11-28 11:53:45.595827] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:12192 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:15.572 [2024-11-28 11:53:45.595838] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:23:15.572 [2024-11-28 11:53:45.600099] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1868bb0) 00:23:15.572 [2024-11-28 11:53:45.600133] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:2048 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:15.572 [2024-11-28 11:53:45.600144] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:23:15.572 [2024-11-28 11:53:45.604458] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1868bb0) 00:23:15.572 [2024-11-28 11:53:45.604490] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:13024 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:15.572 [2024-11-28 11:53:45.604517] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:23:15.572 [2024-11-28 11:53:45.608817] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1868bb0) 00:23:15.572 [2024-11-28 11:53:45.608850] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:23680 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:15.572 [2024-11-28 11:53:45.608861] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:23:15.572 [2024-11-28 11:53:45.613143] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1868bb0) 00:23:15.572 [2024-11-28 11:53:45.613176] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:23136 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:15.572 [2024-11-28 11:53:45.613187] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:23:15.572 [2024-11-28 11:53:45.617453] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1868bb0) 00:23:15.572 [2024-11-28 11:53:45.617486] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:3424 len:32 SGL TRANSPORT DATA 
BLOCK TRANSPORT 0x0 00:23:15.572 [2024-11-28 11:53:45.617498] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:23:15.573 [2024-11-28 11:53:45.621775] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1868bb0) 00:23:15.573 [2024-11-28 11:53:45.621807] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:1088 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:15.573 [2024-11-28 11:53:45.621818] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:23:15.573 [2024-11-28 11:53:45.626031] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1868bb0) 00:23:15.573 [2024-11-28 11:53:45.626063] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:18880 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:15.573 [2024-11-28 11:53:45.626074] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:23:15.573 [2024-11-28 11:53:45.630339] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1868bb0) 00:23:15.573 [2024-11-28 11:53:45.630372] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:18688 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:15.573 [2024-11-28 11:53:45.630382] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:23:15.573 [2024-11-28 11:53:45.634622] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1868bb0) 00:23:15.573 [2024-11-28 11:53:45.634656] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:704 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:15.573 [2024-11-28 11:53:45.634668] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:23:15.573 [2024-11-28 11:53:45.638937] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1868bb0) 00:23:15.573 [2024-11-28 11:53:45.638970] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:5440 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:15.573 [2024-11-28 11:53:45.638981] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:23:15.573 [2024-11-28 11:53:45.643268] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1868bb0) 00:23:15.573 [2024-11-28 11:53:45.643312] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:1088 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:15.573 [2024-11-28 11:53:45.643340] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:23:15.573 [2024-11-28 11:53:45.647540] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1868bb0) 00:23:15.573 [2024-11-28 11:53:45.647572] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 
cid:2 nsid:1 lba:22496 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:15.573 [2024-11-28 11:53:45.647583] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:23:15.573 [2024-11-28 11:53:45.651762] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1868bb0) 00:23:15.573 [2024-11-28 11:53:45.651795] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:9216 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:15.573 [2024-11-28 11:53:45.651806] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:23:15.573 [2024-11-28 11:53:45.656024] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1868bb0) 00:23:15.573 [2024-11-28 11:53:45.656057] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:12512 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:15.573 [2024-11-28 11:53:45.656068] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:23:15.573 [2024-11-28 11:53:45.660322] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1868bb0) 00:23:15.573 [2024-11-28 11:53:45.660355] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:25280 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:15.573 [2024-11-28 11:53:45.660381] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:23:15.573 [2024-11-28 11:53:45.664561] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1868bb0) 00:23:15.573 [2024-11-28 11:53:45.664594] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:16800 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:15.573 [2024-11-28 11:53:45.664620] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:23:15.573 [2024-11-28 11:53:45.668970] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1868bb0) 00:23:15.573 [2024-11-28 11:53:45.669003] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:7936 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:15.573 [2024-11-28 11:53:45.669015] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:23:15.573 [2024-11-28 11:53:45.673260] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1868bb0) 00:23:15.573 [2024-11-28 11:53:45.673307] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:3264 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:15.573 [2024-11-28 11:53:45.673337] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:23:15.573 [2024-11-28 11:53:45.677488] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1868bb0) 00:23:15.573 [2024-11-28 11:53:45.677521] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:1600 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:15.573 [2024-11-28 11:53:45.677532] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:23:15.573 [2024-11-28 11:53:45.681672] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1868bb0) 00:23:15.573 [2024-11-28 11:53:45.681706] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:17056 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:15.573 [2024-11-28 11:53:45.681716] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:23:15.573 [2024-11-28 11:53:45.685860] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1868bb0) 00:23:15.573 [2024-11-28 11:53:45.685893] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:15328 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:15.573 [2024-11-28 11:53:45.685904] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:23:15.573 [2024-11-28 11:53:45.690220] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1868bb0) 00:23:15.573 [2024-11-28 11:53:45.690254] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:7264 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:15.573 [2024-11-28 11:53:45.690282] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:23:15.834 [2024-11-28 11:53:45.695025] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1868bb0) 00:23:15.834 [2024-11-28 11:53:45.695058] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:14784 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:15.834 [2024-11-28 11:53:45.695085] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:23:15.834 [2024-11-28 11:53:45.699336] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1868bb0) 00:23:15.834 [2024-11-28 11:53:45.699381] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:20768 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:15.834 [2024-11-28 11:53:45.699411] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:23:15.834 [2024-11-28 11:53:45.703818] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1868bb0) 00:23:15.834 [2024-11-28 11:53:45.703851] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:20544 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:15.834 [2024-11-28 11:53:45.703862] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:23:15.834 [2024-11-28 11:53:45.708104] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1868bb0) 
00:23:15.834 [2024-11-28 11:53:45.708136] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:1280 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:15.834 [2024-11-28 11:53:45.708147] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:23:15.834 [2024-11-28 11:53:45.712377] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1868bb0) 00:23:15.834 [2024-11-28 11:53:45.712410] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:8768 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:15.834 [2024-11-28 11:53:45.712436] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:23:15.835 [2024-11-28 11:53:45.716698] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1868bb0) 00:23:15.835 [2024-11-28 11:53:45.716855] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:1504 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:15.835 [2024-11-28 11:53:45.716871] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:23:15.835 [2024-11-28 11:53:45.721208] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1868bb0) 00:23:15.835 [2024-11-28 11:53:45.721244] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:12384 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:15.835 [2024-11-28 11:53:45.721255] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:23:15.835 [2024-11-28 11:53:45.725480] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1868bb0) 00:23:15.835 [2024-11-28 11:53:45.725513] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:3648 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:15.835 [2024-11-28 11:53:45.725524] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:23:15.835 [2024-11-28 11:53:45.729747] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1868bb0) 00:23:15.835 [2024-11-28 11:53:45.729779] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:2720 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:15.835 [2024-11-28 11:53:45.729790] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:23:15.835 [2024-11-28 11:53:45.734042] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1868bb0) 00:23:15.835 [2024-11-28 11:53:45.734074] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:10016 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:15.835 [2024-11-28 11:53:45.734085] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:23:15.835 [2024-11-28 11:53:45.738357] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: 
data digest error on tqpair=(0x1868bb0) 00:23:15.835 [2024-11-28 11:53:45.738391] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:4640 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:15.835 [2024-11-28 11:53:45.738401] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:23:15.835 [2024-11-28 11:53:45.742603] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1868bb0) 00:23:15.835 [2024-11-28 11:53:45.742638] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:16768 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:15.835 [2024-11-28 11:53:45.742649] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:23:15.835 [2024-11-28 11:53:45.746958] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1868bb0) 00:23:15.835 [2024-11-28 11:53:45.746991] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:9152 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:15.835 [2024-11-28 11:53:45.747003] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:23:15.835 [2024-11-28 11:53:45.751267] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1868bb0) 00:23:15.835 [2024-11-28 11:53:45.751311] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:7488 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:15.835 [2024-11-28 11:53:45.751340] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:23:15.835 [2024-11-28 11:53:45.755551] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1868bb0) 00:23:15.835 [2024-11-28 11:53:45.755582] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:16704 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:15.835 [2024-11-28 11:53:45.755593] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:23:15.835 [2024-11-28 11:53:45.759770] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1868bb0) 00:23:15.835 [2024-11-28 11:53:45.759803] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:1568 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:15.835 [2024-11-28 11:53:45.759814] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:23:15.835 [2024-11-28 11:53:45.764025] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1868bb0) 00:23:15.835 [2024-11-28 11:53:45.764058] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:18528 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:15.835 [2024-11-28 11:53:45.764069] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:23:15.835 [2024-11-28 11:53:45.768274] 
nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1868bb0) 00:23:15.835 [2024-11-28 11:53:45.768334] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:19136 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:15.835 [2024-11-28 11:53:45.768361] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:23:15.835 [2024-11-28 11:53:45.772553] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1868bb0) 00:23:15.835 [2024-11-28 11:53:45.772587] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:11232 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:15.835 [2024-11-28 11:53:45.772613] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:23:15.835 [2024-11-28 11:53:45.776898] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1868bb0) 00:23:15.835 [2024-11-28 11:53:45.776931] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:448 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:15.835 [2024-11-28 11:53:45.776942] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:23:15.835 [2024-11-28 11:53:45.781189] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1868bb0) 00:23:15.835 [2024-11-28 11:53:45.781222] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:512 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:15.835 [2024-11-28 11:53:45.781233] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:23:15.835 [2024-11-28 11:53:45.785471] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1868bb0) 00:23:15.835 [2024-11-28 11:53:45.785503] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:1056 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:15.835 [2024-11-28 11:53:45.785514] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:23:15.835 [2024-11-28 11:53:45.789690] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1868bb0) 00:23:15.835 [2024-11-28 11:53:45.789724] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:15840 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:15.835 [2024-11-28 11:53:45.789734] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:23:15.835 [2024-11-28 11:53:45.793966] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1868bb0) 00:23:15.835 [2024-11-28 11:53:45.793999] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:16256 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:15.835 [2024-11-28 11:53:45.794010] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 
00:23:15.835 [2024-11-28 11:53:45.798238] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1868bb0) 00:23:15.835 [2024-11-28 11:53:45.798270] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:8960 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:15.835 [2024-11-28 11:53:45.798281] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:23:15.835 [2024-11-28 11:53:45.802572] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1868bb0) 00:23:15.835 [2024-11-28 11:53:45.802607] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:6112 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:15.835 [2024-11-28 11:53:45.802619] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:23:15.835 [2024-11-28 11:53:45.806925] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1868bb0) 00:23:15.835 [2024-11-28 11:53:45.806964] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:24928 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:15.835 [2024-11-28 11:53:45.806977] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:23:15.835 [2024-11-28 11:53:45.811343] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1868bb0) 00:23:15.835 [2024-11-28 11:53:45.811402] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:23840 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:15.835 [2024-11-28 11:53:45.811432] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:23:15.835 [2024-11-28 11:53:45.815686] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1868bb0) 00:23:15.835 [2024-11-28 11:53:45.815718] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:14496 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:15.835 [2024-11-28 11:53:45.815729] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:23:15.835 [2024-11-28 11:53:45.819850] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1868bb0) 00:23:15.835 [2024-11-28 11:53:45.819882] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:24128 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:15.835 [2024-11-28 11:53:45.819893] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:23:15.835 [2024-11-28 11:53:45.824070] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1868bb0) 00:23:15.835 [2024-11-28 11:53:45.824103] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:18912 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:15.835 [2024-11-28 11:53:45.824115] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR 
(00/22) qid:1 cid:11 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:23:15.835 [2024-11-28 11:53:45.828288] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1868bb0) 00:23:15.835 [2024-11-28 11:53:45.828329] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:24064 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:15.835 [2024-11-28 11:53:45.828356] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:23:15.836 [2024-11-28 11:53:45.832493] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1868bb0) 00:23:15.836 [2024-11-28 11:53:45.832526] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:10688 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:15.836 [2024-11-28 11:53:45.832552] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:23:15.836 [2024-11-28 11:53:45.836854] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1868bb0) 00:23:15.836 [2024-11-28 11:53:45.836888] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:14784 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:15.836 [2024-11-28 11:53:45.836899] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:23:15.836 [2024-11-28 11:53:45.841180] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1868bb0) 00:23:15.836 [2024-11-28 11:53:45.841213] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:17824 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:15.836 [2024-11-28 11:53:45.841225] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:23:15.836 [2024-11-28 11:53:45.845495] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1868bb0) 00:23:15.836 [2024-11-28 11:53:45.845529] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:18240 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:15.836 [2024-11-28 11:53:45.845539] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:23:15.836 [2024-11-28 11:53:45.849736] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1868bb0) 00:23:15.836 [2024-11-28 11:53:45.849769] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:800 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:15.836 [2024-11-28 11:53:45.849780] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:23:15.836 [2024-11-28 11:53:45.854009] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1868bb0) 00:23:15.836 [2024-11-28 11:53:45.854042] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:4800 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:15.836 [2024-11-28 11:53:45.854053] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:23:15.836 [2024-11-28 11:53:45.858266] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1868bb0) 00:23:15.836 [2024-11-28 11:53:45.858310] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:7328 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:15.836 [2024-11-28 11:53:45.858339] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:23:15.836 [2024-11-28 11:53:45.862572] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1868bb0) 00:23:15.836 [2024-11-28 11:53:45.862607] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:21696 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:15.836 [2024-11-28 11:53:45.862618] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:23:15.836 [2024-11-28 11:53:45.866969] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1868bb0) 00:23:15.836 [2024-11-28 11:53:45.867001] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:16384 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:15.836 [2024-11-28 11:53:45.867011] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:23:15.836 [2024-11-28 11:53:45.871311] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1868bb0) 00:23:15.836 [2024-11-28 11:53:45.871369] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:13312 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:15.836 [2024-11-28 11:53:45.871399] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:23:15.836 [2024-11-28 11:53:45.875571] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1868bb0) 00:23:15.836 [2024-11-28 11:53:45.875604] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:10400 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:15.836 [2024-11-28 11:53:45.875614] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:23:15.836 [2024-11-28 11:53:45.879829] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1868bb0) 00:23:15.836 [2024-11-28 11:53:45.879862] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:13216 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:15.836 [2024-11-28 11:53:45.879873] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:23:15.836 [2024-11-28 11:53:45.884067] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1868bb0) 00:23:15.836 [2024-11-28 11:53:45.884100] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:15616 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:15.836 
[2024-11-28 11:53:45.884111] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:23:15.836 [2024-11-28 11:53:45.888281] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1868bb0) 00:23:15.836 [2024-11-28 11:53:45.888341] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:20864 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:15.836 [2024-11-28 11:53:45.888368] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:23:15.836 [2024-11-28 11:53:45.892582] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1868bb0) 00:23:15.836 [2024-11-28 11:53:45.892776] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:20864 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:15.836 [2024-11-28 11:53:45.892792] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:23:15.836 [2024-11-28 11:53:45.897110] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1868bb0) 00:23:15.836 [2024-11-28 11:53:45.897145] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:17280 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:15.836 [2024-11-28 11:53:45.897157] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:23:15.836 [2024-11-28 11:53:45.901461] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1868bb0) 00:23:15.836 [2024-11-28 11:53:45.901494] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:19840 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:15.836 [2024-11-28 11:53:45.901505] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:23:15.836 [2024-11-28 11:53:45.905656] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1868bb0) 00:23:15.836 [2024-11-28 11:53:45.905689] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:12800 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:15.836 [2024-11-28 11:53:45.905699] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:23:15.836 [2024-11-28 11:53:45.909943] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1868bb0) 00:23:15.836 [2024-11-28 11:53:45.909977] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:24960 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:15.836 [2024-11-28 11:53:45.909988] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:23:15.836 [2024-11-28 11:53:45.914202] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1868bb0) 00:23:15.836 [2024-11-28 11:53:45.914235] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:14752 
len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:15.836 [2024-11-28 11:53:45.914246] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:23:15.836 [2024-11-28 11:53:45.918451] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1868bb0) 00:23:15.836 [2024-11-28 11:53:45.918506] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:12768 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:15.836 [2024-11-28 11:53:45.918519] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:23:15.836 [2024-11-28 11:53:45.922754] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1868bb0) 00:23:15.836 [2024-11-28 11:53:45.922789] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:448 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:15.836 [2024-11-28 11:53:45.922830] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:23:15.836 [2024-11-28 11:53:45.927145] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1868bb0) 00:23:15.836 [2024-11-28 11:53:45.927178] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:6208 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:15.836 [2024-11-28 11:53:45.927189] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:23:15.836 [2024-11-28 11:53:45.931495] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1868bb0) 00:23:15.836 [2024-11-28 11:53:45.931527] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:10688 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:15.836 [2024-11-28 11:53:45.931538] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:23:15.836 [2024-11-28 11:53:45.935807] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1868bb0) 00:23:15.836 [2024-11-28 11:53:45.935840] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:17792 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:15.836 [2024-11-28 11:53:45.935866] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:23:15.836 [2024-11-28 11:53:45.940086] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1868bb0) 00:23:15.836 [2024-11-28 11:53:45.940119] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:8032 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:15.836 [2024-11-28 11:53:45.940129] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:23:15.836 [2024-11-28 11:53:45.944284] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1868bb0) 00:23:15.836 [2024-11-28 11:53:45.944325] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: READ sqid:1 cid:7 nsid:1 lba:22528 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:15.836 [2024-11-28 11:53:45.944352] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:23:15.837 [2024-11-28 11:53:45.948511] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1868bb0) 00:23:15.837 [2024-11-28 11:53:45.948543] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:12544 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:15.837 [2024-11-28 11:53:45.948569] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:23:15.837 [2024-11-28 11:53:45.952935] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1868bb0) 00:23:15.837 [2024-11-28 11:53:45.952970] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:21824 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:15.837 [2024-11-28 11:53:45.952996] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:23:16.097 [2024-11-28 11:53:45.957715] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1868bb0) 00:23:16.097 [2024-11-28 11:53:45.957750] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:1856 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:16.097 [2024-11-28 11:53:45.957777] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:23:16.097 [2024-11-28 11:53:45.962046] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1868bb0) 00:23:16.097 [2024-11-28 11:53:45.962080] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:12384 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:16.097 [2024-11-28 11:53:45.962107] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:23:16.097 [2024-11-28 11:53:45.966498] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1868bb0) 00:23:16.097 [2024-11-28 11:53:45.966548] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:7904 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:16.097 [2024-11-28 11:53:45.966574] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:23:16.097 [2024-11-28 11:53:45.970931] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1868bb0) 00:23:16.097 [2024-11-28 11:53:45.970963] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:12256 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:16.097 [2024-11-28 11:53:45.970990] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:23:16.097 [2024-11-28 11:53:45.975243] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1868bb0) 00:23:16.097 [2024-11-28 
11:53:45.975275] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:8096 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:16.097 [2024-11-28 11:53:45.975286] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:23:16.097 [2024-11-28 11:53:45.979552] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1868bb0) 00:23:16.097 [2024-11-28 11:53:45.979585] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:1920 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:16.097 [2024-11-28 11:53:45.979595] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:23:16.097 [2024-11-28 11:53:45.983801] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1868bb0) 00:23:16.097 [2024-11-28 11:53:45.983834] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:14592 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:16.097 [2024-11-28 11:53:45.983845] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:23:16.097 [2024-11-28 11:53:45.988043] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1868bb0) 00:23:16.098 [2024-11-28 11:53:45.988075] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:6752 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:16.098 [2024-11-28 11:53:45.988086] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:23:16.098 [2024-11-28 11:53:45.992265] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1868bb0) 00:23:16.098 [2024-11-28 11:53:45.992323] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:20768 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:16.098 [2024-11-28 11:53:45.992353] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:23:16.098 [2024-11-28 11:53:45.996531] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1868bb0) 00:23:16.098 [2024-11-28 11:53:45.996563] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:8512 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:16.098 [2024-11-28 11:53:45.996589] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:23:16.098 [2024-11-28 11:53:46.000887] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1868bb0) 00:23:16.098 [2024-11-28 11:53:46.000920] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:2688 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:16.098 [2024-11-28 11:53:46.000931] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:23:16.098 [2024-11-28 11:53:46.005155] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on 
tqpair=(0x1868bb0) 00:23:16.098 [2024-11-28 11:53:46.005188] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:23520 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:16.098 [2024-11-28 11:53:46.005200] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:23:16.098 [2024-11-28 11:53:46.009429] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1868bb0) 00:23:16.098 [2024-11-28 11:53:46.009461] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:8736 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:16.098 [2024-11-28 11:53:46.009472] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:23:16.098 [2024-11-28 11:53:46.013747] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1868bb0) 00:23:16.098 [2024-11-28 11:53:46.013779] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:17280 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:16.098 [2024-11-28 11:53:46.013790] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:23:16.098 [2024-11-28 11:53:46.018027] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1868bb0) 00:23:16.098 [2024-11-28 11:53:46.018059] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:20768 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:16.098 [2024-11-28 11:53:46.018070] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:23:16.098 [2024-11-28 11:53:46.022260] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1868bb0) 00:23:16.098 [2024-11-28 11:53:46.022306] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:14560 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:16.098 [2024-11-28 11:53:46.022335] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:23:16.098 [2024-11-28 11:53:46.026534] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1868bb0) 00:23:16.098 [2024-11-28 11:53:46.026567] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:12064 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:16.098 [2024-11-28 11:53:46.026593] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:23:16.098 [2024-11-28 11:53:46.030888] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1868bb0) 00:23:16.098 [2024-11-28 11:53:46.030920] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:25152 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:16.098 [2024-11-28 11:53:46.030931] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:23:16.098 [2024-11-28 11:53:46.035146] 
nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1868bb0) 00:23:16.098 [2024-11-28 11:53:46.035178] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:24608 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:16.098 [2024-11-28 11:53:46.035189] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:23:16.098 [2024-11-28 11:53:46.039548] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1868bb0) 00:23:16.098 [2024-11-28 11:53:46.039581] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:6368 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:16.098 [2024-11-28 11:53:46.039608] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:23:16.098 [2024-11-28 11:53:46.043823] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1868bb0) 00:23:16.098 [2024-11-28 11:53:46.043856] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:5792 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:16.098 [2024-11-28 11:53:46.043867] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:23:16.098 [2024-11-28 11:53:46.048043] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1868bb0) 00:23:16.098 [2024-11-28 11:53:46.048076] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:7040 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:16.098 [2024-11-28 11:53:46.048087] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:23:16.098 [2024-11-28 11:53:46.052308] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1868bb0) 00:23:16.098 [2024-11-28 11:53:46.052341] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:12544 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:16.098 [2024-11-28 11:53:46.052366] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:23:16.098 [2024-11-28 11:53:46.056745] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1868bb0) 00:23:16.098 [2024-11-28 11:53:46.056927] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:16512 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:16.098 [2024-11-28 11:53:46.056943] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:23:16.098 [2024-11-28 11:53:46.061302] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1868bb0) 00:23:16.098 [2024-11-28 11:53:46.061365] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:23776 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:16.098 [2024-11-28 11:53:46.061393] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0042 p:0 m:0 
dnr:0 00:23:16.098 [2024-11-28 11:53:46.065761] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1868bb0) 00:23:16.098 [2024-11-28 11:53:46.065795] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:10016 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:16.098 [2024-11-28 11:53:46.065821] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:23:16.098 [2024-11-28 11:53:46.070386] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1868bb0) 00:23:16.098 [2024-11-28 11:53:46.070420] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:1856 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:16.098 [2024-11-28 11:53:46.070447] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:23:16.098 [2024-11-28 11:53:46.074998] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1868bb0) 00:23:16.098 [2024-11-28 11:53:46.075049] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:16032 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:16.098 [2024-11-28 11:53:46.075062] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:23:16.098 [2024-11-28 11:53:46.079707] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1868bb0) 00:23:16.098 [2024-11-28 11:53:46.079864] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:6240 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:16.098 [2024-11-28 11:53:46.079880] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:23:16.098 [2024-11-28 11:53:46.084252] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1868bb0) 00:23:16.098 [2024-11-28 11:53:46.084286] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:16160 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:16.098 [2024-11-28 11:53:46.084306] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:23:16.098 [2024-11-28 11:53:46.088818] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1868bb0) 00:23:16.098 [2024-11-28 11:53:46.088852] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:10272 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:16.098 [2024-11-28 11:53:46.088864] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:23:16.098 [2024-11-28 11:53:46.093212] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1868bb0) 00:23:16.098 [2024-11-28 11:53:46.093245] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:14400 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:16.098 [2024-11-28 11:53:46.093258] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR 
(00/22) qid:1 cid:9 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:23:16.098 [2024-11-28 11:53:46.097664] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1868bb0) 00:23:16.098 [2024-11-28 11:53:46.097697] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:19840 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:16.098 [2024-11-28 11:53:46.097709] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:23:16.098 [2024-11-28 11:53:46.101949] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1868bb0) 00:23:16.098 [2024-11-28 11:53:46.101982] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:15168 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:16.098 [2024-11-28 11:53:46.101993] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:23:16.098 [2024-11-28 11:53:46.106216] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1868bb0) 00:23:16.099 [2024-11-28 11:53:46.106251] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:832 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:16.099 [2024-11-28 11:53:46.106263] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:23:16.099 [2024-11-28 11:53:46.110729] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1868bb0) 00:23:16.099 [2024-11-28 11:53:46.110964] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:5248 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:16.099 [2024-11-28 11:53:46.110980] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:23:16.099 [2024-11-28 11:53:46.115285] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1868bb0) 00:23:16.099 [2024-11-28 11:53:46.115331] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:9952 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:16.099 [2024-11-28 11:53:46.115360] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:23:16.099 [2024-11-28 11:53:46.119624] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1868bb0) 00:23:16.099 [2024-11-28 11:53:46.119657] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:3488 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:16.099 [2024-11-28 11:53:46.119668] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:23:16.099 [2024-11-28 11:53:46.124048] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1868bb0) 00:23:16.099 [2024-11-28 11:53:46.124081] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:24064 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:16.099 [2024-11-28 11:53:46.124092] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:23:16.099 [2024-11-28 11:53:46.128286] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1868bb0) 00:23:16.099 [2024-11-28 11:53:46.128328] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:8192 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:16.099 [2024-11-28 11:53:46.128340] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:23:16.099 [2024-11-28 11:53:46.132536] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1868bb0) 00:23:16.099 [2024-11-28 11:53:46.132569] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:1248 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:16.099 [2024-11-28 11:53:46.132579] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:23:16.099 [2024-11-28 11:53:46.136873] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1868bb0) 00:23:16.099 [2024-11-28 11:53:46.136906] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:16928 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:16.099 [2024-11-28 11:53:46.136918] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:23:16.099 [2024-11-28 11:53:46.141396] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1868bb0) 00:23:16.099 [2024-11-28 11:53:46.141429] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:6112 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:16.099 [2024-11-28 11:53:46.141455] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:23:16.099 [2024-11-28 11:53:46.145766] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1868bb0) 00:23:16.099 [2024-11-28 11:53:46.145800] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:12768 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:16.099 [2024-11-28 11:53:46.145810] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:23:16.099 [2024-11-28 11:53:46.150029] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1868bb0) 00:23:16.099 [2024-11-28 11:53:46.150062] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:10688 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:16.099 [2024-11-28 11:53:46.150073] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:23:16.099 [2024-11-28 11:53:46.154385] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1868bb0) 00:23:16.099 [2024-11-28 11:53:46.154418] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:2752 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:16.099 
[2024-11-28 11:53:46.154445] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:23:16.099 [2024-11-28 11:53:46.158662] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1868bb0) 00:23:16.099 [2024-11-28 11:53:46.158699] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:16928 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:16.099 [2024-11-28 11:53:46.158726] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:23:16.099 [2024-11-28 11:53:46.163026] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1868bb0) 00:23:16.099 [2024-11-28 11:53:46.163059] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:14144 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:16.099 [2024-11-28 11:53:46.163070] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:23:16.099 [2024-11-28 11:53:46.167599] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1868bb0) 00:23:16.099 [2024-11-28 11:53:46.167632] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:13856 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:16.099 [2024-11-28 11:53:46.167643] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:23:16.099 [2024-11-28 11:53:46.171881] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1868bb0) 00:23:16.099 [2024-11-28 11:53:46.171914] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:5984 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:16.099 [2024-11-28 11:53:46.171925] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:23:16.099 [2024-11-28 11:53:46.176134] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1868bb0) 00:23:16.099 [2024-11-28 11:53:46.176168] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:22592 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:16.099 [2024-11-28 11:53:46.176180] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:23:16.099 [2024-11-28 11:53:46.180584] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1868bb0) 00:23:16.099 [2024-11-28 11:53:46.180617] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:8864 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:16.099 [2024-11-28 11:53:46.180628] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:23:16.099 [2024-11-28 11:53:46.184936] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1868bb0) 00:23:16.099 [2024-11-28 11:53:46.184969] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:7296 len:32 
SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:16.099 [2024-11-28 11:53:46.184980] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:23:16.099 [2024-11-28 11:53:46.189154] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1868bb0) 00:23:16.099 [2024-11-28 11:53:46.189187] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:10720 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:16.099 [2024-11-28 11:53:46.189198] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:23:16.099 [2024-11-28 11:53:46.193513] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1868bb0) 00:23:16.099 [2024-11-28 11:53:46.193558] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:10176 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:16.099 [2024-11-28 11:53:46.193585] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:23:16.099 [2024-11-28 11:53:46.197820] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1868bb0) 00:23:16.099 [2024-11-28 11:53:46.197853] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:5440 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:16.099 [2024-11-28 11:53:46.197864] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:23:16.099 [2024-11-28 11:53:46.202078] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1868bb0) 00:23:16.099 [2024-11-28 11:53:46.202111] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:19776 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:16.099 [2024-11-28 11:53:46.202122] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:23:16.099 [2024-11-28 11:53:46.206423] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1868bb0) 00:23:16.099 [2024-11-28 11:53:46.206455] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:21056 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:16.099 [2024-11-28 11:53:46.206466] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:23:16.099 [2024-11-28 11:53:46.210938] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1868bb0) 00:23:16.099 [2024-11-28 11:53:46.211087] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:24640 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:16.099 [2024-11-28 11:53:46.211103] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:23:16.099 [2024-11-28 11:53:46.215432] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1868bb0) 00:23:16.099 [2024-11-28 11:53:46.215465] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: READ sqid:1 cid:5 nsid:1 lba:16896 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:16.099 [2024-11-28 11:53:46.215491] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:23:16.360 [2024-11-28 11:53:46.219994] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1868bb0) 00:23:16.360 [2024-11-28 11:53:46.220030] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:10912 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:16.360 [2024-11-28 11:53:46.220057] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:23:16.360 [2024-11-28 11:53:46.224555] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1868bb0) 00:23:16.360 [2024-11-28 11:53:46.224590] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:19072 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:16.360 [2024-11-28 11:53:46.224617] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:23:16.360 [2024-11-28 11:53:46.229091] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1868bb0) 00:23:16.360 [2024-11-28 11:53:46.229124] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:16064 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:16.360 [2024-11-28 11:53:46.229134] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:23:16.360 [2024-11-28 11:53:46.233526] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1868bb0) 00:23:16.360 [2024-11-28 11:53:46.233559] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:24992 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:16.360 [2024-11-28 11:53:46.233585] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:23:16.360 [2024-11-28 11:53:46.237840] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1868bb0) 00:23:16.360 [2024-11-28 11:53:46.237873] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:288 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:16.360 [2024-11-28 11:53:46.237884] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:23:16.360 [2024-11-28 11:53:46.242093] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1868bb0) 00:23:16.360 [2024-11-28 11:53:46.242126] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:5760 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:16.360 [2024-11-28 11:53:46.242137] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:23:16.360 [2024-11-28 11:53:46.246348] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1868bb0) 00:23:16.360 [2024-11-28 11:53:46.246380] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:17152 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:16.360 [2024-11-28 11:53:46.246390] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:23:16.360 [2024-11-28 11:53:46.250773] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1868bb0) 00:23:16.360 [2024-11-28 11:53:46.250823] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:19328 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:16.360 [2024-11-28 11:53:46.250849] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:23:16.360 [2024-11-28 11:53:46.255183] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1868bb0) 00:23:16.360 [2024-11-28 11:53:46.255216] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:1920 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:16.360 [2024-11-28 11:53:46.255242] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:23:16.360 [2024-11-28 11:53:46.259721] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1868bb0) 00:23:16.360 [2024-11-28 11:53:46.259754] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:8224 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:16.360 [2024-11-28 11:53:46.259780] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:23:16.360 [2024-11-28 11:53:46.264179] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1868bb0) 00:23:16.360 [2024-11-28 11:53:46.264212] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:25184 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:16.360 [2024-11-28 11:53:46.264239] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:23:16.360 [2024-11-28 11:53:46.269160] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1868bb0) 00:23:16.360 [2024-11-28 11:53:46.269350] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:2080 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:16.360 [2024-11-28 11:53:46.269368] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:23:16.360 [2024-11-28 11:53:46.273998] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1868bb0) 00:23:16.360 [2024-11-28 11:53:46.274032] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:9696 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:16.360 [2024-11-28 11:53:46.274043] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:23:16.360 [2024-11-28 11:53:46.278569] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on 
tqpair=(0x1868bb0) 00:23:16.360 [2024-11-28 11:53:46.278607] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:8064 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:16.360 [2024-11-28 11:53:46.278619] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:23:16.360 [2024-11-28 11:53:46.283099] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1868bb0) 00:23:16.360 [2024-11-28 11:53:46.283132] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:16480 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:16.360 [2024-11-28 11:53:46.283143] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:23:16.360 [2024-11-28 11:53:46.287779] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1868bb0) 00:23:16.360 [2024-11-28 11:53:46.287813] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:2688 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:16.360 [2024-11-28 11:53:46.287823] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:23:16.360 [2024-11-28 11:53:46.292150] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1868bb0) 00:23:16.360 [2024-11-28 11:53:46.292182] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:8896 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:16.360 [2024-11-28 11:53:46.292193] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:23:16.360 [2024-11-28 11:53:46.296582] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1868bb0) 00:23:16.360 [2024-11-28 11:53:46.296614] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:18016 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:16.361 [2024-11-28 11:53:46.296626] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:23:16.361 [2024-11-28 11:53:46.300877] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1868bb0) 00:23:16.361 [2024-11-28 11:53:46.300912] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:5696 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:16.361 [2024-11-28 11:53:46.300924] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:23:16.361 [2024-11-28 11:53:46.305137] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1868bb0) 00:23:16.361 [2024-11-28 11:53:46.305172] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:16320 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:16.361 [2024-11-28 11:53:46.305183] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:23:16.361 [2024-11-28 11:53:46.309479] 
nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1868bb0) 00:23:16.361 [2024-11-28 11:53:46.309509] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:19104 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:16.361 [2024-11-28 11:53:46.309520] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:23:16.361 [2024-11-28 11:53:46.313757] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1868bb0) 00:23:16.361 [2024-11-28 11:53:46.313790] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:17568 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:16.361 [2024-11-28 11:53:46.313801] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:23:16.361 [2024-11-28 11:53:46.317966] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1868bb0) 00:23:16.361 [2024-11-28 11:53:46.317999] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:960 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:16.361 [2024-11-28 11:53:46.318010] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:23:16.361 [2024-11-28 11:53:46.322182] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1868bb0) 00:23:16.361 [2024-11-28 11:53:46.322215] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:10944 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:16.361 [2024-11-28 11:53:46.322226] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:23:16.361 [2024-11-28 11:53:46.326424] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1868bb0) 00:23:16.361 [2024-11-28 11:53:46.326457] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:10208 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:16.361 [2024-11-28 11:53:46.326468] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:23:16.361 [2024-11-28 11:53:46.330762] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1868bb0) 00:23:16.361 [2024-11-28 11:53:46.330812] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:13440 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:16.361 [2024-11-28 11:53:46.330824] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:23:16.361 [2024-11-28 11:53:46.335115] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1868bb0) 00:23:16.361 [2024-11-28 11:53:46.335149] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:15840 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:16.361 [2024-11-28 11:53:46.335175] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 
p:0 m:0 dnr:0 00:23:16.361 [2024-11-28 11:53:46.339522] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1868bb0) 00:23:16.361 [2024-11-28 11:53:46.339554] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:19552 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:16.361 [2024-11-28 11:53:46.339580] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:23:16.361 [2024-11-28 11:53:46.343835] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1868bb0) 00:23:16.361 [2024-11-28 11:53:46.343868] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:4832 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:16.361 [2024-11-28 11:53:46.343879] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:23:16.361 [2024-11-28 11:53:46.348075] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1868bb0) 00:23:16.361 [2024-11-28 11:53:46.348107] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:12736 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:16.361 [2024-11-28 11:53:46.348118] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:23:16.361 [2024-11-28 11:53:46.352326] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1868bb0) 00:23:16.361 [2024-11-28 11:53:46.352359] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:96 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:16.361 [2024-11-28 11:53:46.352369] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:23:16.361 [2024-11-28 11:53:46.356591] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1868bb0) 00:23:16.361 [2024-11-28 11:53:46.356623] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:13216 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:16.361 [2024-11-28 11:53:46.356634] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:23:16.361 [2024-11-28 11:53:46.360904] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1868bb0) 00:23:16.361 [2024-11-28 11:53:46.360937] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:7744 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:16.361 [2024-11-28 11:53:46.360948] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:23:16.361 [2024-11-28 11:53:46.365146] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1868bb0) 00:23:16.361 [2024-11-28 11:53:46.365179] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:10816 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:16.361 [2024-11-28 11:53:46.365190] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT 
ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:23:16.361 [2024-11-28 11:53:46.369426] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1868bb0) 00:23:16.361 [2024-11-28 11:53:46.369458] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:7360 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:16.361 [2024-11-28 11:53:46.369485] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:23:16.361 [2024-11-28 11:53:46.373903] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1868bb0) 00:23:16.361 [2024-11-28 11:53:46.373936] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:12640 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:16.361 [2024-11-28 11:53:46.373947] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:23:16.361 [2024-11-28 11:53:46.378415] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1868bb0) 00:23:16.361 [2024-11-28 11:53:46.378447] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:2624 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:16.361 [2024-11-28 11:53:46.378458] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:23:16.361 [2024-11-28 11:53:46.382662] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1868bb0) 00:23:16.361 [2024-11-28 11:53:46.382696] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:12832 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:16.361 [2024-11-28 11:53:46.382723] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:23:16.361 [2024-11-28 11:53:46.386945] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1868bb0) 00:23:16.361 [2024-11-28 11:53:46.386977] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:15904 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:16.361 [2024-11-28 11:53:46.386988] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:23:16.361 [2024-11-28 11:53:46.391239] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1868bb0) 00:23:16.361 [2024-11-28 11:53:46.391272] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:5312 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:16.361 [2024-11-28 11:53:46.391283] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:23:16.361 [2024-11-28 11:53:46.395493] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1868bb0) 00:23:16.361 [2024-11-28 11:53:46.395525] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:8576 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:16.361 [2024-11-28 11:53:46.395536] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:23:16.361 [2024-11-28 11:53:46.399723] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1868bb0) 00:23:16.361 [2024-11-28 11:53:46.399755] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:15136 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:16.361 [2024-11-28 11:53:46.399765] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:23:16.361 [2024-11-28 11:53:46.403968] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1868bb0) 00:23:16.361 [2024-11-28 11:53:46.404001] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:8736 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:16.361 [2024-11-28 11:53:46.404012] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:23:16.361 [2024-11-28 11:53:46.408174] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1868bb0) 00:23:16.361 [2024-11-28 11:53:46.408206] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:11872 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:16.361 [2024-11-28 11:53:46.408217] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:23:16.361 [2024-11-28 11:53:46.412475] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1868bb0) 00:23:16.361 [2024-11-28 11:53:46.412508] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:21568 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:16.362 [2024-11-28 11:53:46.412518] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:23:16.362 [2024-11-28 11:53:46.416759] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1868bb0) 00:23:16.362 [2024-11-28 11:53:46.416792] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:320 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:16.362 [2024-11-28 11:53:46.416803] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:23:16.362 [2024-11-28 11:53:46.421157] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1868bb0) 00:23:16.362 [2024-11-28 11:53:46.421190] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:18528 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:16.362 [2024-11-28 11:53:46.421200] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:23:16.362 [2024-11-28 11:53:46.425433] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1868bb0) 00:23:16.362 [2024-11-28 11:53:46.425465] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:18816 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:16.362 
[2024-11-28 11:53:46.425491] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:23:16.362 [2024-11-28 11:53:46.429618] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1868bb0) 00:23:16.362 [2024-11-28 11:53:46.429650] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:9504 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:16.362 [2024-11-28 11:53:46.429676] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:23:16.362 [2024-11-28 11:53:46.434026] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1868bb0) 00:23:16.362 [2024-11-28 11:53:46.434062] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:7776 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:16.362 [2024-11-28 11:53:46.434074] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:23:16.362 [2024-11-28 11:53:46.438211] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1868bb0) 00:23:16.362 [2024-11-28 11:53:46.438244] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:12768 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:16.362 [2024-11-28 11:53:46.438255] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:23:16.362 [2024-11-28 11:53:46.442449] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1868bb0) 00:23:16.362 [2024-11-28 11:53:46.442488] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:10944 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:16.362 [2024-11-28 11:53:46.442514] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:23:16.362 [2024-11-28 11:53:46.446659] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1868bb0) 00:23:16.362 [2024-11-28 11:53:46.446694] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:16480 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:16.362 [2024-11-28 11:53:46.446705] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:23:16.362 [2024-11-28 11:53:46.450997] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1868bb0) 00:23:16.362 [2024-11-28 11:53:46.451030] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:7808 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:16.362 [2024-11-28 11:53:46.451041] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:23:16.362 [2024-11-28 11:53:46.455214] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1868bb0) 00:23:16.362 [2024-11-28 11:53:46.455247] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:3712 len:32 SGL 
TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:16.362 [2024-11-28 11:53:46.455259] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:23:16.362 [2024-11-28 11:53:46.459500] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1868bb0) 00:23:16.362 [2024-11-28 11:53:46.459532] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:12608 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:16.362 [2024-11-28 11:53:46.459542] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:23:16.362 [2024-11-28 11:53:46.463690] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1868bb0) 00:23:16.362 [2024-11-28 11:53:46.463723] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:8320 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:16.362 [2024-11-28 11:53:46.463734] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:23:16.362 [2024-11-28 11:53:46.467925] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1868bb0) 00:23:16.362 [2024-11-28 11:53:46.467957] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:19360 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:16.362 [2024-11-28 11:53:46.467967] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:23:16.362 [2024-11-28 11:53:46.472148] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1868bb0) 00:23:16.362 [2024-11-28 11:53:46.472182] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:18272 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:16.362 [2024-11-28 11:53:46.472193] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:23:16.362 [2024-11-28 11:53:46.476499] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1868bb0) 00:23:16.362 [2024-11-28 11:53:46.476532] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:8672 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:16.362 [2024-11-28 11:53:46.476542] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:23:16.362 [2024-11-28 11:53:46.480913] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1868bb0) 00:23:16.362 [2024-11-28 11:53:46.480963] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:12800 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:16.362 [2024-11-28 11:53:46.480989] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:23:16.623 [2024-11-28 11:53:46.485422] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1868bb0) 00:23:16.623 [2024-11-28 11:53:46.485454] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: READ sqid:1 cid:3 nsid:1 lba:10688 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:16.623 [2024-11-28 11:53:46.485480] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:23:16.623 [2024-11-28 11:53:46.489923] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1868bb0) 00:23:16.623 [2024-11-28 11:53:46.489957] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:14368 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:16.623 [2024-11-28 11:53:46.489983] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:23:16.623 7099.00 IOPS, 887.38 MiB/s [2024-11-28T11:53:46.749Z] [2024-11-28 11:53:46.495690] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1868bb0) 00:23:16.623 [2024-11-28 11:53:46.495719] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:17344 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:16.623 [2024-11-28 11:53:46.495729] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:23:16.623 [2024-11-28 11:53:46.499914] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1868bb0) 00:23:16.623 [2024-11-28 11:53:46.499947] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:10048 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:16.623 [2024-11-28 11:53:46.499958] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:23:16.623 [2024-11-28 11:53:46.504161] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1868bb0) 00:23:16.623 [2024-11-28 11:53:46.504194] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:3712 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:16.623 [2024-11-28 11:53:46.504205] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:23:16.623 [2024-11-28 11:53:46.508441] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1868bb0) 00:23:16.623 [2024-11-28 11:53:46.508474] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:19968 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:16.623 [2024-11-28 11:53:46.508485] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:23:16.623 [2024-11-28 11:53:46.512708] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1868bb0) 00:23:16.623 [2024-11-28 11:53:46.512741] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:25504 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:16.623 [2024-11-28 11:53:46.512752] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:23:16.623 [2024-11-28 11:53:46.517000] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on 
tqpair=(0x1868bb0) 00:23:16.623 [2024-11-28 11:53:46.517033] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:9408 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:16.623 [2024-11-28 11:53:46.517043] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:23:16.623 [2024-11-28 11:53:46.521267] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1868bb0) 00:23:16.623 [2024-11-28 11:53:46.521326] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:1568 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:16.623 [2024-11-28 11:53:46.521355] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:23:16.623 [2024-11-28 11:53:46.525587] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1868bb0) 00:23:16.623 [2024-11-28 11:53:46.525619] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:12448 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:16.623 [2024-11-28 11:53:46.525646] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:23:16.623 [2024-11-28 11:53:46.529918] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1868bb0) 00:23:16.623 [2024-11-28 11:53:46.529952] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:10336 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:16.623 [2024-11-28 11:53:46.529962] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:23:16.623 [2024-11-28 11:53:46.534158] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1868bb0) 00:23:16.623 [2024-11-28 11:53:46.534191] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:9440 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:16.623 [2024-11-28 11:53:46.534202] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:23:16.623 [2024-11-28 11:53:46.538552] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1868bb0) 00:23:16.623 [2024-11-28 11:53:46.538583] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:10848 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:16.623 [2024-11-28 11:53:46.538594] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:23:16.623 [2024-11-28 11:53:46.542781] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1868bb0) 00:23:16.623 [2024-11-28 11:53:46.542816] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:7520 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:16.623 [2024-11-28 11:53:46.542843] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:23:16.623 [2024-11-28 11:53:46.547212] 
nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1868bb0) 00:23:16.623 [2024-11-28 11:53:46.547246] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:4896 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:16.623 [2024-11-28 11:53:46.547257] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:23:16.623 [2024-11-28 11:53:46.551446] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1868bb0) 00:23:16.623 [2024-11-28 11:53:46.551478] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:15392 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:16.623 [2024-11-28 11:53:46.551488] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:23:16.623 [2024-11-28 11:53:46.555722] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1868bb0) 00:23:16.623 [2024-11-28 11:53:46.555755] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:12096 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:16.623 [2024-11-28 11:53:46.555765] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:23:16.623 [2024-11-28 11:53:46.559902] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1868bb0) 00:23:16.623 [2024-11-28 11:53:46.559934] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:7040 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:16.623 [2024-11-28 11:53:46.559946] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:23:16.623 [2024-11-28 11:53:46.564181] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1868bb0) 00:23:16.623 [2024-11-28 11:53:46.564215] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:2752 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:16.623 [2024-11-28 11:53:46.564226] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:23:16.623 [2024-11-28 11:53:46.568443] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1868bb0) 00:23:16.623 [2024-11-28 11:53:46.568476] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:192 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:16.623 [2024-11-28 11:53:46.568487] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:23:16.623 [2024-11-28 11:53:46.572730] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1868bb0) 00:23:16.623 [2024-11-28 11:53:46.572763] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:22208 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:16.623 [2024-11-28 11:53:46.572774] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 
00:23:16.623 [2024-11-28 11:53:46.577010] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1868bb0) 00:23:16.623 [2024-11-28 11:53:46.577043] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:16608 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:16.623 [2024-11-28 11:53:46.577054] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:23:16.623 [2024-11-28 11:53:46.581275] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1868bb0) 00:23:16.623 [2024-11-28 11:53:46.581334] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:10240 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:16.623 [2024-11-28 11:53:46.581361] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:23:16.623 [2024-11-28 11:53:46.585540] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1868bb0) 00:23:16.623 [2024-11-28 11:53:46.585572] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:2944 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:16.623 [2024-11-28 11:53:46.585598] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:23:16.623 [2024-11-28 11:53:46.589855] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1868bb0) 00:23:16.623 [2024-11-28 11:53:46.589887] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:5312 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:16.623 [2024-11-28 11:53:46.589913] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:23:16.623 [2024-11-28 11:53:46.594133] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1868bb0) 00:23:16.623 [2024-11-28 11:53:46.594166] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:1184 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:16.624 [2024-11-28 11:53:46.594177] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:23:16.624 [2024-11-28 11:53:46.598401] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1868bb0) 00:23:16.624 [2024-11-28 11:53:46.598433] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:13856 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:16.624 [2024-11-28 11:53:46.598444] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:23:16.624 [2024-11-28 11:53:46.602698] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1868bb0) 00:23:16.624 [2024-11-28 11:53:46.602733] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:4736 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:16.624 [2024-11-28 11:53:46.602744] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR 
(00/22) qid:1 cid:14 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:23:16.624 [2024-11-28 11:53:46.607012] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1868bb0) 00:23:16.624 [2024-11-28 11:53:46.607045] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:11072 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:16.624 [2024-11-28 11:53:46.607056] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:23:16.624 [2024-11-28 11:53:46.611315] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1868bb0) 00:23:16.624 [2024-11-28 11:53:46.611357] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:18464 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:16.624 [2024-11-28 11:53:46.611385] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:23:16.624 [2024-11-28 11:53:46.615585] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1868bb0) 00:23:16.624 [2024-11-28 11:53:46.615618] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:9696 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:16.624 [2024-11-28 11:53:46.615629] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:23:16.624 [2024-11-28 11:53:46.619878] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1868bb0) 00:23:16.624 [2024-11-28 11:53:46.619910] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:16000 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:16.624 [2024-11-28 11:53:46.619921] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:23:16.624 [2024-11-28 11:53:46.624193] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1868bb0) 00:23:16.624 [2024-11-28 11:53:46.624227] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:15872 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:16.624 [2024-11-28 11:53:46.624238] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:23:16.624 [2024-11-28 11:53:46.628394] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1868bb0) 00:23:16.624 [2024-11-28 11:53:46.628426] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:11040 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:16.624 [2024-11-28 11:53:46.628437] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:23:16.624 [2024-11-28 11:53:46.632666] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1868bb0) 00:23:16.624 [2024-11-28 11:53:46.632698] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:5696 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:16.624 [2024-11-28 11:53:46.632709] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:23:16.624 [2024-11-28 11:53:46.636953] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1868bb0) 00:23:16.624 [2024-11-28 11:53:46.636985] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:9632 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:16.624 [2024-11-28 11:53:46.636996] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:23:16.624 [2024-11-28 11:53:46.641288] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1868bb0) 00:23:16.624 [2024-11-28 11:53:46.641347] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:23296 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:16.624 [2024-11-28 11:53:46.641374] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:23:16.624 [2024-11-28 11:53:46.645551] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1868bb0) 00:23:16.624 [2024-11-28 11:53:46.645584] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:2720 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:16.624 [2024-11-28 11:53:46.645610] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:23:16.624 [2024-11-28 11:53:46.649807] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1868bb0) 00:23:16.624 [2024-11-28 11:53:46.649840] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:17440 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:16.624 [2024-11-28 11:53:46.649851] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:23:16.624 [2024-11-28 11:53:46.654051] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1868bb0) 00:23:16.624 [2024-11-28 11:53:46.654083] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:1184 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:16.624 [2024-11-28 11:53:46.654094] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:23:16.624 [2024-11-28 11:53:46.658191] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1868bb0) 00:23:16.624 [2024-11-28 11:53:46.658224] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:21088 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:16.624 [2024-11-28 11:53:46.658235] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:23:16.624 [2024-11-28 11:53:46.662434] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1868bb0) 00:23:16.624 [2024-11-28 11:53:46.662466] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:14944 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:16.624 [2024-11-28 11:53:46.662485] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:23:16.624 [2024-11-28 11:53:46.666749] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1868bb0) 00:23:16.624 [2024-11-28 11:53:46.666799] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:17024 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:16.624 [2024-11-28 11:53:46.666841] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:23:16.624 [2024-11-28 11:53:46.671105] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1868bb0) 00:23:16.624 [2024-11-28 11:53:46.671138] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:9664 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:16.624 [2024-11-28 11:53:46.671164] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:23:16.624 [2024-11-28 11:53:46.675464] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1868bb0) 00:23:16.624 [2024-11-28 11:53:46.675496] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:5632 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:16.624 [2024-11-28 11:53:46.675507] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:23:16.624 [2024-11-28 11:53:46.679706] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1868bb0) 00:23:16.624 [2024-11-28 11:53:46.679738] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:19424 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:16.624 [2024-11-28 11:53:46.679749] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:23:16.624 [2024-11-28 11:53:46.683994] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1868bb0) 00:23:16.624 [2024-11-28 11:53:46.684027] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:3648 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:16.624 [2024-11-28 11:53:46.684038] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:23:16.624 [2024-11-28 11:53:46.688188] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1868bb0) 00:23:16.624 [2024-11-28 11:53:46.688221] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:24896 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:16.624 [2024-11-28 11:53:46.688232] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:23:16.624 [2024-11-28 11:53:46.692419] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1868bb0) 00:23:16.624 [2024-11-28 11:53:46.692451] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:8640 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 
0x0 00:23:16.624 [2024-11-28 11:53:46.692462] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:23:16.624 [2024-11-28 11:53:46.696633] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1868bb0) 00:23:16.624 [2024-11-28 11:53:46.696665] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:16992 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:16.624 [2024-11-28 11:53:46.696676] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:23:16.624 [2024-11-28 11:53:46.700893] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1868bb0) 00:23:16.624 [2024-11-28 11:53:46.700925] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:14944 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:16.625 [2024-11-28 11:53:46.700936] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:23:16.625 [2024-11-28 11:53:46.705147] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1868bb0) 00:23:16.625 [2024-11-28 11:53:46.705180] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:14816 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:16.625 [2024-11-28 11:53:46.705191] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:23:16.625 [2024-11-28 11:53:46.709457] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1868bb0) 00:23:16.625 [2024-11-28 11:53:46.709489] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:11136 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:16.625 [2024-11-28 11:53:46.709516] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:23:16.625 [2024-11-28 11:53:46.713753] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1868bb0) 00:23:16.625 [2024-11-28 11:53:46.713786] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:8000 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:16.625 [2024-11-28 11:53:46.713796] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:23:16.625 [2024-11-28 11:53:46.718002] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1868bb0) 00:23:16.625 [2024-11-28 11:53:46.718034] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:10176 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:16.625 [2024-11-28 11:53:46.718045] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:23:16.625 [2024-11-28 11:53:46.722276] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1868bb0) 00:23:16.625 [2024-11-28 11:53:46.722320] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 
lba:11296 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:16.625 [2024-11-28 11:53:46.722349] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:23:16.625 [2024-11-28 11:53:46.726608] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1868bb0) 00:23:16.625 [2024-11-28 11:53:46.726643] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:25408 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:16.625 [2024-11-28 11:53:46.726669] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:23:16.625 [2024-11-28 11:53:46.730948] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1868bb0) 00:23:16.625 [2024-11-28 11:53:46.730979] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:7712 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:16.625 [2024-11-28 11:53:46.730990] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:23:16.625 [2024-11-28 11:53:46.735212] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1868bb0) 00:23:16.625 [2024-11-28 11:53:46.735245] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:17792 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:16.625 [2024-11-28 11:53:46.735255] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:23:16.625 [2024-11-28 11:53:46.739467] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1868bb0) 00:23:16.625 [2024-11-28 11:53:46.739499] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:25088 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:16.625 [2024-11-28 11:53:46.739510] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:23:16.625 [2024-11-28 11:53:46.743976] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1868bb0) 00:23:16.625 [2024-11-28 11:53:46.744041] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:12032 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:16.625 [2024-11-28 11:53:46.744053] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:23:16.885 [2024-11-28 11:53:46.748471] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1868bb0) 00:23:16.885 [2024-11-28 11:53:46.748504] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:11360 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:16.885 [2024-11-28 11:53:46.748531] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:23:16.885 [2024-11-28 11:53:46.753019] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1868bb0) 00:23:16.885 [2024-11-28 11:53:46.753051] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:11328 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:16.885 [2024-11-28 11:53:46.753061] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:23:16.885 [2024-11-28 11:53:46.757244] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1868bb0) 00:23:16.885 [2024-11-28 11:53:46.757276] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:9312 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:16.885 [2024-11-28 11:53:46.757288] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:23:16.885 [2024-11-28 11:53:46.761502] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1868bb0) 00:23:16.885 [2024-11-28 11:53:46.761535] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:24192 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:16.885 [2024-11-28 11:53:46.761561] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:23:16.885 [2024-11-28 11:53:46.765806] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1868bb0) 00:23:16.885 [2024-11-28 11:53:46.765839] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:14720 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:16.885 [2024-11-28 11:53:46.765850] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:23:16.885 [2024-11-28 11:53:46.770088] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1868bb0) 00:23:16.885 [2024-11-28 11:53:46.770121] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:6944 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:16.885 [2024-11-28 11:53:46.770132] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:23:16.885 [2024-11-28 11:53:46.774358] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1868bb0) 00:23:16.886 [2024-11-28 11:53:46.774391] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:12672 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:16.886 [2024-11-28 11:53:46.774402] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:23:16.886 [2024-11-28 11:53:46.778635] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1868bb0) 00:23:16.886 [2024-11-28 11:53:46.778670] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:4096 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:16.886 [2024-11-28 11:53:46.778681] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:23:16.886 [2024-11-28 11:53:46.782928] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1868bb0) 00:23:16.886 
[2024-11-28 11:53:46.782960] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:24224 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:16.886 [2024-11-28 11:53:46.782970] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:23:16.886 [2024-11-28 11:53:46.787223] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1868bb0) 00:23:16.886 [2024-11-28 11:53:46.787256] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:16160 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:16.886 [2024-11-28 11:53:46.787267] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:23:16.886 [2024-11-28 11:53:46.791522] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1868bb0) 00:23:16.886 [2024-11-28 11:53:46.791554] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:21952 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:16.886 [2024-11-28 11:53:46.791565] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:23:16.886 [2024-11-28 11:53:46.795787] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1868bb0) 00:23:16.886 [2024-11-28 11:53:46.795819] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:11072 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:16.886 [2024-11-28 11:53:46.795831] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:23:16.886 [2024-11-28 11:53:46.800084] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1868bb0) 00:23:16.886 [2024-11-28 11:53:46.800117] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:7072 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:16.886 [2024-11-28 11:53:46.800128] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:23:16.886 [2024-11-28 11:53:46.804380] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1868bb0) 00:23:16.886 [2024-11-28 11:53:46.804413] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:8960 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:16.886 [2024-11-28 11:53:46.804423] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:23:16.886 [2024-11-28 11:53:46.808583] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1868bb0) 00:23:16.886 [2024-11-28 11:53:46.808616] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:16320 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:16.886 [2024-11-28 11:53:46.808627] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:23:16.886 [2024-11-28 11:53:46.812916] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: 
data digest error on tqpair=(0x1868bb0) 00:23:16.886 [2024-11-28 11:53:46.812950] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:12832 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:16.886 [2024-11-28 11:53:46.812960] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:23:16.886 [2024-11-28 11:53:46.817200] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1868bb0) 00:23:16.886 [2024-11-28 11:53:46.817233] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:448 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:16.886 [2024-11-28 11:53:46.817244] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:23:16.886 [2024-11-28 11:53:46.821644] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1868bb0) 00:23:16.886 [2024-11-28 11:53:46.821695] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:20416 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:16.886 [2024-11-28 11:53:46.821721] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:23:16.886 [2024-11-28 11:53:46.826107] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1868bb0) 00:23:16.886 [2024-11-28 11:53:46.826140] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:25568 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:16.886 [2024-11-28 11:53:46.826150] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:23:16.886 [2024-11-28 11:53:46.830404] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1868bb0) 00:23:16.886 [2024-11-28 11:53:46.830436] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:5280 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:16.886 [2024-11-28 11:53:46.830447] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:23:16.886 [2024-11-28 11:53:46.834660] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1868bb0) 00:23:16.886 [2024-11-28 11:53:46.834693] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:13408 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:16.886 [2024-11-28 11:53:46.834720] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:23:16.886 [2024-11-28 11:53:46.838981] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1868bb0) 00:23:16.886 [2024-11-28 11:53:46.839013] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:19584 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:16.886 [2024-11-28 11:53:46.839024] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:23:16.886 [2024-11-28 11:53:46.843284] 
nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1868bb0) 00:23:16.886 [2024-11-28 11:53:46.843327] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:13408 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:16.886 [2024-11-28 11:53:46.843355] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:23:16.886 [2024-11-28 11:53:46.847552] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1868bb0) 00:23:16.886 [2024-11-28 11:53:46.847584] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:20640 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:16.886 [2024-11-28 11:53:46.847595] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:23:16.886 [2024-11-28 11:53:46.851877] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1868bb0) 00:23:16.886 [2024-11-28 11:53:46.851909] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:19712 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:16.886 [2024-11-28 11:53:46.851920] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:23:16.886 [2024-11-28 11:53:46.856124] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1868bb0) 00:23:16.886 [2024-11-28 11:53:46.856157] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:24032 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:16.886 [2024-11-28 11:53:46.856168] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:23:16.886 [2024-11-28 11:53:46.860369] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1868bb0) 00:23:16.886 [2024-11-28 11:53:46.860400] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:23008 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:16.886 [2024-11-28 11:53:46.860411] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:23:16.886 [2024-11-28 11:53:46.864674] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1868bb0) 00:23:16.886 [2024-11-28 11:53:46.864707] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:23104 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:16.886 [2024-11-28 11:53:46.864718] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:23:16.886 [2024-11-28 11:53:46.868947] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1868bb0) 00:23:16.886 [2024-11-28 11:53:46.868980] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:4896 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:16.886 [2024-11-28 11:53:46.868991] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0002 p:0 m:0 
dnr:0 00:23:16.886 [2024-11-28 11:53:46.873240] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1868bb0) 00:23:16.886 [2024-11-28 11:53:46.873273] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:12064 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:16.886 [2024-11-28 11:53:46.873284] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:23:16.886 [2024-11-28 11:53:46.877486] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1868bb0) 00:23:16.886 [2024-11-28 11:53:46.877519] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:5856 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:16.886 [2024-11-28 11:53:46.877545] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:23:16.886 [2024-11-28 11:53:46.881716] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1868bb0) 00:23:16.886 [2024-11-28 11:53:46.881750] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:21312 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:16.886 [2024-11-28 11:53:46.881761] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:23:16.886 [2024-11-28 11:53:46.885968] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1868bb0) 00:23:16.886 [2024-11-28 11:53:46.886001] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:24576 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:16.886 [2024-11-28 11:53:46.886012] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:23:16.886 [2024-11-28 11:53:46.890179] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1868bb0) 00:23:16.887 [2024-11-28 11:53:46.890212] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:1248 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:16.887 [2024-11-28 11:53:46.890223] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:23:16.887 [2024-11-28 11:53:46.894469] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1868bb0) 00:23:16.887 [2024-11-28 11:53:46.894542] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:4000 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:16.887 [2024-11-28 11:53:46.894568] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:23:16.887 [2024-11-28 11:53:46.898825] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1868bb0) 00:23:16.887 [2024-11-28 11:53:46.898872] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:4000 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:16.887 [2024-11-28 11:53:46.898913] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR 
(00/22) qid:1 cid:3 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:23:16.887 [2024-11-28 11:53:46.903117] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1868bb0) 00:23:16.887 [2024-11-28 11:53:46.903149] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:15168 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:16.887 [2024-11-28 11:53:46.903159] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:23:16.887 [2024-11-28 11:53:46.907438] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1868bb0) 00:23:16.887 [2024-11-28 11:53:46.907470] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:13920 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:16.887 [2024-11-28 11:53:46.907481] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:23:16.887 [2024-11-28 11:53:46.911730] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1868bb0) 00:23:16.887 [2024-11-28 11:53:46.911762] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:10912 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:16.887 [2024-11-28 11:53:46.911773] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:23:16.887 [2024-11-28 11:53:46.915974] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1868bb0) 00:23:16.887 [2024-11-28 11:53:46.916006] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:24928 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:16.887 [2024-11-28 11:53:46.916017] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:23:16.887 [2024-11-28 11:53:46.920206] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1868bb0) 00:23:16.887 [2024-11-28 11:53:46.920238] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:18976 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:16.887 [2024-11-28 11:53:46.920249] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:23:16.887 [2024-11-28 11:53:46.924482] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1868bb0) 00:23:16.887 [2024-11-28 11:53:46.924514] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:64 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:16.887 [2024-11-28 11:53:46.924525] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:23:16.887 [2024-11-28 11:53:46.928678] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1868bb0) 00:23:16.887 [2024-11-28 11:53:46.928710] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:11360 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:16.887 [2024-11-28 11:53:46.928720] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:23:16.887 [2024-11-28 11:53:46.933019] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1868bb0) 00:23:16.887 [2024-11-28 11:53:46.933051] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:12384 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:16.887 [2024-11-28 11:53:46.933062] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:23:16.887 [2024-11-28 11:53:46.937374] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1868bb0) 00:23:16.887 [2024-11-28 11:53:46.937406] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:16448 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:16.887 [2024-11-28 11:53:46.937432] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:23:16.887 [2024-11-28 11:53:46.941659] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1868bb0) 00:23:16.887 [2024-11-28 11:53:46.941809] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:7616 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:16.887 [2024-11-28 11:53:46.941825] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:23:16.887 [2024-11-28 11:53:46.946081] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1868bb0) 00:23:16.887 [2024-11-28 11:53:46.946112] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:22336 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:16.887 [2024-11-28 11:53:46.946123] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:23:16.887 [2024-11-28 11:53:46.950339] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1868bb0) 00:23:16.887 [2024-11-28 11:53:46.950373] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:5760 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:16.887 [2024-11-28 11:53:46.950384] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:23:16.887 [2024-11-28 11:53:46.954698] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1868bb0) 00:23:16.887 [2024-11-28 11:53:46.954733] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:7936 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:16.887 [2024-11-28 11:53:46.954760] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:23:16.887 [2024-11-28 11:53:46.959047] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1868bb0) 00:23:16.887 [2024-11-28 11:53:46.959079] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:8096 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:16.887 [2024-11-28 
11:53:46.959090] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:23:16.887 [2024-11-28 11:53:46.963367] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1868bb0) 00:23:16.887 [2024-11-28 11:53:46.963400] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:24512 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:16.887 [2024-11-28 11:53:46.963410] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:23:16.887 [2024-11-28 11:53:46.967649] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1868bb0) 00:23:16.887 [2024-11-28 11:53:46.967684] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:7104 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:16.887 [2024-11-28 11:53:46.967695] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:23:16.887 [2024-11-28 11:53:46.971905] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1868bb0) 00:23:16.887 [2024-11-28 11:53:46.971937] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:17888 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:16.887 [2024-11-28 11:53:46.971964] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:23:16.887 [2024-11-28 11:53:46.976188] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1868bb0) 00:23:16.887 [2024-11-28 11:53:46.976221] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:1248 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:16.887 [2024-11-28 11:53:46.976232] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:23:16.887 [2024-11-28 11:53:46.980503] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1868bb0) 00:23:16.887 [2024-11-28 11:53:46.980552] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:15424 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:16.887 [2024-11-28 11:53:46.980578] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:23:16.887 [2024-11-28 11:53:46.985064] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1868bb0) 00:23:16.887 [2024-11-28 11:53:46.985098] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:3616 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:16.887 [2024-11-28 11:53:46.985110] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:23:16.887 [2024-11-28 11:53:46.989316] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1868bb0) 00:23:16.887 [2024-11-28 11:53:46.989348] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:10784 len:32 SGL TRANSPORT DATA 
BLOCK TRANSPORT 0x0 00:23:16.887 [2024-11-28 11:53:46.989374] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:23:16.887 [2024-11-28 11:53:46.993625] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1868bb0) 00:23:16.887 [2024-11-28 11:53:46.993658] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:7776 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:16.887 [2024-11-28 11:53:46.993684] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:23:16.887 [2024-11-28 11:53:46.997937] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1868bb0) 00:23:16.887 [2024-11-28 11:53:46.997970] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:17696 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:16.887 [2024-11-28 11:53:46.997980] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:23:16.887 [2024-11-28 11:53:47.002253] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1868bb0) 00:23:16.887 [2024-11-28 11:53:47.002287] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:3424 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:16.887 [2024-11-28 11:53:47.002330] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:23:16.888 [2024-11-28 11:53:47.006839] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1868bb0) 00:23:16.888 [2024-11-28 11:53:47.007053] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:2912 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:16.888 [2024-11-28 11:53:47.007069] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:23:17.147 [2024-11-28 11:53:47.011529] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1868bb0) 00:23:17.147 [2024-11-28 11:53:47.011562] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:18176 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:17.147 [2024-11-28 11:53:47.011573] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:23:17.147 [2024-11-28 11:53:47.015977] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1868bb0) 00:23:17.147 [2024-11-28 11:53:47.016010] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:21792 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:17.147 [2024-11-28 11:53:47.016021] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:23:17.147 [2024-11-28 11:53:47.020216] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1868bb0) 00:23:17.147 [2024-11-28 11:53:47.020249] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ 
sqid:1 cid:15 nsid:1 lba:17376 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:17.147 [2024-11-28 11:53:47.020260] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:23:17.147 [2024-11-28 11:53:47.024491] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1868bb0) 00:23:17.147 [2024-11-28 11:53:47.024523] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:2816 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:17.147 [2024-11-28 11:53:47.024534] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:23:17.147 [2024-11-28 11:53:47.028767] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1868bb0) 00:23:17.147 [2024-11-28 11:53:47.028800] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:8160 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:17.148 [2024-11-28 11:53:47.028810] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:23:17.148 [2024-11-28 11:53:47.033079] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1868bb0) 00:23:17.148 [2024-11-28 11:53:47.033112] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:20544 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:17.148 [2024-11-28 11:53:47.033123] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:23:17.148 [2024-11-28 11:53:47.037405] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1868bb0) 00:23:17.148 [2024-11-28 11:53:47.037437] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:20736 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:17.148 [2024-11-28 11:53:47.037463] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:23:17.148 [2024-11-28 11:53:47.041595] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1868bb0) 00:23:17.148 [2024-11-28 11:53:47.041628] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:8256 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:17.148 [2024-11-28 11:53:47.041655] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:23:17.148 [2024-11-28 11:53:47.045952] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1868bb0) 00:23:17.148 [2024-11-28 11:53:47.045987] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:16352 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:17.148 [2024-11-28 11:53:47.045999] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:23:17.148 [2024-11-28 11:53:47.050184] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1868bb0) 00:23:17.148 [2024-11-28 11:53:47.050217] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:18720 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:17.148 [2024-11-28 11:53:47.050228] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:23:17.148 [2024-11-28 11:53:47.054680] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1868bb0) 00:23:17.148 [2024-11-28 11:53:47.054719] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:5920 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:17.148 [2024-11-28 11:53:47.054732] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:23:17.148 [2024-11-28 11:53:47.059030] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1868bb0) 00:23:17.148 [2024-11-28 11:53:47.059063] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:256 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:17.148 [2024-11-28 11:53:47.059074] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:23:17.148 [2024-11-28 11:53:47.063421] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1868bb0) 00:23:17.148 [2024-11-28 11:53:47.063454] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:11264 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:17.148 [2024-11-28 11:53:47.063466] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:23:17.148 [2024-11-28 11:53:47.067655] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1868bb0) 00:23:17.148 [2024-11-28 11:53:47.067688] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:19552 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:17.148 [2024-11-28 11:53:47.067699] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:23:17.148 [2024-11-28 11:53:47.071972] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1868bb0) 00:23:17.148 [2024-11-28 11:53:47.072005] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:9696 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:17.148 [2024-11-28 11:53:47.072031] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:23:17.148 [2024-11-28 11:53:47.076234] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1868bb0) 00:23:17.148 [2024-11-28 11:53:47.076267] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:10304 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:17.148 [2024-11-28 11:53:47.076278] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:23:17.148 [2024-11-28 11:53:47.080527] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1868bb0) 
00:23:17.148 [2024-11-28 11:53:47.080560] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:20192 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:17.148 [2024-11-28 11:53:47.080571] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:23:17.148 [2024-11-28 11:53:47.084807] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1868bb0) 00:23:17.148 [2024-11-28 11:53:47.084840] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:8224 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:17.148 [2024-11-28 11:53:47.084851] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:23:17.148 [2024-11-28 11:53:47.089089] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1868bb0) 00:23:17.148 [2024-11-28 11:53:47.089121] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18240 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:17.148 [2024-11-28 11:53:47.089132] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:23:17.148 [2024-11-28 11:53:47.093450] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1868bb0) 00:23:17.148 [2024-11-28 11:53:47.093482] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:15296 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:17.148 [2024-11-28 11:53:47.093508] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:23:17.148 [2024-11-28 11:53:47.097750] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1868bb0) 00:23:17.148 [2024-11-28 11:53:47.097783] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:2528 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:17.148 [2024-11-28 11:53:47.097794] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:23:17.148 [2024-11-28 11:53:47.101993] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1868bb0) 00:23:17.148 [2024-11-28 11:53:47.102026] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:3360 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:17.148 [2024-11-28 11:53:47.102038] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:23:17.148 [2024-11-28 11:53:47.106255] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1868bb0) 00:23:17.148 [2024-11-28 11:53:47.106289] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:8032 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:17.148 [2024-11-28 11:53:47.106344] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:23:17.148 [2024-11-28 11:53:47.110585] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: 
*ERROR*: data digest error on tqpair=(0x1868bb0) 00:23:17.148 [2024-11-28 11:53:47.110622] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:19136 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:17.148 [2024-11-28 11:53:47.110635] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:23:17.148 [2024-11-28 11:53:47.115041] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1868bb0) 00:23:17.148 [2024-11-28 11:53:47.115074] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:21408 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:17.148 [2024-11-28 11:53:47.115085] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:23:17.148 [2024-11-28 11:53:47.119255] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1868bb0) 00:23:17.148 [2024-11-28 11:53:47.119288] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:16736 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:17.148 [2024-11-28 11:53:47.119309] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:23:17.148 [2024-11-28 11:53:47.123454] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1868bb0) 00:23:17.148 [2024-11-28 11:53:47.123485] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:16384 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:17.148 [2024-11-28 11:53:47.123496] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:23:17.148 [2024-11-28 11:53:47.127624] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1868bb0) 00:23:17.148 [2024-11-28 11:53:47.127657] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:18112 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:17.148 [2024-11-28 11:53:47.127668] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:23:17.148 [2024-11-28 11:53:47.131871] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1868bb0) 00:23:17.148 [2024-11-28 11:53:47.131904] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:15680 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:17.148 [2024-11-28 11:53:47.131915] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:23:17.148 [2024-11-28 11:53:47.136124] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1868bb0) 00:23:17.148 [2024-11-28 11:53:47.136156] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:7616 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:17.148 [2024-11-28 11:53:47.136182] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:23:17.148 [2024-11-28 11:53:47.140487] 
nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1868bb0) 00:23:17.148 [2024-11-28 11:53:47.140520] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:13632 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:17.148 [2024-11-28 11:53:47.140530] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:23:17.148 [2024-11-28 11:53:47.144813] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1868bb0) 00:23:17.148 [2024-11-28 11:53:47.144846] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:22272 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:17.149 [2024-11-28 11:53:47.144857] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:23:17.149 [2024-11-28 11:53:47.149145] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1868bb0) 00:23:17.149 [2024-11-28 11:53:47.149178] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:21888 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:17.149 [2024-11-28 11:53:47.149189] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:23:17.149 [2024-11-28 11:53:47.153426] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1868bb0) 00:23:17.149 [2024-11-28 11:53:47.153458] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:5280 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:17.149 [2024-11-28 11:53:47.153484] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:23:17.149 [2024-11-28 11:53:47.157736] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1868bb0) 00:23:17.149 [2024-11-28 11:53:47.157768] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:24768 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:17.149 [2024-11-28 11:53:47.157780] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:23:17.149 [2024-11-28 11:53:47.161987] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1868bb0) 00:23:17.149 [2024-11-28 11:53:47.162021] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:8544 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:17.149 [2024-11-28 11:53:47.162032] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:23:17.149 [2024-11-28 11:53:47.166237] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1868bb0) 00:23:17.149 [2024-11-28 11:53:47.166269] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:8544 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:17.149 [2024-11-28 11:53:47.166280] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 
m:0 dnr:0 00:23:17.149 [2024-11-28 11:53:47.170511] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1868bb0) 00:23:17.149 [2024-11-28 11:53:47.170544] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:8768 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:17.149 [2024-11-28 11:53:47.170570] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:23:17.149 [2024-11-28 11:53:47.174904] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1868bb0) 00:23:17.149 [2024-11-28 11:53:47.174937] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:704 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:17.149 [2024-11-28 11:53:47.174948] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:23:17.149 [2024-11-28 11:53:47.179131] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1868bb0) 00:23:17.149 [2024-11-28 11:53:47.179165] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:10464 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:17.149 [2024-11-28 11:53:47.179175] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:23:17.149 [2024-11-28 11:53:47.183426] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1868bb0) 00:23:17.149 [2024-11-28 11:53:47.183459] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:15360 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:17.149 [2024-11-28 11:53:47.183470] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:23:17.149 [2024-11-28 11:53:47.187589] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1868bb0) 00:23:17.149 [2024-11-28 11:53:47.187622] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:13600 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:17.149 [2024-11-28 11:53:47.187633] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:23:17.149 [2024-11-28 11:53:47.191782] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1868bb0) 00:23:17.149 [2024-11-28 11:53:47.191815] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:4384 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:17.149 [2024-11-28 11:53:47.191825] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:23:17.149 [2024-11-28 11:53:47.196010] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1868bb0) 00:23:17.149 [2024-11-28 11:53:47.196043] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:19232 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:17.149 [2024-11-28 11:53:47.196053] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR 
(00/22) qid:1 cid:8 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:23:17.149 [2024-11-28 11:53:47.200195] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1868bb0) 00:23:17.149 [2024-11-28 11:53:47.200227] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:22016 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:17.149 [2024-11-28 11:53:47.200238] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:23:17.149 [2024-11-28 11:53:47.204646] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1868bb0) 00:23:17.149 [2024-11-28 11:53:47.204695] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:12704 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:17.149 [2024-11-28 11:53:47.204721] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:23:17.149 [2024-11-28 11:53:47.209183] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1868bb0) 00:23:17.149 [2024-11-28 11:53:47.209216] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:12192 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:17.149 [2024-11-28 11:53:47.209243] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:23:17.149 [2024-11-28 11:53:47.213496] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1868bb0) 00:23:17.149 [2024-11-28 11:53:47.213527] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:12192 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:17.149 [2024-11-28 11:53:47.213553] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:23:17.149 [2024-11-28 11:53:47.218368] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1868bb0) 00:23:17.149 [2024-11-28 11:53:47.218444] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:10880 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:17.149 [2024-11-28 11:53:47.218457] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:23:17.149 [2024-11-28 11:53:47.223492] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1868bb0) 00:23:17.149 [2024-11-28 11:53:47.223528] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:9824 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:17.149 [2024-11-28 11:53:47.223540] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:23:17.149 [2024-11-28 11:53:47.227976] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1868bb0) 00:23:17.149 [2024-11-28 11:53:47.228010] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:11360 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:17.149 [2024-11-28 11:53:47.228035] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:23:17.149 [2024-11-28 11:53:47.232356] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1868bb0) 00:23:17.149 [2024-11-28 11:53:47.232390] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:7104 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:17.149 [2024-11-28 11:53:47.232417] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:23:17.149 [2024-11-28 11:53:47.236817] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1868bb0) 00:23:17.149 [2024-11-28 11:53:47.236999] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:640 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:17.149 [2024-11-28 11:53:47.237015] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:23:17.149 [2024-11-28 11:53:47.241865] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1868bb0) 00:23:17.149 [2024-11-28 11:53:47.241911] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:14272 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:17.149 [2024-11-28 11:53:47.241922] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:23:17.149 [2024-11-28 11:53:47.246462] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1868bb0) 00:23:17.149 [2024-11-28 11:53:47.246526] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:16192 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:17.149 [2024-11-28 11:53:47.246538] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:23:17.149 [2024-11-28 11:53:47.251138] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1868bb0) 00:23:17.149 [2024-11-28 11:53:47.251166] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:24288 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:17.149 [2024-11-28 11:53:47.251176] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:23:17.149 [2024-11-28 11:53:47.255732] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1868bb0) 00:23:17.149 [2024-11-28 11:53:47.255761] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:6336 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:17.149 [2024-11-28 11:53:47.255770] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:23:17.149 [2024-11-28 11:53:47.260082] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1868bb0) 00:23:17.149 [2024-11-28 11:53:47.260111] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:8224 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:17.149 
[2024-11-28 11:53:47.260121] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:23:17.149 [2024-11-28 11:53:47.264683] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1868bb0) 00:23:17.149 [2024-11-28 11:53:47.264711] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:608 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:17.150 [2024-11-28 11:53:47.264720] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:23:17.150 [2024-11-28 11:53:47.269287] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1868bb0) 00:23:17.150 [2024-11-28 11:53:47.269340] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:7808 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:17.150 [2024-11-28 11:53:47.269351] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:23:17.410 [2024-11-28 11:53:47.273984] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1868bb0) 00:23:17.410 [2024-11-28 11:53:47.274028] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:25184 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:17.410 [2024-11-28 11:53:47.274039] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:23:17.410 [2024-11-28 11:53:47.278946] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1868bb0) 00:23:17.410 [2024-11-28 11:53:47.278990] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:13280 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:17.410 [2024-11-28 11:53:47.279000] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:23:17.410 [2024-11-28 11:53:47.283446] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1868bb0) 00:23:17.410 [2024-11-28 11:53:47.283491] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:10560 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:17.410 [2024-11-28 11:53:47.283502] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:23:17.410 [2024-11-28 11:53:47.288049] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1868bb0) 00:23:17.410 [2024-11-28 11:53:47.288094] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:12832 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:17.410 [2024-11-28 11:53:47.288105] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:23:17.410 [2024-11-28 11:53:47.292785] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1868bb0) 00:23:17.410 [2024-11-28 11:53:47.292815] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:3616 len:32 SGL 
TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:17.410 [2024-11-28 11:53:47.292826] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:23:17.410 [2024-11-28 11:53:47.297417] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1868bb0) 00:23:17.410 [2024-11-28 11:53:47.297462] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:12704 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:17.410 [2024-11-28 11:53:47.297473] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:23:17.410 [2024-11-28 11:53:47.302007] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1868bb0) 00:23:17.410 [2024-11-28 11:53:47.302036] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:25120 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:17.410 [2024-11-28 11:53:47.302046] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:23:17.410 [2024-11-28 11:53:47.306543] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1868bb0) 00:23:17.410 [2024-11-28 11:53:47.306573] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:1792 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:17.410 [2024-11-28 11:53:47.306599] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:23:17.410 [2024-11-28 11:53:47.311009] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1868bb0) 00:23:17.410 [2024-11-28 11:53:47.311038] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:10528 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:17.410 [2024-11-28 11:53:47.311048] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:23:17.410 [2024-11-28 11:53:47.315455] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1868bb0) 00:23:17.410 [2024-11-28 11:53:47.315500] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:9632 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:17.410 [2024-11-28 11:53:47.315510] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:23:17.410 [2024-11-28 11:53:47.319752] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1868bb0) 00:23:17.410 [2024-11-28 11:53:47.319780] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:2304 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:17.410 [2024-11-28 11:53:47.319790] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:23:17.410 [2024-11-28 11:53:47.324122] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1868bb0) 00:23:17.410 [2024-11-28 11:53:47.324151] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: 
READ sqid:1 cid:4 nsid:1 lba:18624 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:17.410 [2024-11-28 11:53:47.324161] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:23:17.410 [2024-11-28 11:53:47.328326] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1868bb0) 00:23:17.410 [2024-11-28 11:53:47.328354] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:3648 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:17.410 [2024-11-28 11:53:47.328363] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:23:17.410 [2024-11-28 11:53:47.332555] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1868bb0) 00:23:17.410 [2024-11-28 11:53:47.332583] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:24160 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:17.410 [2024-11-28 11:53:47.332593] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:23:17.410 [2024-11-28 11:53:47.336885] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1868bb0) 00:23:17.410 [2024-11-28 11:53:47.336914] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:19328 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:17.410 [2024-11-28 11:53:47.336924] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:23:17.410 [2024-11-28 11:53:47.341140] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1868bb0) 00:23:17.410 [2024-11-28 11:53:47.341168] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:9408 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:17.410 [2024-11-28 11:53:47.341178] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:23:17.410 [2024-11-28 11:53:47.345439] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1868bb0) 00:23:17.410 [2024-11-28 11:53:47.345482] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:11872 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:17.410 [2024-11-28 11:53:47.345493] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:23:17.410 [2024-11-28 11:53:47.349870] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1868bb0) 00:23:17.410 [2024-11-28 11:53:47.349899] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:9312 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:17.410 [2024-11-28 11:53:47.349908] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:23:17.410 [2024-11-28 11:53:47.354039] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1868bb0) 00:23:17.410 [2024-11-28 11:53:47.354067] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:6624 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:17.410 [2024-11-28 11:53:47.354078] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:23:17.410 [2024-11-28 11:53:47.358272] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1868bb0) 00:23:17.410 [2024-11-28 11:53:47.358313] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:16992 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:17.410 [2024-11-28 11:53:47.358324] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:23:17.410 [2024-11-28 11:53:47.362749] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1868bb0) 00:23:17.410 [2024-11-28 11:53:47.362780] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:12416 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:17.410 [2024-11-28 11:53:47.362790] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:23:17.410 [2024-11-28 11:53:47.367249] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1868bb0) 00:23:17.410 [2024-11-28 11:53:47.367278] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:13856 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:17.410 [2024-11-28 11:53:47.367288] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:23:17.410 [2024-11-28 11:53:47.371534] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1868bb0) 00:23:17.410 [2024-11-28 11:53:47.371564] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:9728 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:17.410 [2024-11-28 11:53:47.371575] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:23:17.410 [2024-11-28 11:53:47.375997] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1868bb0) 00:23:17.410 [2024-11-28 11:53:47.376042] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:4928 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:17.410 [2024-11-28 11:53:47.376053] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:23:17.410 [2024-11-28 11:53:47.380385] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1868bb0) 00:23:17.410 [2024-11-28 11:53:47.380413] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:8576 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:17.410 [2024-11-28 11:53:47.380424] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:23:17.411 [2024-11-28 11:53:47.384687] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1868bb0) 
00:23:17.411 [2024-11-28 11:53:47.384716] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:18368 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:17.411 [2024-11-28 11:53:47.384726] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:23:17.411 [2024-11-28 11:53:47.388909] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1868bb0) 00:23:17.411 [2024-11-28 11:53:47.388937] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:1952 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:17.411 [2024-11-28 11:53:47.388947] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:23:17.411 [2024-11-28 11:53:47.393359] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1868bb0) 00:23:17.411 [2024-11-28 11:53:47.393390] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:9856 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:17.411 [2024-11-28 11:53:47.393401] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:23:17.411 [2024-11-28 11:53:47.397765] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1868bb0) 00:23:17.411 [2024-11-28 11:53:47.397795] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:13408 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:17.411 [2024-11-28 11:53:47.397805] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:23:17.411 [2024-11-28 11:53:47.402028] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1868bb0) 00:23:17.411 [2024-11-28 11:53:47.402057] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:16384 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:17.411 [2024-11-28 11:53:47.402067] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:23:17.411 [2024-11-28 11:53:47.406341] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1868bb0) 00:23:17.411 [2024-11-28 11:53:47.406384] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:14560 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:17.411 [2024-11-28 11:53:47.406395] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:23:17.411 [2024-11-28 11:53:47.410664] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1868bb0) 00:23:17.411 [2024-11-28 11:53:47.410694] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:20960 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:17.411 [2024-11-28 11:53:47.410704] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:23:17.411 [2024-11-28 11:53:47.415000] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: 
*ERROR*: data digest error on tqpair=(0x1868bb0) 00:23:17.411 [2024-11-28 11:53:47.415029] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:10720 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:17.411 [2024-11-28 11:53:47.415038] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:23:17.411 [2024-11-28 11:53:47.419256] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1868bb0) 00:23:17.411 [2024-11-28 11:53:47.419284] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:20288 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:17.411 [2024-11-28 11:53:47.419305] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:23:17.411 [2024-11-28 11:53:47.423457] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1868bb0) 00:23:17.411 [2024-11-28 11:53:47.423485] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:14752 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:17.411 [2024-11-28 11:53:47.423494] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:23:17.411 [2024-11-28 11:53:47.427727] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1868bb0) 00:23:17.411 [2024-11-28 11:53:47.427756] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:4224 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:17.411 [2024-11-28 11:53:47.427766] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:23:17.411 [2024-11-28 11:53:47.432018] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1868bb0) 00:23:17.411 [2024-11-28 11:53:47.432046] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:18752 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:17.411 [2024-11-28 11:53:47.432056] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:23:17.411 [2024-11-28 11:53:47.436214] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1868bb0) 00:23:17.411 [2024-11-28 11:53:47.436243] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:20448 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:17.411 [2024-11-28 11:53:47.436253] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:23:17.411 [2024-11-28 11:53:47.440437] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1868bb0) 00:23:17.411 [2024-11-28 11:53:47.440465] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:12256 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:17.411 [2024-11-28 11:53:47.440475] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:23:17.411 [2024-11-28 11:53:47.444662] 
nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1868bb0) 00:23:17.411 [2024-11-28 11:53:47.444691] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:12320 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:17.411 [2024-11-28 11:53:47.444700] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:23:17.411 [2024-11-28 11:53:47.448933] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1868bb0) 00:23:17.411 [2024-11-28 11:53:47.448962] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:9056 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:17.411 [2024-11-28 11:53:47.448972] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:23:17.411 [2024-11-28 11:53:47.453168] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1868bb0) 00:23:17.411 [2024-11-28 11:53:47.453196] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:23136 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:17.411 [2024-11-28 11:53:47.453206] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:23:17.411 [2024-11-28 11:53:47.457418] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1868bb0) 00:23:17.411 [2024-11-28 11:53:47.457462] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:16256 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:17.411 [2024-11-28 11:53:47.457472] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:23:17.411 [2024-11-28 11:53:47.461703] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1868bb0) 00:23:17.411 [2024-11-28 11:53:47.461731] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:0 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:17.411 [2024-11-28 11:53:47.461742] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:23:17.411 [2024-11-28 11:53:47.465946] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1868bb0) 00:23:17.411 [2024-11-28 11:53:47.465975] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:23296 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:17.411 [2024-11-28 11:53:47.465984] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:23:17.411 [2024-11-28 11:53:47.470185] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1868bb0) 00:23:17.411 [2024-11-28 11:53:47.470214] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:10528 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:17.411 [2024-11-28 11:53:47.470223] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 
00:23:17.411 [2024-11-28 11:53:47.474512] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1868bb0) 00:23:17.411 [2024-11-28 11:53:47.474556] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:7776 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:17.411 [2024-11-28 11:53:47.474567] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:23:17.411 [2024-11-28 11:53:47.478854] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1868bb0) 00:23:17.411 [2024-11-28 11:53:47.478898] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:17856 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:17.411 [2024-11-28 11:53:47.478922] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:23:17.411 [2024-11-28 11:53:47.483081] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1868bb0) 00:23:17.411 [2024-11-28 11:53:47.483110] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:15648 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:17.411 [2024-11-28 11:53:47.483119] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:23:17.411 [2024-11-28 11:53:47.487321] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1868bb0) 00:23:17.411 [2024-11-28 11:53:47.487357] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:5248 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:17.411 [2024-11-28 11:53:47.487367] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:23:17.411 [2024-11-28 11:53:47.491490] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1868bb0) 00:23:17.411 [2024-11-28 11:53:47.491518] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:10368 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:17.411 [2024-11-28 11:53:47.491527] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:23:17.411 7130.00 IOPS, 891.25 MiB/s [2024-11-28T11:53:47.537Z] [2024-11-28 11:53:47.496539] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1868bb0) 00:23:17.411 [2024-11-28 11:53:47.496567] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:3200 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:17.411 [2024-11-28 11:53:47.496577] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:23:17.411 00:23:17.412 Latency(us) 00:23:17.412 [2024-11-28T11:53:47.538Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:23:17.412 Job: nvme0n1 (Core Mask 0x2, workload: randread, depth: 16, IO size: 131072) 00:23:17.412 nvme0n1 : 2.00 7127.29 890.91 0.00 0.00 2241.66 2010.76 9353.77 00:23:17.412 [2024-11-28T11:53:47.538Z] 
=================================================================================================================== 00:23:17.412 [2024-11-28T11:53:47.538Z] Total : 7127.29 890.91 0.00 0.00 2241.66 2010.76 9353.77 00:23:17.412 { 00:23:17.412 "results": [ 00:23:17.412 { 00:23:17.412 "job": "nvme0n1", 00:23:17.412 "core_mask": "0x2", 00:23:17.412 "workload": "randread", 00:23:17.412 "status": "finished", 00:23:17.412 "queue_depth": 16, 00:23:17.412 "io_size": 131072, 00:23:17.412 "runtime": 2.003004, 00:23:17.412 "iops": 7127.294803205586, 00:23:17.412 "mibps": 890.9118504006982, 00:23:17.412 "io_failed": 0, 00:23:17.412 "io_timeout": 0, 00:23:17.412 "avg_latency_us": 2241.660912402252, 00:23:17.412 "min_latency_us": 2010.7636363636364, 00:23:17.412 "max_latency_us": 9353.774545454546 00:23:17.412 } 00:23:17.412 ], 00:23:17.412 "core_count": 1 00:23:17.412 } 00:23:17.412 11:53:47 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # get_transient_errcount nvme0n1 00:23:17.412 11:53:47 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@27 -- # bperf_rpc bdev_get_iostat -b nvme0n1 00:23:17.412 11:53:47 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_get_iostat -b nvme0n1 00:23:17.412 11:53:47 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@28 -- # jq -r '.bdevs[0] 00:23:17.412 | .driver_specific 00:23:17.412 | .nvme_error 00:23:17.412 | .status_code 00:23:17.412 | .command_transient_transport_error' 00:23:17.680 11:53:47 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # (( 461 > 0 )) 00:23:17.680 11:53:47 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@73 -- # killprocess 97324 00:23:17.680 11:53:47 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@954 -- # '[' -z 97324 ']' 00:23:17.680 11:53:47 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@958 -- # kill -0 97324 00:23:17.680 11:53:47 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@959 -- # uname 00:23:17.680 11:53:47 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:23:17.680 11:53:47 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 97324 00:23:17.940 killing process with pid 97324 00:23:17.940 Received shutdown signal, test time was about 2.000000 seconds 00:23:17.940 00:23:17.940 Latency(us) 00:23:17.940 [2024-11-28T11:53:48.066Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:23:17.940 [2024-11-28T11:53:48.066Z] =================================================================================================================== 00:23:17.940 [2024-11-28T11:53:48.066Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:23:17.940 11:53:47 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:23:17.940 11:53:47 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:23:17.940 11:53:47 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@972 -- # echo 'killing process with pid 97324' 00:23:17.940 11:53:47 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@973 -- # kill 97324 00:23:17.940 11:53:47 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- 
common/autotest_common.sh@978 -- # wait 97324 00:23:17.940 11:53:47 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@114 -- # run_bperf_err randwrite 4096 128 00:23:17.940 11:53:47 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@54 -- # local rw bs qd 00:23:17.940 11:53:47 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # rw=randwrite 00:23:17.941 11:53:47 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # bs=4096 00:23:17.941 11:53:47 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # qd=128 00:23:17.941 11:53:47 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@58 -- # bperfpid=97371 00:23:17.941 11:53:47 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@60 -- # waitforlisten 97371 /var/tmp/bperf.sock 00:23:17.941 11:53:47 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@57 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randwrite -o 4096 -t 2 -q 128 -z 00:23:17.941 11:53:47 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@835 -- # '[' -z 97371 ']' 00:23:17.941 11:53:47 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bperf.sock 00:23:17.941 11:53:47 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@840 -- # local max_retries=100 00:23:17.941 11:53:47 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:23:17.941 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:23:17.941 11:53:47 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@844 -- # xtrace_disable 00:23:17.941 11:53:48 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:23:17.941 [2024-11-28 11:53:48.054820] Starting SPDK v25.01-pre git sha1 35cd3e84d / DPDK 24.11.0-rc4 initialization... 00:23:17.941 [2024-11-28 11:53:48.054909] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid97371 ] 00:23:18.199 [2024-11-28 11:53:48.180342] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.11.0-rc4 is used. There is no support for it in SPDK. Enabled only for validation. 
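Editor's note: the trace above shows how the harness brings up the workload generator for this case — bdevperf is launched suspended (-z) on its own RPC socket and the script blocks until that socket answers before configuring it. A minimal stand-alone sketch of that step follows; the command line and socket path are copied from the trace, while the polling loop is an illustrative stand-in for the waitforlisten helper (rpc_get_methods is only used here as a cheap liveness probe).

  # Start bdevperf on core 1 (-m 2): 4 KiB random writes, queue depth 128, 2 s run,
  # held paused (-z) until RPCs arrive on /var/tmp/bperf.sock.
  /home/vagrant/spdk_repo/spdk/build/examples/bdevperf \
          -m 2 -r /var/tmp/bperf.sock -w randwrite -o 4096 -t 2 -q 128 -z &
  bperfpid=$!

  # Poll the private RPC socket until the application responds; rpc.py exits
  # non-zero until the socket is up.
  until /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock \
          rpc_get_methods >/dev/null 2>&1; do
          sleep 0.5
  done
  echo "bdevperf (pid $bperfpid) is listening on /var/tmp/bperf.sock"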
00:23:18.199 [2024-11-28 11:53:48.205872] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:23:18.199 [2024-11-28 11:53:48.237124] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:23:18.199 [2024-11-28 11:53:48.287957] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:23:18.459 11:53:48 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:23:18.459 11:53:48 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@868 -- # return 0 00:23:18.459 11:53:48 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@61 -- # bperf_rpc bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:23:18.459 11:53:48 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:23:18.459 11:53:48 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@63 -- # rpc_cmd accel_error_inject_error -o crc32c -t disable 00:23:18.459 11:53:48 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:18.459 11:53:48 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:23:18.459 11:53:48 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:18.459 11:53:48 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@64 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:23:18.459 11:53:48 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:23:18.718 nvme0n1 00:23:18.978 11:53:48 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@67 -- # rpc_cmd accel_error_inject_error -o crc32c -t corrupt -i 256 00:23:18.978 11:53:48 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:18.978 11:53:48 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:23:18.978 11:53:48 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:18.978 11:53:48 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@69 -- # bperf_py perform_tests 00:23:18.978 11:53:48 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@19 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:23:18.978 Running I/O for 2 seconds... 
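Editor's note: the RPC calls traced just above are the whole setup for the data-digest error case — enable NVMe error counters, attach the TCP controller with data digest on, inject crc32c corruption, run the workload, then read back how many completions were COMMAND TRANSIENT TRANSPORT ERROR (the same readback the earlier `(( 461 > 0 ))` check performed). The condensed sketch below restates that sequence as a plain script; paths, flags and the jq filter are copied from the trace, while the rpc_cmd stand-in (plain rpc.py against its default socket) is an assumption for illustration only.

  # RPCs aimed at the bdevperf instance go to its private socket, as in the trace.
  bperf_rpc() {
          /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock "$@"
  }

  # Keep per-controller NVMe error counters and retry failed I/O indefinitely.
  bperf_rpc bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1

  # Attach the TCP controller with data digest (--ddgst) so payloads are CRC32C-checked.
  bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.3 -s 4420 -f ipv4 \
          -n nqn.2016-06.io.spdk:cnode1 -b nvme0

  # Corrupt 256 crc32c operations so digest verification fails; the trace issues this
  # via rpc_cmd, whose target socket is not expanded in the log — a default rpc.py
  # call stands in for it here.
  /home/vagrant/spdk_repo/spdk/scripts/rpc.py accel_error_inject_error -o crc32c -t corrupt -i 256

  # Drive the configured randwrite workload for the 2-second run.
  /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests

  # Read back the count of transient transport errors recorded against nvme0n1.
  bperf_rpc bdev_get_iostat -b nvme0n1 \
          | jq -r '.bdevs[0] | .driver_specific | .nvme_error | .status_code | .command_transient_transport_error'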
00:23:18.978 [2024-11-28 11:53:49.007245] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ce1330) with pdu=0x200016efb048 00:23:18.978 [2024-11-28 11:53:49.008457] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:6005 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:18.978 [2024-11-28 11:53:49.008490] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:16 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:23:18.978 [2024-11-28 11:53:49.020479] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ce1330) with pdu=0x200016efb8b8 00:23:18.978 [2024-11-28 11:53:49.021650] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:14447 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:18.978 [2024-11-28 11:53:49.021680] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:18.978 [2024-11-28 11:53:49.033588] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ce1330) with pdu=0x200016efc128 00:23:18.978 [2024-11-28 11:53:49.034777] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:16104 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:18.978 [2024-11-28 11:53:49.034822] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:23:18.978 [2024-11-28 11:53:49.046986] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ce1330) with pdu=0x200016efc998 00:23:18.978 [2024-11-28 11:53:49.048118] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:809 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:18.978 [2024-11-28 11:53:49.048161] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:23:18.978 [2024-11-28 11:53:49.060233] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ce1330) with pdu=0x200016efd208 00:23:18.978 [2024-11-28 11:53:49.061370] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:20756 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:18.978 [2024-11-28 11:53:49.061399] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:23:18.978 [2024-11-28 11:53:49.073430] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ce1330) with pdu=0x200016efda78 00:23:18.978 [2024-11-28 11:53:49.074541] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:7909 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:18.978 [2024-11-28 11:53:49.074586] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0078 p:0 m:0 dnr:0 00:23:18.978 [2024-11-28 11:53:49.086694] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ce1330) with pdu=0x200016efe2e8 00:23:18.978 [2024-11-28 11:53:49.087803] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:19218 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:18.978 [2024-11-28 11:53:49.087831] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0076 
p:0 m:0 dnr:0 00:23:18.978 [2024-11-28 11:53:49.100027] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ce1330) with pdu=0x200016efeb58 00:23:18.978 [2024-11-28 11:53:49.101115] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:22978 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:18.979 [2024-11-28 11:53:49.101146] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0074 p:0 m:0 dnr:0 00:23:19.238 [2024-11-28 11:53:49.119168] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ce1330) with pdu=0x200016efef90 00:23:19.238 [2024-11-28 11:53:49.121206] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:1318 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:19.238 [2024-11-28 11:53:49.121234] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0071 p:0 m:0 dnr:0 00:23:19.238 [2024-11-28 11:53:49.132409] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ce1330) with pdu=0x200016efeb58 00:23:19.238 [2024-11-28 11:53:49.134424] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:16165 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:19.238 [2024-11-28 11:53:49.134452] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:23:19.238 [2024-11-28 11:53:49.145690] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ce1330) with pdu=0x200016efe2e8 00:23:19.238 [2024-11-28 11:53:49.147711] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:7600 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:19.238 [2024-11-28 11:53:49.147740] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:006d p:0 m:0 dnr:0 00:23:19.238 [2024-11-28 11:53:49.158878] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ce1330) with pdu=0x200016efda78 00:23:19.238 [2024-11-28 11:53:49.160868] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:13861 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:19.239 [2024-11-28 11:53:49.160897] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:006b p:0 m:0 dnr:0 00:23:19.239 [2024-11-28 11:53:49.172056] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ce1330) with pdu=0x200016efd208 00:23:19.239 [2024-11-28 11:53:49.174028] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:20252 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:19.239 [2024-11-28 11:53:49.174055] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:17 cdw0:0 sqhd:0069 p:0 m:0 dnr:0 00:23:19.239 [2024-11-28 11:53:49.185260] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ce1330) with pdu=0x200016efc998 00:23:19.239 [2024-11-28 11:53:49.187251] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:7933 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:19.239 [2024-11-28 11:53:49.187280] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:19 
cdw0:0 sqhd:0067 p:0 m:0 dnr:0 00:23:19.239 [2024-11-28 11:53:49.198411] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ce1330) with pdu=0x200016efc128 00:23:19.239 [2024-11-28 11:53:49.200362] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:5324 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:19.239 [2024-11-28 11:53:49.200390] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:21 cdw0:0 sqhd:0065 p:0 m:0 dnr:0 00:23:19.239 [2024-11-28 11:53:49.211589] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ce1330) with pdu=0x200016efb8b8 00:23:19.239 [2024-11-28 11:53:49.213519] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:16248 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:19.239 [2024-11-28 11:53:49.213562] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:23 cdw0:0 sqhd:0063 p:0 m:0 dnr:0 00:23:19.239 [2024-11-28 11:53:49.224775] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ce1330) with pdu=0x200016efb048 00:23:19.239 [2024-11-28 11:53:49.226702] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:11027 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:19.239 [2024-11-28 11:53:49.226746] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:25 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:19.239 [2024-11-28 11:53:49.237981] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ce1330) with pdu=0x200016efa7d8 00:23:19.239 [2024-11-28 11:53:49.239894] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:7782 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:19.239 [2024-11-28 11:53:49.239922] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:27 cdw0:0 sqhd:005f p:0 m:0 dnr:0 00:23:19.239 [2024-11-28 11:53:49.251269] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ce1330) with pdu=0x200016ef9f68 00:23:19.239 [2024-11-28 11:53:49.253147] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:5660 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:19.239 [2024-11-28 11:53:49.253174] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:29 cdw0:0 sqhd:005d p:0 m:0 dnr:0 00:23:19.239 [2024-11-28 11:53:49.264638] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ce1330) with pdu=0x200016ef96f8 00:23:19.239 [2024-11-28 11:53:49.266537] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:13835 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:19.239 [2024-11-28 11:53:49.266582] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:31 cdw0:0 sqhd:005b p:0 m:0 dnr:0 00:23:19.239 [2024-11-28 11:53:49.277927] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ce1330) with pdu=0x200016ef8e88 00:23:19.239 [2024-11-28 11:53:49.279790] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:7133 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:19.239 [2024-11-28 11:53:49.279818] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR 
(00/22) qid:1 cid:33 cdw0:0 sqhd:0059 p:0 m:0 dnr:0 00:23:19.239 [2024-11-28 11:53:49.291083] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ce1330) with pdu=0x200016ef8618 00:23:19.239 [2024-11-28 11:53:49.292929] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:2444 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:19.239 [2024-11-28 11:53:49.292956] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:35 cdw0:0 sqhd:0057 p:0 m:0 dnr:0 00:23:19.239 [2024-11-28 11:53:49.304137] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ce1330) with pdu=0x200016ef7da8 00:23:19.239 [2024-11-28 11:53:49.305963] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:21768 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:19.239 [2024-11-28 11:53:49.305990] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:37 cdw0:0 sqhd:0055 p:0 m:0 dnr:0 00:23:19.239 [2024-11-28 11:53:49.317317] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ce1330) with pdu=0x200016ef7538 00:23:19.239 [2024-11-28 11:53:49.319115] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:24931 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:19.239 [2024-11-28 11:53:49.319142] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:39 cdw0:0 sqhd:0053 p:0 m:0 dnr:0 00:23:19.239 [2024-11-28 11:53:49.330795] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ce1330) with pdu=0x200016ef6cc8 00:23:19.239 [2024-11-28 11:53:49.332628] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:20729 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:19.239 [2024-11-28 11:53:49.332660] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:41 cdw0:0 sqhd:0051 p:0 m:0 dnr:0 00:23:19.239 [2024-11-28 11:53:49.344982] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ce1330) with pdu=0x200016ef6458 00:23:19.239 [2024-11-28 11:53:49.346820] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:3787 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:19.239 [2024-11-28 11:53:49.346880] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:43 cdw0:0 sqhd:004f p:0 m:0 dnr:0 00:23:19.239 [2024-11-28 11:53:49.358839] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ce1330) with pdu=0x200016ef5be8 00:23:19.239 [2024-11-28 11:53:49.360620] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:24742 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:19.239 [2024-11-28 11:53:49.360697] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:45 cdw0:0 sqhd:004d p:0 m:0 dnr:0 00:23:19.499 [2024-11-28 11:53:49.372791] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ce1330) with pdu=0x200016ef5378 00:23:19.499 [2024-11-28 11:53:49.374586] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:20020 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:19.499 [2024-11-28 11:53:49.374633] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND 
TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:47 cdw0:0 sqhd:004b p:0 m:0 dnr:0 00:23:19.499 [2024-11-28 11:53:49.386310] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ce1330) with pdu=0x200016ef4b08 00:23:19.499 [2024-11-28 11:53:49.388040] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:17557 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:19.499 [2024-11-28 11:53:49.388067] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:49 cdw0:0 sqhd:0049 p:0 m:0 dnr:0 00:23:19.499 [2024-11-28 11:53:49.399521] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ce1330) with pdu=0x200016ef4298 00:23:19.499 [2024-11-28 11:53:49.401222] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:23728 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:19.499 [2024-11-28 11:53:49.401249] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:51 cdw0:0 sqhd:0047 p:0 m:0 dnr:0 00:23:19.499 [2024-11-28 11:53:49.412616] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ce1330) with pdu=0x200016ef3a28 00:23:19.499 [2024-11-28 11:53:49.414335] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:448 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:19.499 [2024-11-28 11:53:49.414377] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:53 cdw0:0 sqhd:0045 p:0 m:0 dnr:0 00:23:19.499 [2024-11-28 11:53:49.425883] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ce1330) with pdu=0x200016ef31b8 00:23:19.499 [2024-11-28 11:53:49.427582] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:11362 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:19.499 [2024-11-28 11:53:49.427625] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:55 cdw0:0 sqhd:0043 p:0 m:0 dnr:0 00:23:19.499 [2024-11-28 11:53:49.439012] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ce1330) with pdu=0x200016ef2948 00:23:19.499 [2024-11-28 11:53:49.440694] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:17810 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:19.499 [2024-11-28 11:53:49.440721] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:57 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:19.499 [2024-11-28 11:53:49.452148] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ce1330) with pdu=0x200016ef20d8 00:23:19.499 [2024-11-28 11:53:49.453817] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:14324 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:19.499 [2024-11-28 11:53:49.453846] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:59 cdw0:0 sqhd:003f p:0 m:0 dnr:0 00:23:19.499 [2024-11-28 11:53:49.465324] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ce1330) with pdu=0x200016ef1868 00:23:19.499 [2024-11-28 11:53:49.466957] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:7372 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:19.499 [2024-11-28 11:53:49.466985] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:61 cdw0:0 sqhd:003d p:0 m:0 dnr:0 00:23:19.499 [2024-11-28 11:53:49.478473] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ce1330) with pdu=0x200016ef0ff8 00:23:19.499 [2024-11-28 11:53:49.480109] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:5375 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:19.499 [2024-11-28 11:53:49.480136] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:63 cdw0:0 sqhd:003b p:0 m:0 dnr:0 00:23:19.499 [2024-11-28 11:53:49.491819] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ce1330) with pdu=0x200016ef0788 00:23:19.499 [2024-11-28 11:53:49.493428] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:65 nsid:1 lba:382 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:19.499 [2024-11-28 11:53:49.493470] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:65 cdw0:0 sqhd:0039 p:0 m:0 dnr:0 00:23:19.499 [2024-11-28 11:53:49.504967] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ce1330) with pdu=0x200016eeff18 00:23:19.499 [2024-11-28 11:53:49.506580] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:67 nsid:1 lba:8579 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:19.499 [2024-11-28 11:53:49.506623] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:67 cdw0:0 sqhd:0037 p:0 m:0 dnr:0 00:23:19.499 [2024-11-28 11:53:49.518089] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ce1330) with pdu=0x200016eef6a8 00:23:19.499 [2024-11-28 11:53:49.519692] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:69 nsid:1 lba:11250 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:19.499 [2024-11-28 11:53:49.519719] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:69 cdw0:0 sqhd:0035 p:0 m:0 dnr:0 00:23:19.499 [2024-11-28 11:53:49.531264] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ce1330) with pdu=0x200016eeee38 00:23:19.499 [2024-11-28 11:53:49.532827] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:71 nsid:1 lba:23632 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:19.499 [2024-11-28 11:53:49.532853] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:71 cdw0:0 sqhd:0033 p:0 m:0 dnr:0 00:23:19.499 [2024-11-28 11:53:49.544473] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ce1330) with pdu=0x200016eee5c8 00:23:19.499 [2024-11-28 11:53:49.546060] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:73 nsid:1 lba:22171 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:19.499 [2024-11-28 11:53:49.546087] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:73 cdw0:0 sqhd:0031 p:0 m:0 dnr:0 00:23:19.499 [2024-11-28 11:53:49.557746] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ce1330) with pdu=0x200016eedd58 00:23:19.499 [2024-11-28 11:53:49.559269] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:75 nsid:1 lba:23673 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:19.499 [2024-11-28 
11:53:49.559325] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:75 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:23:19.499 [2024-11-28 11:53:49.570890] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ce1330) with pdu=0x200016eed4e8 00:23:19.499 [2024-11-28 11:53:49.572413] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:77 nsid:1 lba:8457 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:19.499 [2024-11-28 11:53:49.572443] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:77 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:23:19.499 [2024-11-28 11:53:49.584077] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ce1330) with pdu=0x200016eecc78 00:23:19.499 [2024-11-28 11:53:49.585583] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:79 nsid:1 lba:25543 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:19.499 [2024-11-28 11:53:49.585626] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:79 cdw0:0 sqhd:002b p:0 m:0 dnr:0 00:23:19.499 [2024-11-28 11:53:49.597217] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ce1330) with pdu=0x200016eec408 00:23:19.499 [2024-11-28 11:53:49.598712] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:81 nsid:1 lba:4567 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:19.499 [2024-11-28 11:53:49.598742] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:81 cdw0:0 sqhd:0029 p:0 m:0 dnr:0 00:23:19.499 [2024-11-28 11:53:49.610665] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ce1330) with pdu=0x200016eebb98 00:23:19.499 [2024-11-28 11:53:49.612140] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:83 nsid:1 lba:17442 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:19.499 [2024-11-28 11:53:49.612183] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:83 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:23:19.759 [2024-11-28 11:53:49.624900] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ce1330) with pdu=0x200016eeb328 00:23:19.759 [2024-11-28 11:53:49.626371] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:85 nsid:1 lba:10138 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:19.759 [2024-11-28 11:53:49.626430] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:85 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:23:19.759 [2024-11-28 11:53:49.638936] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ce1330) with pdu=0x200016eeaab8 00:23:19.759 [2024-11-28 11:53:49.640471] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:87 nsid:1 lba:23560 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:19.759 [2024-11-28 11:53:49.640502] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:87 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:23:19.759 [2024-11-28 11:53:49.652803] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ce1330) with pdu=0x200016eea248 00:23:19.759 [2024-11-28 11:53:49.654290] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:89 nsid:1 lba:5329 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 
00:23:19.759 [2024-11-28 11:53:49.654341] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:89 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:19.759 [2024-11-28 11:53:49.666099] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ce1330) with pdu=0x200016ee99d8 00:23:19.759 [2024-11-28 11:53:49.667633] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:91 nsid:1 lba:1528 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:19.759 [2024-11-28 11:53:49.667675] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:91 cdw0:0 sqhd:001f p:0 m:0 dnr:0 00:23:19.759 [2024-11-28 11:53:49.679625] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ce1330) with pdu=0x200016ee9168 00:23:19.759 [2024-11-28 11:53:49.681005] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:93 nsid:1 lba:15066 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:19.759 [2024-11-28 11:53:49.681032] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:93 cdw0:0 sqhd:001d p:0 m:0 dnr:0 00:23:19.759 [2024-11-28 11:53:49.693236] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ce1330) with pdu=0x200016ee88f8 00:23:19.759 [2024-11-28 11:53:49.694638] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:95 nsid:1 lba:23271 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:19.759 [2024-11-28 11:53:49.694682] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:95 cdw0:0 sqhd:001b p:0 m:0 dnr:0 00:23:19.759 [2024-11-28 11:53:49.706570] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ce1330) with pdu=0x200016ee8088 00:23:19.759 [2024-11-28 11:53:49.707922] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:97 nsid:1 lba:15321 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:19.759 [2024-11-28 11:53:49.707948] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:97 cdw0:0 sqhd:0019 p:0 m:0 dnr:0 00:23:19.759 [2024-11-28 11:53:49.719892] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ce1330) with pdu=0x200016ee7818 00:23:19.759 [2024-11-28 11:53:49.721225] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:99 nsid:1 lba:25512 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:19.759 [2024-11-28 11:53:49.721252] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:99 cdw0:0 sqhd:0017 p:0 m:0 dnr:0 00:23:19.759 [2024-11-28 11:53:49.733250] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ce1330) with pdu=0x200016ee6fa8 00:23:19.759 [2024-11-28 11:53:49.734611] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:101 nsid:1 lba:19377 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:19.759 [2024-11-28 11:53:49.734656] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:101 cdw0:0 sqhd:0015 p:0 m:0 dnr:0 00:23:19.759 [2024-11-28 11:53:49.746610] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ce1330) with pdu=0x200016ee6738 00:23:19.759 [2024-11-28 11:53:49.747915] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:103 nsid:1 lba:10599 len:1 SGL DATA 
BLOCK OFFSET 0x0 len:0x1000 00:23:19.759 [2024-11-28 11:53:49.747941] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:103 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:23:19.759 [2024-11-28 11:53:49.759740] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ce1330) with pdu=0x200016ee5ec8 00:23:19.759 [2024-11-28 11:53:49.761054] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:105 nsid:1 lba:7151 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:19.759 [2024-11-28 11:53:49.761082] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:105 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:23:19.759 [2024-11-28 11:53:49.772934] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ce1330) with pdu=0x200016ee5658 00:23:19.759 [2024-11-28 11:53:49.774222] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:107 nsid:1 lba:8461 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:19.760 [2024-11-28 11:53:49.774252] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:107 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:23:19.760 [2024-11-28 11:53:49.786322] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ce1330) with pdu=0x200016ee4de8 00:23:19.760 [2024-11-28 11:53:49.787592] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:109 nsid:1 lba:3769 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:19.760 [2024-11-28 11:53:49.787619] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:109 cdw0:0 sqhd:000d p:0 m:0 dnr:0 00:23:19.760 [2024-11-28 11:53:49.799740] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ce1330) with pdu=0x200016ee4578 00:23:19.760 [2024-11-28 11:53:49.800983] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:111 nsid:1 lba:18876 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:19.760 [2024-11-28 11:53:49.801010] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:111 cdw0:0 sqhd:000b p:0 m:0 dnr:0 00:23:19.760 [2024-11-28 11:53:49.813269] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ce1330) with pdu=0x200016ee3d08 00:23:19.760 [2024-11-28 11:53:49.814526] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:113 nsid:1 lba:19650 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:19.760 [2024-11-28 11:53:49.814555] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:113 cdw0:0 sqhd:0009 p:0 m:0 dnr:0 00:23:19.760 [2024-11-28 11:53:49.826666] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ce1330) with pdu=0x200016ee3498 00:23:19.760 [2024-11-28 11:53:49.827890] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:115 nsid:1 lba:21042 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:19.760 [2024-11-28 11:53:49.827917] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:115 cdw0:0 sqhd:0007 p:0 m:0 dnr:0 00:23:19.760 [2024-11-28 11:53:49.839879] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ce1330) with pdu=0x200016ee2c28 00:23:19.760 [2024-11-28 11:53:49.841077] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 
cid:117 nsid:1 lba:20023 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:19.760 [2024-11-28 11:53:49.841104] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:117 cdw0:0 sqhd:0005 p:0 m:0 dnr:0 00:23:19.760 [2024-11-28 11:53:49.853155] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ce1330) with pdu=0x200016ee23b8 00:23:19.760 [2024-11-28 11:53:49.854382] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:119 nsid:1 lba:9544 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:19.760 [2024-11-28 11:53:49.854432] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:119 cdw0:0 sqhd:0003 p:0 m:0 dnr:0 00:23:19.760 [2024-11-28 11:53:49.866437] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ce1330) with pdu=0x200016ee1b48 00:23:19.760 [2024-11-28 11:53:49.867619] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:121 nsid:1 lba:15025 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:19.760 [2024-11-28 11:53:49.867647] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:121 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:19.760 [2024-11-28 11:53:49.879566] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ce1330) with pdu=0x200016ee12d8 00:23:19.760 [2024-11-28 11:53:49.880733] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:123 nsid:1 lba:14676 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:19.760 [2024-11-28 11:53:49.880791] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:123 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:23:20.019 [2024-11-28 11:53:49.893146] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ce1330) with pdu=0x200016ee0a68 00:23:20.019 [2024-11-28 11:53:49.894296] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:17443 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:20.019 [2024-11-28 11:53:49.894347] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:125 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:23:20.019 [2024-11-28 11:53:49.906380] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ce1330) with pdu=0x200016ee01f8 00:23:20.019 [2024-11-28 11:53:49.907512] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:5792 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:20.019 [2024-11-28 11:53:49.907539] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:23:20.019 [2024-11-28 11:53:49.919683] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ce1330) with pdu=0x200016edf988 00:23:20.019 [2024-11-28 11:53:49.920791] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:13516 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:20.019 [2024-11-28 11:53:49.920819] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 00:23:20.019 [2024-11-28 11:53:49.932876] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ce1330) with pdu=0x200016edf118 00:23:20.019 [2024-11-28 11:53:49.933965] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:7388 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:20.019 [2024-11-28 11:53:49.933993] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0077 p:0 m:0 dnr:0 00:23:20.019 [2024-11-28 11:53:49.946099] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ce1330) with pdu=0x200016ede8a8 00:23:20.019 [2024-11-28 11:53:49.947195] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:3109 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:20.020 [2024-11-28 11:53:49.947223] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:23:20.020 [2024-11-28 11:53:49.959293] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ce1330) with pdu=0x200016ede038 00:23:20.020 [2024-11-28 11:53:49.960369] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:11144 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:20.020 [2024-11-28 11:53:49.960404] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0073 p:0 m:0 dnr:0 00:23:20.020 [2024-11-28 11:53:49.977893] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ce1330) with pdu=0x200016ede038 00:23:20.020 [2024-11-28 11:53:49.979932] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:13011 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:20.020 [2024-11-28 11:53:49.979961] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0070 p:0 m:0 dnr:0 00:23:20.020 [2024-11-28 11:53:49.991017] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ce1330) with pdu=0x200016ede8a8 00:23:20.020 [2024-11-28 11:53:49.994189] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:13672 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:20.020 [2024-11-28 11:53:49.994217] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:16 cdw0:0 sqhd:006e p:0 m:0 dnr:0 00:23:20.020 18851.00 IOPS, 73.64 MiB/s [2024-11-28T11:53:50.146Z] [2024-11-28 11:53:50.005502] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ce1330) with pdu=0x200016edf118 00:23:20.020 [2024-11-28 11:53:50.007506] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:15765 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:20.020 [2024-11-28 11:53:50.007533] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:124 cdw0:0 sqhd:006c p:0 m:0 dnr:0 00:23:20.020 [2024-11-28 11:53:50.018578] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ce1330) with pdu=0x200016edf988 00:23:20.020 [2024-11-28 11:53:50.020560] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:120 nsid:1 lba:4850 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:20.020 [2024-11-28 11:53:50.020588] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:120 cdw0:0 sqhd:006a p:0 m:0 dnr:0 00:23:20.020 [2024-11-28 11:53:50.031685] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ce1330) with 
pdu=0x200016ee01f8 00:23:20.020 [2024-11-28 11:53:50.033653] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:116 nsid:1 lba:12085 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:20.020 [2024-11-28 11:53:50.033680] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:116 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:23:20.020 [2024-11-28 11:53:50.044804] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ce1330) with pdu=0x200016ee0a68 00:23:20.020 [2024-11-28 11:53:50.046779] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:112 nsid:1 lba:16322 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:20.020 [2024-11-28 11:53:50.046823] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:112 cdw0:0 sqhd:0066 p:0 m:0 dnr:0 00:23:20.020 [2024-11-28 11:53:50.057927] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ce1330) with pdu=0x200016ee12d8 00:23:20.020 [2024-11-28 11:53:50.059872] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 nsid:1 lba:6521 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:20.020 [2024-11-28 11:53:50.059898] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:108 cdw0:0 sqhd:0064 p:0 m:0 dnr:0 00:23:20.020 [2024-11-28 11:53:50.071066] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ce1330) with pdu=0x200016ee1b48 00:23:20.020 [2024-11-28 11:53:50.072985] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:104 nsid:1 lba:9987 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:20.020 [2024-11-28 11:53:50.073014] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:104 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:23:20.020 [2024-11-28 11:53:50.084413] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ce1330) with pdu=0x200016ee23b8 00:23:20.020 [2024-11-28 11:53:50.086322] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:100 nsid:1 lba:24850 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:20.020 [2024-11-28 11:53:50.086349] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:100 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:23:20.020 [2024-11-28 11:53:50.097573] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ce1330) with pdu=0x200016ee2c28 00:23:20.020 [2024-11-28 11:53:50.099474] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:96 nsid:1 lba:16102 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:20.020 [2024-11-28 11:53:50.099517] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:96 cdw0:0 sqhd:005e p:0 m:0 dnr:0 00:23:20.020 [2024-11-28 11:53:50.110758] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ce1330) with pdu=0x200016ee3498 00:23:20.020 [2024-11-28 11:53:50.112647] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:92 nsid:1 lba:24714 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:20.020 [2024-11-28 11:53:50.112689] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:92 cdw0:0 sqhd:005c p:0 m:0 dnr:0 00:23:20.020 [2024-11-28 11:53:50.123951] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest 
error on tqpair=(0x1ce1330) with pdu=0x200016ee3d08 00:23:20.020 [2024-11-28 11:53:50.125812] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:88 nsid:1 lba:19213 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:20.020 [2024-11-28 11:53:50.125839] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:88 cdw0:0 sqhd:005a p:0 m:0 dnr:0 00:23:20.020 [2024-11-28 11:53:50.137069] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ce1330) with pdu=0x200016ee4578 00:23:20.020 [2024-11-28 11:53:50.138929] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:84 nsid:1 lba:23096 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:20.020 [2024-11-28 11:53:50.138973] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:84 cdw0:0 sqhd:0058 p:0 m:0 dnr:0 00:23:20.280 [2024-11-28 11:53:50.150988] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ce1330) with pdu=0x200016ee4de8 00:23:20.280 [2024-11-28 11:53:50.152822] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:80 nsid:1 lba:12800 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:20.280 [2024-11-28 11:53:50.152849] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:80 cdw0:0 sqhd:0056 p:0 m:0 dnr:0 00:23:20.280 [2024-11-28 11:53:50.164198] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ce1330) with pdu=0x200016ee5658 00:23:20.280 [2024-11-28 11:53:50.166018] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:76 nsid:1 lba:15662 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:20.280 [2024-11-28 11:53:50.166045] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:76 cdw0:0 sqhd:0054 p:0 m:0 dnr:0 00:23:20.280 [2024-11-28 11:53:50.177584] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ce1330) with pdu=0x200016ee5ec8 00:23:20.280 [2024-11-28 11:53:50.179521] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:72 nsid:1 lba:9621 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:20.280 [2024-11-28 11:53:50.179563] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:72 cdw0:0 sqhd:0052 p:0 m:0 dnr:0 00:23:20.280 [2024-11-28 11:53:50.190889] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ce1330) with pdu=0x200016ee6738 00:23:20.280 [2024-11-28 11:53:50.192689] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:68 nsid:1 lba:5416 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:20.280 [2024-11-28 11:53:50.192715] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:68 cdw0:0 sqhd:0050 p:0 m:0 dnr:0 00:23:20.280 [2024-11-28 11:53:50.204026] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ce1330) with pdu=0x200016ee6fa8 00:23:20.280 [2024-11-28 11:53:50.205795] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:64 nsid:1 lba:19050 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:20.280 [2024-11-28 11:53:50.205822] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:64 cdw0:0 sqhd:004e p:0 m:0 dnr:0 00:23:20.280 [2024-11-28 11:53:50.217318] tcp.c:2233:data_crc32_calc_done: 
*ERROR*: Data digest error on tqpair=(0x1ce1330) with pdu=0x200016ee7818 00:23:20.280 [2024-11-28 11:53:50.219067] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:5502 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:20.280 [2024-11-28 11:53:50.219095] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:60 cdw0:0 sqhd:004c p:0 m:0 dnr:0 00:23:20.280 [2024-11-28 11:53:50.230542] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ce1330) with pdu=0x200016ee8088 00:23:20.280 [2024-11-28 11:53:50.232262] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:588 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:20.280 [2024-11-28 11:53:50.232288] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:56 cdw0:0 sqhd:004a p:0 m:0 dnr:0 00:23:20.280 [2024-11-28 11:53:50.243633] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ce1330) with pdu=0x200016ee88f8 00:23:20.280 [2024-11-28 11:53:50.245356] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:5906 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:20.280 [2024-11-28 11:53:50.245398] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:52 cdw0:0 sqhd:0048 p:0 m:0 dnr:0 00:23:20.281 [2024-11-28 11:53:50.256862] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ce1330) with pdu=0x200016ee9168 00:23:20.281 [2024-11-28 11:53:50.258588] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:10672 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:20.281 [2024-11-28 11:53:50.258634] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:48 cdw0:0 sqhd:0046 p:0 m:0 dnr:0 00:23:20.281 [2024-11-28 11:53:50.270071] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ce1330) with pdu=0x200016ee99d8 00:23:20.281 [2024-11-28 11:53:50.271776] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:6464 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:20.281 [2024-11-28 11:53:50.271804] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:44 cdw0:0 sqhd:0044 p:0 m:0 dnr:0 00:23:20.281 [2024-11-28 11:53:50.283339] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ce1330) with pdu=0x200016eea248 00:23:20.281 [2024-11-28 11:53:50.285002] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:10265 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:20.281 [2024-11-28 11:53:50.285030] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:40 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:23:20.281 [2024-11-28 11:53:50.296500] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ce1330) with pdu=0x200016eeaab8 00:23:20.281 [2024-11-28 11:53:50.298145] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:3539 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:20.281 [2024-11-28 11:53:50.298174] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:36 cdw0:0 sqhd:0040 p:0 m:0 dnr:0 00:23:20.281 [2024-11-28 11:53:50.309704] 
tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ce1330) with pdu=0x200016eeb328 00:23:20.281 [2024-11-28 11:53:50.311362] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:17295 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:20.281 [2024-11-28 11:53:50.311413] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:32 cdw0:0 sqhd:003e p:0 m:0 dnr:0 00:23:20.281 [2024-11-28 11:53:50.322883] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ce1330) with pdu=0x200016eebb98 00:23:20.281 [2024-11-28 11:53:50.324517] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:21485 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:20.281 [2024-11-28 11:53:50.324546] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:28 cdw0:0 sqhd:003c p:0 m:0 dnr:0 00:23:20.281 [2024-11-28 11:53:50.336072] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ce1330) with pdu=0x200016eec408 00:23:20.281 [2024-11-28 11:53:50.337713] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:13068 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:20.281 [2024-11-28 11:53:50.337740] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:24 cdw0:0 sqhd:003a p:0 m:0 dnr:0 00:23:20.281 [2024-11-28 11:53:50.349705] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ce1330) with pdu=0x200016eecc78 00:23:20.281 [2024-11-28 11:53:50.351351] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:14844 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:20.281 [2024-11-28 11:53:50.351373] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:20 cdw0:0 sqhd:0038 p:0 m:0 dnr:0 00:23:20.281 [2024-11-28 11:53:50.363807] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ce1330) with pdu=0x200016eed4e8 00:23:20.281 [2024-11-28 11:53:50.365403] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3057 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:20.281 [2024-11-28 11:53:50.365462] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0036 p:0 m:0 dnr:0 00:23:20.281 [2024-11-28 11:53:50.377824] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ce1330) with pdu=0x200016eedd58 00:23:20.281 [2024-11-28 11:53:50.379512] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:684 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:20.281 [2024-11-28 11:53:50.379542] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0034 p:0 m:0 dnr:0 00:23:20.281 [2024-11-28 11:53:50.391429] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ce1330) with pdu=0x200016eee5c8 00:23:20.281 [2024-11-28 11:53:50.392981] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:7788 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:20.281 [2024-11-28 11:53:50.393009] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0032 p:0 m:0 dnr:0 00:23:20.541 [2024-11-28 
11:53:50.404960] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ce1330) with pdu=0x200016eeee38 00:23:20.541 [2024-11-28 11:53:50.406542] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:22734 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:20.541 [2024-11-28 11:53:50.406574] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0030 p:0 m:0 dnr:0 00:23:20.541 [2024-11-28 11:53:50.418693] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ce1330) with pdu=0x200016eef6a8 00:23:20.541 [2024-11-28 11:53:50.420203] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:22770 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:20.541 [2024-11-28 11:53:50.420231] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:17 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:23:20.541 [2024-11-28 11:53:50.431839] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ce1330) with pdu=0x200016eeff18 00:23:20.541 [2024-11-28 11:53:50.433349] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:1619 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:20.541 [2024-11-28 11:53:50.433392] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:21 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:23:20.541 [2024-11-28 11:53:50.445006] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ce1330) with pdu=0x200016ef0788 00:23:20.541 [2024-11-28 11:53:50.446527] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:10922 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:20.541 [2024-11-28 11:53:50.446571] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:25 cdw0:0 sqhd:002a p:0 m:0 dnr:0 00:23:20.541 [2024-11-28 11:53:50.458112] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ce1330) with pdu=0x200016ef0ff8 00:23:20.541 [2024-11-28 11:53:50.459657] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:1590 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:20.541 [2024-11-28 11:53:50.459699] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:29 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:23:20.541 [2024-11-28 11:53:50.471337] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ce1330) with pdu=0x200016ef1868 00:23:20.541 [2024-11-28 11:53:50.472784] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:19677 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:20.541 [2024-11-28 11:53:50.472811] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:33 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:23:20.541 [2024-11-28 11:53:50.484449] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ce1330) with pdu=0x200016ef20d8 00:23:20.541 [2024-11-28 11:53:50.485882] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:10604 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:20.541 [2024-11-28 11:53:50.485910] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:37 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 
00:23:20.541 [2024-11-28 11:53:50.497668] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ce1330) with pdu=0x200016ef2948 00:23:20.541 [2024-11-28 11:53:50.499098] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:13162 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:20.541 [2024-11-28 11:53:50.499125] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:41 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:23:20.541 [2024-11-28 11:53:50.510824] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ce1330) with pdu=0x200016ef31b8 00:23:20.541 [2024-11-28 11:53:50.512240] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:17109 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:20.541 [2024-11-28 11:53:50.512267] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:45 cdw0:0 sqhd:0020 p:0 m:0 dnr:0 00:23:20.541 [2024-11-28 11:53:50.523911] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ce1330) with pdu=0x200016ef3a28 00:23:20.541 [2024-11-28 11:53:50.525331] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:17455 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:20.541 [2024-11-28 11:53:50.525373] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:49 cdw0:0 sqhd:001e p:0 m:0 dnr:0 00:23:20.541 [2024-11-28 11:53:50.537039] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ce1330) with pdu=0x200016ef4298 00:23:20.541 [2024-11-28 11:53:50.538431] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:4653 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:20.541 [2024-11-28 11:53:50.538470] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:53 cdw0:0 sqhd:001c p:0 m:0 dnr:0 00:23:20.541 [2024-11-28 11:53:50.550191] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ce1330) with pdu=0x200016ef4b08 00:23:20.541 [2024-11-28 11:53:50.551579] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:5503 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:20.541 [2024-11-28 11:53:50.551608] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:57 cdw0:0 sqhd:001a p:0 m:0 dnr:0 00:23:20.541 [2024-11-28 11:53:50.563314] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ce1330) with pdu=0x200016ef5378 00:23:20.541 [2024-11-28 11:53:50.564667] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:19962 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:20.541 [2024-11-28 11:53:50.564694] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:61 cdw0:0 sqhd:0018 p:0 m:0 dnr:0 00:23:20.541 [2024-11-28 11:53:50.576446] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ce1330) with pdu=0x200016ef5be8 00:23:20.541 [2024-11-28 11:53:50.577775] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:65 nsid:1 lba:12177 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:20.541 [2024-11-28 11:53:50.577802] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:65 cdw0:0 
sqhd:0016 p:0 m:0 dnr:0 00:23:20.541 [2024-11-28 11:53:50.589603] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ce1330) with pdu=0x200016ef6458 00:23:20.541 [2024-11-28 11:53:50.590924] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:69 nsid:1 lba:17534 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:20.541 [2024-11-28 11:53:50.590951] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:69 cdw0:0 sqhd:0014 p:0 m:0 dnr:0 00:23:20.541 [2024-11-28 11:53:50.602773] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ce1330) with pdu=0x200016ef6cc8 00:23:20.541 [2024-11-28 11:53:50.604071] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:73 nsid:1 lba:21742 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:20.541 [2024-11-28 11:53:50.604100] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:73 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:23:20.541 [2024-11-28 11:53:50.615991] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ce1330) with pdu=0x200016ef7538 00:23:20.541 [2024-11-28 11:53:50.617285] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:77 nsid:1 lba:18340 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:20.541 [2024-11-28 11:53:50.617319] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:77 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:23:20.541 [2024-11-28 11:53:50.629083] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ce1330) with pdu=0x200016ef7da8 00:23:20.541 [2024-11-28 11:53:50.630381] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:81 nsid:1 lba:24033 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:20.541 [2024-11-28 11:53:50.630431] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:81 cdw0:0 sqhd:000e p:0 m:0 dnr:0 00:23:20.541 [2024-11-28 11:53:50.642259] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ce1330) with pdu=0x200016ef8618 00:23:20.542 [2024-11-28 11:53:50.643535] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:85 nsid:1 lba:23542 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:20.542 [2024-11-28 11:53:50.643561] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:85 cdw0:0 sqhd:000c p:0 m:0 dnr:0 00:23:20.542 [2024-11-28 11:53:50.655387] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ce1330) with pdu=0x200016ef8e88 00:23:20.542 [2024-11-28 11:53:50.656636] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:89 nsid:1 lba:24297 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:20.542 [2024-11-28 11:53:50.656678] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:89 cdw0:0 sqhd:000a p:0 m:0 dnr:0 00:23:20.801 [2024-11-28 11:53:50.668825] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ce1330) with pdu=0x200016ef96f8 00:23:20.801 [2024-11-28 11:53:50.670104] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:93 nsid:1 lba:13956 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:20.801 [2024-11-28 11:53:50.670131] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR 
(00/22) qid:1 cid:93 cdw0:0 sqhd:0008 p:0 m:0 dnr:0 00:23:20.801 [2024-11-28 11:53:50.682145] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ce1330) with pdu=0x200016ef9f68 00:23:20.801 [2024-11-28 11:53:50.683427] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:97 nsid:1 lba:17961 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:20.801 [2024-11-28 11:53:50.683469] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:97 cdw0:0 sqhd:0006 p:0 m:0 dnr:0 00:23:20.801 [2024-11-28 11:53:50.695362] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ce1330) with pdu=0x200016efa7d8 00:23:20.801 [2024-11-28 11:53:50.696553] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:101 nsid:1 lba:13789 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:20.801 [2024-11-28 11:53:50.696595] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:101 cdw0:0 sqhd:0004 p:0 m:0 dnr:0 00:23:20.801 [2024-11-28 11:53:50.708495] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ce1330) with pdu=0x200016efb048 00:23:20.801 [2024-11-28 11:53:50.709682] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:105 nsid:1 lba:7125 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:20.801 [2024-11-28 11:53:50.709711] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:105 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:23:20.801 [2024-11-28 11:53:50.721647] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ce1330) with pdu=0x200016efb8b8 00:23:20.801 [2024-11-28 11:53:50.722886] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:109 nsid:1 lba:11769 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:20.801 [2024-11-28 11:53:50.722913] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:109 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:20.801 [2024-11-28 11:53:50.734879] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ce1330) with pdu=0x200016efc128 00:23:20.802 [2024-11-28 11:53:50.736023] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:113 nsid:1 lba:16806 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:20.802 [2024-11-28 11:53:50.736066] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:113 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:23:20.802 [2024-11-28 11:53:50.748067] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ce1330) with pdu=0x200016efc998 00:23:20.802 [2024-11-28 11:53:50.749211] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:117 nsid:1 lba:2858 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:20.802 [2024-11-28 11:53:50.749238] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:117 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:23:20.802 [2024-11-28 11:53:50.761214] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ce1330) with pdu=0x200016efd208 00:23:20.802 [2024-11-28 11:53:50.762340] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:121 nsid:1 lba:7492 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:20.802 [2024-11-28 11:53:50.762367] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:121 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:23:20.802 [2024-11-28 11:53:50.774283] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ce1330) with pdu=0x200016efda78 00:23:20.802 [2024-11-28 11:53:50.775410] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:23568 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:20.802 [2024-11-28 11:53:50.775437] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:125 cdw0:0 sqhd:0078 p:0 m:0 dnr:0 00:23:20.802 [2024-11-28 11:53:50.787389] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ce1330) with pdu=0x200016efe2e8 00:23:20.802 [2024-11-28 11:53:50.788476] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:16832 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:20.802 [2024-11-28 11:53:50.788505] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0076 p:0 m:0 dnr:0 00:23:20.802 [2024-11-28 11:53:50.800444] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ce1330) with pdu=0x200016efeb58 00:23:20.802 [2024-11-28 11:53:50.801536] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:18053 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:20.802 [2024-11-28 11:53:50.801565] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0074 p:0 m:0 dnr:0 00:23:20.802 [2024-11-28 11:53:50.819458] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ce1330) with pdu=0x200016efef90 00:23:20.802 [2024-11-28 11:53:50.821543] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:18426 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:20.802 [2024-11-28 11:53:50.821572] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0071 p:0 m:0 dnr:0 00:23:20.802 [2024-11-28 11:53:50.833440] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ce1330) with pdu=0x200016efeb58 00:23:20.802 [2024-11-28 11:53:50.835478] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:20364 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:20.802 [2024-11-28 11:53:50.835522] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:23:20.802 [2024-11-28 11:53:50.846971] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ce1330) with pdu=0x200016efe2e8 00:23:20.802 [2024-11-28 11:53:50.848966] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:119 nsid:1 lba:22039 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:20.802 [2024-11-28 11:53:50.848992] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:119 cdw0:0 sqhd:006d p:0 m:0 dnr:0 00:23:20.802 [2024-11-28 11:53:50.860580] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ce1330) with pdu=0x200016efda78 00:23:20.802 [2024-11-28 11:53:50.862615] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:111 nsid:1 lba:13893 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:20.802 [2024-11-28 11:53:50.862659] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:111 cdw0:0 sqhd:006b p:0 m:0 dnr:0 00:23:20.802 [2024-11-28 11:53:50.873910] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ce1330) with pdu=0x200016efd208 00:23:20.802 [2024-11-28 11:53:50.876009] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:103 nsid:1 lba:16876 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:20.802 [2024-11-28 11:53:50.876038] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:103 cdw0:0 sqhd:0069 p:0 m:0 dnr:0 00:23:20.802 [2024-11-28 11:53:50.887600] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ce1330) with pdu=0x200016efc998 00:23:20.802 [2024-11-28 11:53:50.889557] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:95 nsid:1 lba:11739 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:20.802 [2024-11-28 11:53:50.889600] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:95 cdw0:0 sqhd:0067 p:0 m:0 dnr:0 00:23:20.802 [2024-11-28 11:53:50.901162] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ce1330) with pdu=0x200016efc128 00:23:20.802 [2024-11-28 11:53:50.903172] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:87 nsid:1 lba:3699 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:20.802 [2024-11-28 11:53:50.903199] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:87 cdw0:0 sqhd:0065 p:0 m:0 dnr:0 00:23:20.802 [2024-11-28 11:53:50.914632] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ce1330) with pdu=0x200016efb8b8 00:23:20.802 [2024-11-28 11:53:50.916560] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:79 nsid:1 lba:21106 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:20.802 [2024-11-28 11:53:50.916603] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:79 cdw0:0 sqhd:0063 p:0 m:0 dnr:0 00:23:21.061 [2024-11-28 11:53:50.928242] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ce1330) with pdu=0x200016efb048 00:23:21.061 [2024-11-28 11:53:50.930173] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:71 nsid:1 lba:9275 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:21.061 [2024-11-28 11:53:50.930201] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:71 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:21.061 [2024-11-28 11:53:50.941633] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ce1330) with pdu=0x200016efa7d8 00:23:21.061 [2024-11-28 11:53:50.943594] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:9500 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:21.061 [2024-11-28 11:53:50.943637] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:63 cdw0:0 sqhd:005f p:0 m:0 dnr:0 00:23:21.061 [2024-11-28 11:53:50.954924] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ce1330) with pdu=0x200016ef9f68 00:23:21.061 [2024-11-28 11:53:50.956837] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:5331 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:21.061 [2024-11-28 
11:53:50.956864] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:55 cdw0:0 sqhd:005d p:0 m:0 dnr:0 00:23:21.061 [2024-11-28 11:53:50.968200] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ce1330) with pdu=0x200016ef96f8 00:23:21.061 [2024-11-28 11:53:50.970120] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:7875 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:21.061 [2024-11-28 11:53:50.970147] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:47 cdw0:0 sqhd:005b p:0 m:0 dnr:0 00:23:21.061 [2024-11-28 11:53:50.981871] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ce1330) with pdu=0x200016ef8e88 00:23:21.061 [2024-11-28 11:53:50.983741] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:22844 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:21.061 [2024-11-28 11:53:50.983784] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:39 cdw0:0 sqhd:0059 p:0 m:0 dnr:0 00:23:21.061 [2024-11-28 11:53:50.994056] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ce1330) with pdu=0x200016ef8618 00:23:21.061 18976.50 IOPS, 74.13 MiB/s [2024-11-28T11:53:51.187Z] [2024-11-28 11:53:50.994940] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:12831 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:21.061 [2024-11-28 11:53:50.994961] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:31 cdw0:0 sqhd:0057 p:0 m:0 dnr:0 00:23:21.061 00:23:21.061 Latency(us) 00:23:21.061 [2024-11-28T11:53:51.187Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:23:21.061 Job: nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:23:21.061 nvme0n1 : 2.00 18995.09 74.20 0.00 0.00 6730.02 4259.84 26095.24 00:23:21.061 [2024-11-28T11:53:51.187Z] =================================================================================================================== 00:23:21.061 [2024-11-28T11:53:51.187Z] Total : 18995.09 74.20 0.00 0.00 6730.02 4259.84 26095.24 00:23:21.061 { 00:23:21.061 "results": [ 00:23:21.061 { 00:23:21.061 "job": "nvme0n1", 00:23:21.061 "core_mask": "0x2", 00:23:21.061 "workload": "randwrite", 00:23:21.061 "status": "finished", 00:23:21.062 "queue_depth": 128, 00:23:21.062 "io_size": 4096, 00:23:21.062 "runtime": 2.004781, 00:23:21.062 "iops": 18995.092232019357, 00:23:21.062 "mibps": 74.19957903132561, 00:23:21.062 "io_failed": 0, 00:23:21.062 "io_timeout": 0, 00:23:21.062 "avg_latency_us": 6730.022587833112, 00:23:21.062 "min_latency_us": 4259.84, 00:23:21.062 "max_latency_us": 26095.243636363637 00:23:21.062 } 00:23:21.062 ], 00:23:21.062 "core_count": 1 00:23:21.062 } 00:23:21.062 11:53:51 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # get_transient_errcount nvme0n1 00:23:21.062 11:53:51 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@27 -- # bperf_rpc bdev_get_iostat -b nvme0n1 00:23:21.062 11:53:51 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@28 -- # jq -r '.bdevs[0] 00:23:21.062 | .driver_specific 00:23:21.062 | .nvme_error 00:23:21.062 | .status_code 00:23:21.062 | .command_transient_transport_error' 00:23:21.062 11:53:51 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- 
host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_get_iostat -b nvme0n1 00:23:21.320 11:53:51 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # (( 149 > 0 )) 00:23:21.320 11:53:51 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@73 -- # killprocess 97371 00:23:21.320 11:53:51 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@954 -- # '[' -z 97371 ']' 00:23:21.320 11:53:51 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@958 -- # kill -0 97371 00:23:21.320 11:53:51 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@959 -- # uname 00:23:21.320 11:53:51 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:23:21.320 11:53:51 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 97371 00:23:21.320 killing process with pid 97371 00:23:21.320 Received shutdown signal, test time was about 2.000000 seconds 00:23:21.320 00:23:21.320 Latency(us) 00:23:21.320 [2024-11-28T11:53:51.446Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:23:21.320 [2024-11-28T11:53:51.446Z] =================================================================================================================== 00:23:21.320 [2024-11-28T11:53:51.446Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:23:21.320 11:53:51 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:23:21.320 11:53:51 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:23:21.320 11:53:51 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@972 -- # echo 'killing process with pid 97371' 00:23:21.320 11:53:51 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@973 -- # kill 97371 00:23:21.320 11:53:51 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@978 -- # wait 97371 00:23:21.579 11:53:51 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@115 -- # run_bperf_err randwrite 131072 16 00:23:21.579 11:53:51 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@54 -- # local rw bs qd 00:23:21.579 11:53:51 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # rw=randwrite 00:23:21.579 11:53:51 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # bs=131072 00:23:21.579 11:53:51 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # qd=16 00:23:21.579 11:53:51 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@58 -- # bperfpid=97424 00:23:21.579 11:53:51 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@57 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randwrite -o 131072 -t 2 -q 16 -z 00:23:21.580 11:53:51 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@60 -- # waitforlisten 97424 /var/tmp/bperf.sock 00:23:21.580 11:53:51 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@835 -- # '[' -z 97424 ']' 00:23:21.580 11:53:51 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bperf.sock 00:23:21.580 11:53:51 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@840 -- # local 
max_retries=100 00:23:21.580 11:53:51 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:23:21.580 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:23:21.580 11:53:51 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@844 -- # xtrace_disable 00:23:21.580 11:53:51 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:23:21.580 I/O size of 131072 is greater than zero copy threshold (65536). 00:23:21.580 Zero copy mechanism will not be used. 00:23:21.580 [2024-11-28 11:53:51.556875] Starting SPDK v25.01-pre git sha1 35cd3e84d / DPDK 24.11.0-rc4 initialization... 00:23:21.580 [2024-11-28 11:53:51.556963] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid97424 ] 00:23:21.580 [2024-11-28 11:53:51.682040] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.11.0-rc4 is used. There is no support for it in SPDK. Enabled only for validation. 00:23:21.839 [2024-11-28 11:53:51.705572] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:23:21.839 [2024-11-28 11:53:51.737392] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:23:21.839 [2024-11-28 11:53:51.787975] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:23:21.839 11:53:51 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:23:21.839 11:53:51 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@868 -- # return 0 00:23:21.839 11:53:51 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@61 -- # bperf_rpc bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:23:21.839 11:53:51 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:23:22.097 11:53:52 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@63 -- # rpc_cmd accel_error_inject_error -o crc32c -t disable 00:23:22.097 11:53:52 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:22.097 11:53:52 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:23:22.097 11:53:52 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:22.097 11:53:52 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@64 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:23:22.097 11:53:52 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:23:22.356 nvme0n1 00:23:22.356 11:53:52 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@67 -- # rpc_cmd accel_error_inject_error -o crc32c -t corrupt -i 32 00:23:22.356 11:53:52 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:23:22.356 11:53:52 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:23:22.356 11:53:52 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:22.356 11:53:52 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@69 -- # bperf_py perform_tests 00:23:22.356 11:53:52 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@19 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:23:22.617 I/O size of 131072 is greater than zero copy threshold (65536). 00:23:22.617 Zero copy mechanism will not be used. 00:23:22.617 Running I/O for 2 seconds... 00:23:22.617 [2024-11-28 11:53:52.533311] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ce1670) with pdu=0x200016eff3c8 00:23:22.617 [2024-11-28 11:53:52.533453] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:10080 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:22.617 [2024-11-28 11:53:52.533482] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:23:22.617 [2024-11-28 11:53:52.538943] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ce1670) with pdu=0x200016eff3c8 00:23:22.617 [2024-11-28 11:53:52.539126] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:25120 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:22.617 [2024-11-28 11:53:52.539148] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:23:22.617 [2024-11-28 11:53:52.544334] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ce1670) with pdu=0x200016eff3c8 00:23:22.617 [2024-11-28 11:53:52.544527] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:6688 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:22.617 [2024-11-28 11:53:52.544550] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:23:22.617 [2024-11-28 11:53:52.549717] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ce1670) with pdu=0x200016eff3c8 00:23:22.617 [2024-11-28 11:53:52.549871] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:21152 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:22.617 [2024-11-28 11:53:52.549892] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:23:22.617 [2024-11-28 11:53:52.555074] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ce1670) with pdu=0x200016eff3c8 00:23:22.617 [2024-11-28 11:53:52.555209] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:768 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:22.617 [2024-11-28 11:53:52.555230] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:23:22.617 [2024-11-28 11:53:52.560342] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ce1670) with pdu=0x200016eff3c8 00:23:22.617 [2024-11-28 11:53:52.560492] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:20128 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:22.617 [2024-11-28 11:53:52.560513] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:23:22.617 [2024-11-28 11:53:52.565686] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ce1670) with pdu=0x200016eff3c8 00:23:22.617 [2024-11-28 11:53:52.565834] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:4832 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:22.617 [2024-11-28 11:53:52.565855] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:23:22.617 [2024-11-28 11:53:52.571111] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ce1670) with pdu=0x200016eff3c8 00:23:22.617 [2024-11-28 11:53:52.571311] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:20736 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:22.617 [2024-11-28 11:53:52.571333] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:23:22.617 [2024-11-28 11:53:52.576527] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ce1670) with pdu=0x200016eff3c8 00:23:22.617 [2024-11-28 11:53:52.576706] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:11264 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:22.617 [2024-11-28 11:53:52.576727] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:23:22.617 [2024-11-28 11:53:52.581801] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ce1670) with pdu=0x200016eff3c8 00:23:22.617 [2024-11-28 11:53:52.581980] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:23744 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:22.617 [2024-11-28 11:53:52.582000] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:23:22.617 [2024-11-28 11:53:52.587229] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ce1670) with pdu=0x200016eff3c8 00:23:22.617 [2024-11-28 11:53:52.587386] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:2912 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:22.617 [2024-11-28 11:53:52.587408] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:23:22.617 [2024-11-28 11:53:52.592488] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ce1670) with pdu=0x200016eff3c8 00:23:22.617 [2024-11-28 11:53:52.592685] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:3712 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:22.617 [2024-11-28 11:53:52.592721] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:23:22.617 [2024-11-28 11:53:52.597858] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ce1670) with pdu=0x200016eff3c8 00:23:22.617 [2024-11-28 
11:53:52.597994] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:3840 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:22.617 [2024-11-28 11:53:52.598015] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:23:22.617 [2024-11-28 11:53:52.603173] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ce1670) with pdu=0x200016eff3c8 00:23:22.617 [2024-11-28 11:53:52.603361] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:17120 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:22.617 [2024-11-28 11:53:52.603383] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:23:22.617 [2024-11-28 11:53:52.608484] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ce1670) with pdu=0x200016eff3c8 00:23:22.617 [2024-11-28 11:53:52.608646] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:23456 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:22.617 [2024-11-28 11:53:52.608667] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:23:22.617 [2024-11-28 11:53:52.613878] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ce1670) with pdu=0x200016eff3c8 00:23:22.617 [2024-11-28 11:53:52.614033] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:15424 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:22.617 [2024-11-28 11:53:52.614055] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:23:22.617 [2024-11-28 11:53:52.619279] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ce1670) with pdu=0x200016eff3c8 00:23:22.617 [2024-11-28 11:53:52.619429] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:3808 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:22.617 [2024-11-28 11:53:52.619450] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:23:22.617 [2024-11-28 11:53:52.624592] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ce1670) with pdu=0x200016eff3c8 00:23:22.617 [2024-11-28 11:53:52.624728] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:15616 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:22.617 [2024-11-28 11:53:52.624748] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:23:22.617 [2024-11-28 11:53:52.629834] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ce1670) with pdu=0x200016eff3c8 00:23:22.617 [2024-11-28 11:53:52.630012] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:8512 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:22.617 [2024-11-28 11:53:52.630032] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:23:22.617 [2024-11-28 11:53:52.635289] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ce1670) with 
pdu=0x200016eff3c8 00:23:22.617 [2024-11-28 11:53:52.635476] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:5504 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:22.617 [2024-11-28 11:53:52.635496] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:23:22.617 [2024-11-28 11:53:52.640594] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ce1670) with pdu=0x200016eff3c8 00:23:22.617 [2024-11-28 11:53:52.640739] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:14368 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:22.617 [2024-11-28 11:53:52.640760] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:23:22.617 [2024-11-28 11:53:52.645935] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ce1670) with pdu=0x200016eff3c8 00:23:22.617 [2024-11-28 11:53:52.646092] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:6560 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:22.617 [2024-11-28 11:53:52.646112] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:23:22.617 [2024-11-28 11:53:52.651257] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ce1670) with pdu=0x200016eff3c8 00:23:22.617 [2024-11-28 11:53:52.651411] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:992 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:22.618 [2024-11-28 11:53:52.651431] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:23:22.618 [2024-11-28 11:53:52.656506] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ce1670) with pdu=0x200016eff3c8 00:23:22.618 [2024-11-28 11:53:52.656656] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:24416 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:22.618 [2024-11-28 11:53:52.656676] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:23:22.618 [2024-11-28 11:53:52.661728] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ce1670) with pdu=0x200016eff3c8 00:23:22.618 [2024-11-28 11:53:52.661875] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:384 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:22.618 [2024-11-28 11:53:52.661897] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:23:22.618 [2024-11-28 11:53:52.666994] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ce1670) with pdu=0x200016eff3c8 00:23:22.618 [2024-11-28 11:53:52.667175] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:9376 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:22.618 [2024-11-28 11:53:52.667206] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:23:22.618 [2024-11-28 11:53:52.672350] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on 
tqpair=(0x1ce1670) with pdu=0x200016eff3c8 00:23:22.618 [2024-11-28 11:53:52.672503] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:8480 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:22.618 [2024-11-28 11:53:52.672523] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:23:22.618 [2024-11-28 11:53:52.677655] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ce1670) with pdu=0x200016eff3c8 00:23:22.618 [2024-11-28 11:53:52.677828] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:8448 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:22.618 [2024-11-28 11:53:52.677850] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:23:22.618 [2024-11-28 11:53:52.682959] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ce1670) with pdu=0x200016eff3c8 00:23:22.618 [2024-11-28 11:53:52.683092] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:16288 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:22.618 [2024-11-28 11:53:52.683112] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:23:22.618 [2024-11-28 11:53:52.688208] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ce1670) with pdu=0x200016eff3c8 00:23:22.618 [2024-11-28 11:53:52.688356] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:8256 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:22.618 [2024-11-28 11:53:52.688377] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:23:22.618 [2024-11-28 11:53:52.693454] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ce1670) with pdu=0x200016eff3c8 00:23:22.618 [2024-11-28 11:53:52.693612] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:17632 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:22.618 [2024-11-28 11:53:52.693634] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:23:22.618 [2024-11-28 11:53:52.698798] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ce1670) with pdu=0x200016eff3c8 00:23:22.618 [2024-11-28 11:53:52.698963] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:23488 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:22.618 [2024-11-28 11:53:52.698983] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:23:22.618 [2024-11-28 11:53:52.704054] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ce1670) with pdu=0x200016eff3c8 00:23:22.618 [2024-11-28 11:53:52.704200] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:5472 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:22.618 [2024-11-28 11:53:52.704221] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:23:22.618 [2024-11-28 11:53:52.709311] tcp.c:2233:data_crc32_calc_done: 
*ERROR*: Data digest error on tqpair=(0x1ce1670) with pdu=0x200016eff3c8 00:23:22.618 [2024-11-28 11:53:52.709466] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:22752 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:22.618 [2024-11-28 11:53:52.709485] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:23:22.618 [2024-11-28 11:53:52.714602] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ce1670) with pdu=0x200016eff3c8 00:23:22.618 [2024-11-28 11:53:52.714737] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:13472 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:22.618 [2024-11-28 11:53:52.714778] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:23:22.618 [2024-11-28 11:53:52.719892] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ce1670) with pdu=0x200016eff3c8 00:23:22.618 [2024-11-28 11:53:52.720053] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:10944 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:22.618 [2024-11-28 11:53:52.720074] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:23:22.618 [2024-11-28 11:53:52.725164] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ce1670) with pdu=0x200016eff3c8 00:23:22.618 [2024-11-28 11:53:52.725363] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:21312 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:22.618 [2024-11-28 11:53:52.725385] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:23:22.618 [2024-11-28 11:53:52.730463] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ce1670) with pdu=0x200016eff3c8 00:23:22.618 [2024-11-28 11:53:52.730690] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:16480 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:22.618 [2024-11-28 11:53:52.730711] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:23:22.618 [2024-11-28 11:53:52.735807] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ce1670) with pdu=0x200016eff3c8 00:23:22.618 [2024-11-28 11:53:52.736055] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:8256 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:22.618 [2024-11-28 11:53:52.736077] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:23:22.878 [2024-11-28 11:53:52.741459] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ce1670) with pdu=0x200016eff3c8 00:23:22.878 [2024-11-28 11:53:52.741669] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:20224 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:22.878 [2024-11-28 11:53:52.741690] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:23:22.878 [2024-11-28 11:53:52.747101] 
tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ce1670) with pdu=0x200016eff3c8 00:23:22.878 [2024-11-28 11:53:52.747265] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:0 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:22.878 [2024-11-28 11:53:52.747286] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:23:22.878 [2024-11-28 11:53:52.752464] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ce1670) with pdu=0x200016eff3c8 00:23:22.878 [2024-11-28 11:53:52.752616] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:16160 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:22.878 [2024-11-28 11:53:52.752637] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:23:22.878 [2024-11-28 11:53:52.757809] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ce1670) with pdu=0x200016eff3c8 00:23:22.878 [2024-11-28 11:53:52.757988] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:4416 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:22.878 [2024-11-28 11:53:52.758019] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:23:22.878 [2024-11-28 11:53:52.763188] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ce1670) with pdu=0x200016eff3c8 00:23:22.878 [2024-11-28 11:53:52.763359] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:18368 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:22.878 [2024-11-28 11:53:52.763393] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:23:22.878 [2024-11-28 11:53:52.768474] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ce1670) with pdu=0x200016eff3c8 00:23:22.878 [2024-11-28 11:53:52.768652] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:1120 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:22.878 [2024-11-28 11:53:52.768672] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:23:22.878 [2024-11-28 11:53:52.773841] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ce1670) with pdu=0x200016eff3c8 00:23:22.879 [2024-11-28 11:53:52.773986] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:9056 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:22.879 [2024-11-28 11:53:52.774006] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:23:22.879 [2024-11-28 11:53:52.779210] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ce1670) with pdu=0x200016eff3c8 00:23:22.879 [2024-11-28 11:53:52.779347] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:22432 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:22.879 [2024-11-28 11:53:52.779369] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:23:22.879 
[2024-11-28 11:53:52.784500] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ce1670) with pdu=0x200016eff3c8 00:23:22.879 [2024-11-28 11:53:52.784648] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:7808 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:22.879 [2024-11-28 11:53:52.784668] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:23:22.879 [2024-11-28 11:53:52.789826] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ce1670) with pdu=0x200016eff3c8 00:23:22.879 [2024-11-28 11:53:52.789991] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:8928 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:22.879 [2024-11-28 11:53:52.790011] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:23:22.879 [2024-11-28 11:53:52.795109] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ce1670) with pdu=0x200016eff3c8 00:23:22.879 [2024-11-28 11:53:52.795257] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:13632 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:22.879 [2024-11-28 11:53:52.795277] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:23:22.879 [2024-11-28 11:53:52.800363] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ce1670) with pdu=0x200016eff3c8 00:23:22.879 [2024-11-28 11:53:52.800498] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:22656 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:22.879 [2024-11-28 11:53:52.800518] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:23:22.879 [2024-11-28 11:53:52.805689] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ce1670) with pdu=0x200016eff3c8 00:23:22.879 [2024-11-28 11:53:52.805863] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:5664 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:22.879 [2024-11-28 11:53:52.805893] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:23:22.879 [2024-11-28 11:53:52.811124] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ce1670) with pdu=0x200016eff3c8 00:23:22.879 [2024-11-28 11:53:52.811256] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:5216 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:22.879 [2024-11-28 11:53:52.811277] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:23:22.879 [2024-11-28 11:53:52.816402] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ce1670) with pdu=0x200016eff3c8 00:23:22.879 [2024-11-28 11:53:52.816535] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:23264 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:22.879 [2024-11-28 11:53:52.816556] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 
p:0 m:0 dnr:0 00:23:22.879 [2024-11-28 11:53:52.821703] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ce1670) with pdu=0x200016eff3c8 00:23:22.879 [2024-11-28 11:53:52.821858] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:24608 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:22.879 [2024-11-28 11:53:52.821878] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:23:22.879 [2024-11-28 11:53:52.827032] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ce1670) with pdu=0x200016eff3c8 00:23:22.879 [2024-11-28 11:53:52.827180] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:15296 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:22.879 [2024-11-28 11:53:52.827200] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:23:22.879 [2024-11-28 11:53:52.832223] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ce1670) with pdu=0x200016eff3c8 00:23:22.879 [2024-11-28 11:53:52.832370] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:7008 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:22.879 [2024-11-28 11:53:52.832391] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:23:22.879 [2024-11-28 11:53:52.838161] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ce1670) with pdu=0x200016eff3c8 00:23:22.879 [2024-11-28 11:53:52.838386] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:20576 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:22.879 [2024-11-28 11:53:52.838408] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:23:22.879 [2024-11-28 11:53:52.843584] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ce1670) with pdu=0x200016eff3c8 00:23:22.879 [2024-11-28 11:53:52.843807] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:7168 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:22.879 [2024-11-28 11:53:52.843841] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:23:22.879 [2024-11-28 11:53:52.848964] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ce1670) with pdu=0x200016eff3c8 00:23:22.879 [2024-11-28 11:53:52.849038] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:22208 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:22.879 [2024-11-28 11:53:52.849061] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:23:22.879 [2024-11-28 11:53:52.854666] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ce1670) with pdu=0x200016eff3c8 00:23:22.879 [2024-11-28 11:53:52.854733] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:12480 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:22.879 [2024-11-28 11:53:52.854757] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) 
qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:23:22.879 [2024-11-28 11:53:52.860219] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ce1670) with pdu=0x200016eff3c8 00:23:22.879 [2024-11-28 11:53:52.860290] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:5184 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:22.879 [2024-11-28 11:53:52.860314] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:23:22.879 [2024-11-28 11:53:52.865510] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ce1670) with pdu=0x200016eff3c8 00:23:22.879 [2024-11-28 11:53:52.865590] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:13248 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:22.879 [2024-11-28 11:53:52.865610] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:23:22.879 [2024-11-28 11:53:52.871061] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ce1670) with pdu=0x200016eff3c8 00:23:22.879 [2024-11-28 11:53:52.871134] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:8288 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:22.879 [2024-11-28 11:53:52.871155] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:23:22.879 [2024-11-28 11:53:52.876413] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ce1670) with pdu=0x200016eff3c8 00:23:22.879 [2024-11-28 11:53:52.876495] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:23360 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:22.879 [2024-11-28 11:53:52.876515] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:23:22.879 [2024-11-28 11:53:52.881706] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ce1670) with pdu=0x200016eff3c8 00:23:22.879 [2024-11-28 11:53:52.881784] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:13920 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:22.879 [2024-11-28 11:53:52.881804] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:23:22.879 [2024-11-28 11:53:52.887077] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ce1670) with pdu=0x200016eff3c8 00:23:22.879 [2024-11-28 11:53:52.887153] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:24832 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:22.879 [2024-11-28 11:53:52.887174] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:23:22.879 [2024-11-28 11:53:52.892369] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ce1670) with pdu=0x200016eff3c8 00:23:22.879 [2024-11-28 11:53:52.892472] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:1952 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:22.879 [2024-11-28 11:53:52.892493] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND 
TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:23:22.879 [2024-11-28 11:53:52.897644] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ce1670) with pdu=0x200016eff3c8 00:23:22.879 [2024-11-28 11:53:52.897722] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:18272 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:22.879 [2024-11-28 11:53:52.897743] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:23:22.879 [2024-11-28 11:53:52.902956] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ce1670) with pdu=0x200016eff3c8 00:23:22.879 [2024-11-28 11:53:52.903043] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:10528 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:22.879 [2024-11-28 11:53:52.903063] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:23:22.879 [2024-11-28 11:53:52.908257] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ce1670) with pdu=0x200016eff3c8 00:23:22.879 [2024-11-28 11:53:52.908352] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:23424 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:22.879 [2024-11-28 11:53:52.908373] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:23:22.879 [2024-11-28 11:53:52.913588] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ce1670) with pdu=0x200016eff3c8 00:23:22.879 [2024-11-28 11:53:52.913670] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:16832 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:22.879 [2024-11-28 11:53:52.913690] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:23:22.879 [2024-11-28 11:53:52.918906] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ce1670) with pdu=0x200016eff3c8 00:23:22.879 [2024-11-28 11:53:52.918983] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:17344 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:22.880 [2024-11-28 11:53:52.919004] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:23:22.880 [2024-11-28 11:53:52.924194] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ce1670) with pdu=0x200016eff3c8 00:23:22.880 [2024-11-28 11:53:52.924271] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:1152 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:22.880 [2024-11-28 11:53:52.924291] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:23:22.880 [2024-11-28 11:53:52.929471] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ce1670) with pdu=0x200016eff3c8 00:23:22.880 [2024-11-28 11:53:52.929548] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:8480 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:22.880 [2024-11-28 11:53:52.929568] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:23:22.880 [2024-11-28 11:53:52.934831] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ce1670) with pdu=0x200016eff3c8 00:23:22.880 [2024-11-28 11:53:52.934904] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:14816 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:22.880 [2024-11-28 11:53:52.934925] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:23:22.880 [2024-11-28 11:53:52.940196] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ce1670) with pdu=0x200016eff3c8 00:23:22.880 [2024-11-28 11:53:52.940272] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:20832 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:22.880 [2024-11-28 11:53:52.940293] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:23:22.880 [2024-11-28 11:53:52.945510] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ce1670) with pdu=0x200016eff3c8 00:23:22.880 [2024-11-28 11:53:52.945611] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:22240 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:22.880 [2024-11-28 11:53:52.945631] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:23:22.880 [2024-11-28 11:53:52.950963] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ce1670) with pdu=0x200016eff3c8 00:23:22.880 [2024-11-28 11:53:52.951036] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:18752 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:22.880 [2024-11-28 11:53:52.951057] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:23:22.880 [2024-11-28 11:53:52.956272] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ce1670) with pdu=0x200016eff3c8 00:23:22.880 [2024-11-28 11:53:52.956359] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:6496 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:22.880 [2024-11-28 11:53:52.956380] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:23:22.880 [2024-11-28 11:53:52.961610] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ce1670) with pdu=0x200016eff3c8 00:23:22.880 [2024-11-28 11:53:52.961707] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:1696 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:22.880 [2024-11-28 11:53:52.961728] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:23:22.880 [2024-11-28 11:53:52.966982] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ce1670) with pdu=0x200016eff3c8 00:23:22.880 [2024-11-28 11:53:52.967058] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:17472 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:22.880 [2024-11-28 
11:53:52.967079] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:23:22.880 [2024-11-28 11:53:52.972283] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ce1670) with pdu=0x200016eff3c8 00:23:22.880 [2024-11-28 11:53:52.972372] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:15840 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:22.880 [2024-11-28 11:53:52.972393] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:23:22.880 [2024-11-28 11:53:52.977603] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ce1670) with pdu=0x200016eff3c8 00:23:22.880 [2024-11-28 11:53:52.977677] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:13184 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:22.880 [2024-11-28 11:53:52.977698] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:23:22.880 [2024-11-28 11:53:52.982875] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ce1670) with pdu=0x200016eff3c8 00:23:22.880 [2024-11-28 11:53:52.982950] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:24672 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:22.880 [2024-11-28 11:53:52.982971] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:23:22.880 [2024-11-28 11:53:52.988164] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ce1670) with pdu=0x200016eff3c8 00:23:22.880 [2024-11-28 11:53:52.988246] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:3840 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:22.880 [2024-11-28 11:53:52.988268] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:23:22.880 [2024-11-28 11:53:52.993410] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ce1670) with pdu=0x200016eff3c8 00:23:22.880 [2024-11-28 11:53:52.993486] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:19040 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:22.880 [2024-11-28 11:53:52.993506] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:23:22.880 [2024-11-28 11:53:52.998838] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ce1670) with pdu=0x200016eff3c8 00:23:22.880 [2024-11-28 11:53:52.998916] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:2304 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:22.880 [2024-11-28 11:53:52.998938] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:23:23.141 [2024-11-28 11:53:53.004362] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ce1670) with pdu=0x200016eff3c8 00:23:23.141 [2024-11-28 11:53:53.004441] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:17696 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 
0x0 00:23:23.141 [2024-11-28 11:53:53.004461] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:23:23.141 [2024-11-28 11:53:53.009824] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ce1670) with pdu=0x200016eff3c8 00:23:23.141 [2024-11-28 11:53:53.009903] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:17184 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:23.141 [2024-11-28 11:53:53.009924] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:23:23.141 [2024-11-28 11:53:53.015216] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ce1670) with pdu=0x200016eff3c8 00:23:23.141 [2024-11-28 11:53:53.015311] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:21696 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:23.141 [2024-11-28 11:53:53.015332] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:23:23.141 [2024-11-28 11:53:53.020603] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ce1670) with pdu=0x200016eff3c8 00:23:23.141 [2024-11-28 11:53:53.020675] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:23904 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:23.141 [2024-11-28 11:53:53.020696] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:23:23.141 [2024-11-28 11:53:53.025870] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ce1670) with pdu=0x200016eff3c8 00:23:23.141 [2024-11-28 11:53:53.025950] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:17888 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:23.141 [2024-11-28 11:53:53.025971] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:23:23.141 [2024-11-28 11:53:53.031217] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ce1670) with pdu=0x200016eff3c8 00:23:23.141 [2024-11-28 11:53:53.031289] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:12064 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:23.141 [2024-11-28 11:53:53.031310] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:23:23.141 [2024-11-28 11:53:53.036489] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ce1670) with pdu=0x200016eff3c8 00:23:23.141 [2024-11-28 11:53:53.036564] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:8288 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:23.141 [2024-11-28 11:53:53.036585] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:23:23.141 [2024-11-28 11:53:53.041749] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ce1670) with pdu=0x200016eff3c8 00:23:23.141 [2024-11-28 11:53:53.041829] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:12736 len:32 SGL 
TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:23.141 [2024-11-28 11:53:53.041850] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:23:23.141 [2024-11-28 11:53:53.047124] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ce1670) with pdu=0x200016eff3c8 00:23:23.141 [2024-11-28 11:53:53.047202] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:13600 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:23.141 [2024-11-28 11:53:53.047222] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:23:23.141 [2024-11-28 11:53:53.052379] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ce1670) with pdu=0x200016eff3c8 00:23:23.141 [2024-11-28 11:53:53.052464] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:6880 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:23.141 [2024-11-28 11:53:53.052485] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:23:23.141 [2024-11-28 11:53:53.057729] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ce1670) with pdu=0x200016eff3c8 00:23:23.141 [2024-11-28 11:53:53.057807] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:12096 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:23.141 [2024-11-28 11:53:53.057827] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:23:23.141 [2024-11-28 11:53:53.063116] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ce1670) with pdu=0x200016eff3c8 00:23:23.141 [2024-11-28 11:53:53.063188] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:18784 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:23.141 [2024-11-28 11:53:53.063208] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:23:23.141 [2024-11-28 11:53:53.068437] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ce1670) with pdu=0x200016eff3c8 00:23:23.141 [2024-11-28 11:53:53.068529] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:25440 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:23.141 [2024-11-28 11:53:53.068549] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:23:23.141 [2024-11-28 11:53:53.073705] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ce1670) with pdu=0x200016eff3c8 00:23:23.141 [2024-11-28 11:53:53.073780] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:11424 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:23.141 [2024-11-28 11:53:53.073801] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:23:23.141 [2024-11-28 11:53:53.079028] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ce1670) with pdu=0x200016eff3c8 00:23:23.141 [2024-11-28 11:53:53.079107] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 
nsid:1 lba:3264 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:23.141 [2024-11-28 11:53:53.079128] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:23:23.141 [2024-11-28 11:53:53.084344] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ce1670) with pdu=0x200016eff3c8 00:23:23.141 [2024-11-28 11:53:53.084433] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:10592 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:23.141 [2024-11-28 11:53:53.084454] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:23:23.141 [2024-11-28 11:53:53.089743] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ce1670) with pdu=0x200016eff3c8 00:23:23.141 [2024-11-28 11:53:53.089822] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:1952 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:23.141 [2024-11-28 11:53:53.089843] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:23:23.141 [2024-11-28 11:53:53.095137] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ce1670) with pdu=0x200016eff3c8 00:23:23.141 [2024-11-28 11:53:53.095214] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:160 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:23.141 [2024-11-28 11:53:53.095235] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:23:23.141 [2024-11-28 11:53:53.100491] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ce1670) with pdu=0x200016eff3c8 00:23:23.141 [2024-11-28 11:53:53.100585] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:2784 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:23.141 [2024-11-28 11:53:53.100606] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:23:23.141 [2024-11-28 11:53:53.105834] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ce1670) with pdu=0x200016eff3c8 00:23:23.141 [2024-11-28 11:53:53.105924] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:3328 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:23.141 [2024-11-28 11:53:53.105944] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:23:23.141 [2024-11-28 11:53:53.111153] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ce1670) with pdu=0x200016eff3c8 00:23:23.141 [2024-11-28 11:53:53.111231] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:1632 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:23.141 [2024-11-28 11:53:53.111252] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:23:23.141 [2024-11-28 11:53:53.116456] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ce1670) with pdu=0x200016eff3c8 00:23:23.141 [2024-11-28 11:53:53.116539] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:5376 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:23.141 [2024-11-28 11:53:53.116559] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:23:23.141 [2024-11-28 11:53:53.121729] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ce1670) with pdu=0x200016eff3c8 00:23:23.141 [2024-11-28 11:53:53.121805] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:18688 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:23.141 [2024-11-28 11:53:53.121825] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:23:23.141 [2024-11-28 11:53:53.127065] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ce1670) with pdu=0x200016eff3c8 00:23:23.141 [2024-11-28 11:53:53.127157] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:3392 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:23.141 [2024-11-28 11:53:53.127178] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:23:23.141 [2024-11-28 11:53:53.132387] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ce1670) with pdu=0x200016eff3c8 00:23:23.141 [2024-11-28 11:53:53.132478] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:23136 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:23.141 [2024-11-28 11:53:53.132499] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:23:23.141 [2024-11-28 11:53:53.137778] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ce1670) with pdu=0x200016eff3c8 00:23:23.141 [2024-11-28 11:53:53.137870] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:6688 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:23.141 [2024-11-28 11:53:53.137890] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:23:23.141 [2024-11-28 11:53:53.143142] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ce1670) with pdu=0x200016eff3c8 00:23:23.142 [2024-11-28 11:53:53.143229] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:24832 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:23.142 [2024-11-28 11:53:53.143250] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:23:23.142 [2024-11-28 11:53:53.148451] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ce1670) with pdu=0x200016eff3c8 00:23:23.142 [2024-11-28 11:53:53.148543] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:18848 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:23.142 [2024-11-28 11:53:53.148563] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:23:23.142 [2024-11-28 11:53:53.153746] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ce1670) with pdu=0x200016eff3c8 00:23:23.142 [2024-11-28 11:53:53.153823] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:18528 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:23.142 [2024-11-28 11:53:53.153843] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:23:23.142 [2024-11-28 11:53:53.159210] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ce1670) with pdu=0x200016eff3c8 00:23:23.142 [2024-11-28 11:53:53.159339] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:11776 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:23.142 [2024-11-28 11:53:53.159361] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:23:23.142 [2024-11-28 11:53:53.164811] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ce1670) with pdu=0x200016eff3c8 00:23:23.142 [2024-11-28 11:53:53.164896] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:2720 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:23.142 [2024-11-28 11:53:53.164917] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:23:23.142 [2024-11-28 11:53:53.170333] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ce1670) with pdu=0x200016eff3c8 00:23:23.142 [2024-11-28 11:53:53.170418] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:2848 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:23.142 [2024-11-28 11:53:53.170439] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:23:23.142 [2024-11-28 11:53:53.175817] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ce1670) with pdu=0x200016eff3c8 00:23:23.142 [2024-11-28 11:53:53.175903] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:19200 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:23.142 [2024-11-28 11:53:53.175923] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:23:23.142 [2024-11-28 11:53:53.181516] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ce1670) with pdu=0x200016eff3c8 00:23:23.142 [2024-11-28 11:53:53.181600] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:4320 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:23.142 [2024-11-28 11:53:53.181623] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:23:23.142 [2024-11-28 11:53:53.187040] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ce1670) with pdu=0x200016eff3c8 00:23:23.142 [2024-11-28 11:53:53.187118] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:6976 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:23.142 [2024-11-28 11:53:53.187138] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:23:23.142 [2024-11-28 11:53:53.192750] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ce1670) with pdu=0x200016eff3c8 00:23:23.142 [2024-11-28 
11:53:53.192822] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:6368 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:23.142 [2024-11-28 11:53:53.192843] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:23:23.142 [2024-11-28 11:53:53.198168] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ce1670) with pdu=0x200016eff3c8 00:23:23.142 [2024-11-28 11:53:53.198259] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:1728 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:23.142 [2024-11-28 11:53:53.198280] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:23:23.142 [2024-11-28 11:53:53.203575] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ce1670) with pdu=0x200016eff3c8 00:23:23.142 [2024-11-28 11:53:53.203676] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:9408 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:23.142 [2024-11-28 11:53:53.203712] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:23:23.142 [2024-11-28 11:53:53.208980] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ce1670) with pdu=0x200016eff3c8 00:23:23.142 [2024-11-28 11:53:53.209069] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:18176 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:23.142 [2024-11-28 11:53:53.209089] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:23:23.142 [2024-11-28 11:53:53.214530] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ce1670) with pdu=0x200016eff3c8 00:23:23.142 [2024-11-28 11:53:53.214601] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:12320 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:23.142 [2024-11-28 11:53:53.214623] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:23:23.142 [2024-11-28 11:53:53.219920] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ce1670) with pdu=0x200016eff3c8 00:23:23.142 [2024-11-28 11:53:53.220010] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:16032 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:23.142 [2024-11-28 11:53:53.220031] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:23:23.142 [2024-11-28 11:53:53.225236] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ce1670) with pdu=0x200016eff3c8 00:23:23.142 [2024-11-28 11:53:53.225332] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:1568 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:23.142 [2024-11-28 11:53:53.225353] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:23:23.142 [2024-11-28 11:53:53.230610] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ce1670) with pdu=0x200016eff3c8 
00:23:23.142 [2024-11-28 11:53:53.230684] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:6976 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:23.142 [2024-11-28 11:53:53.230705] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:23:23.142 [2024-11-28 11:53:53.236022] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ce1670) with pdu=0x200016eff3c8 00:23:23.142 [2024-11-28 11:53:53.236097] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:24608 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:23.142 [2024-11-28 11:53:53.236117] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:23:23.142 [2024-11-28 11:53:53.241316] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ce1670) with pdu=0x200016eff3c8 00:23:23.142 [2024-11-28 11:53:53.241392] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:8000 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:23.142 [2024-11-28 11:53:53.241413] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:23:23.142 [2024-11-28 11:53:53.246700] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ce1670) with pdu=0x200016eff3c8 00:23:23.142 [2024-11-28 11:53:53.246775] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:6912 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:23.142 [2024-11-28 11:53:53.246795] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:23:23.142 [2024-11-28 11:53:53.252003] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ce1670) with pdu=0x200016eff3c8 00:23:23.142 [2024-11-28 11:53:53.252121] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:5088 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:23.142 [2024-11-28 11:53:53.252144] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:23:23.142 [2024-11-28 11:53:53.257446] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ce1670) with pdu=0x200016eff3c8 00:23:23.142 [2024-11-28 11:53:53.257542] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:23040 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:23.142 [2024-11-28 11:53:53.257563] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:23:23.142 [2024-11-28 11:53:53.263118] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ce1670) with pdu=0x200016eff3c8 00:23:23.142 [2024-11-28 11:53:53.263195] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:8000 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:23.142 [2024-11-28 11:53:53.263215] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:23:23.403 [2024-11-28 11:53:53.268782] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ce1670) 
with pdu=0x200016eff3c8 00:23:23.403 [2024-11-28 11:53:53.268861] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:13216 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:23.403 [2024-11-28 11:53:53.268882] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:23:23.403 [2024-11-28 11:53:53.274426] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ce1670) with pdu=0x200016eff3c8 00:23:23.403 [2024-11-28 11:53:53.274546] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:14944 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:23.403 [2024-11-28 11:53:53.274569] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:23:23.403 [2024-11-28 11:53:53.279866] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ce1670) with pdu=0x200016eff3c8 00:23:23.403 [2024-11-28 11:53:53.279946] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:18688 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:23.403 [2024-11-28 11:53:53.279966] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:23:23.403 [2024-11-28 11:53:53.285316] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ce1670) with pdu=0x200016eff3c8 00:23:23.403 [2024-11-28 11:53:53.285406] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:6944 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:23.403 [2024-11-28 11:53:53.285427] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:23:23.403 [2024-11-28 11:53:53.291080] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ce1670) with pdu=0x200016eff3c8 00:23:23.403 [2024-11-28 11:53:53.291166] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:6720 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:23.403 [2024-11-28 11:53:53.291190] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:23:23.403 [2024-11-28 11:53:53.296577] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ce1670) with pdu=0x200016eff3c8 00:23:23.403 [2024-11-28 11:53:53.296657] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:5792 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:23.403 [2024-11-28 11:53:53.296693] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:23:23.403 [2024-11-28 11:53:53.302064] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ce1670) with pdu=0x200016eff3c8 00:23:23.403 [2024-11-28 11:53:53.302150] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:17952 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:23.403 [2024-11-28 11:53:53.302171] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:23:23.403 [2024-11-28 11:53:53.307553] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest 
error on tqpair=(0x1ce1670) with pdu=0x200016eff3c8 00:23:23.403 [2024-11-28 11:53:53.307667] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:2528 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:23.403 [2024-11-28 11:53:53.307688] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:23:23.403 [2024-11-28 11:53:53.312941] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ce1670) with pdu=0x200016eff3c8 00:23:23.403 [2024-11-28 11:53:53.313031] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:23264 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:23.404 [2024-11-28 11:53:53.313052] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:23:23.404 [2024-11-28 11:53:53.318387] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ce1670) with pdu=0x200016eff3c8 00:23:23.404 [2024-11-28 11:53:53.318455] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:2592 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:23.404 [2024-11-28 11:53:53.318486] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:23:23.404 [2024-11-28 11:53:53.323742] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ce1670) with pdu=0x200016eff3c8 00:23:23.404 [2024-11-28 11:53:53.323819] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:4320 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:23.404 [2024-11-28 11:53:53.323839] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:23:23.404 [2024-11-28 11:53:53.329139] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ce1670) with pdu=0x200016eff3c8 00:23:23.404 [2024-11-28 11:53:53.329215] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:9248 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:23.404 [2024-11-28 11:53:53.329235] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:23:23.404 [2024-11-28 11:53:53.334602] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ce1670) with pdu=0x200016eff3c8 00:23:23.404 [2024-11-28 11:53:53.334677] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:16352 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:23.404 [2024-11-28 11:53:53.334698] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:23:23.404 [2024-11-28 11:53:53.340001] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ce1670) with pdu=0x200016eff3c8 00:23:23.404 [2024-11-28 11:53:53.340078] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:5248 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:23.404 [2024-11-28 11:53:53.340098] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:23:23.404 [2024-11-28 11:53:53.345452] 
tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ce1670) with pdu=0x200016eff3c8 00:23:23.404 [2024-11-28 11:53:53.345524] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:15904 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:23.404 [2024-11-28 11:53:53.345548] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:23:23.404 [2024-11-28 11:53:53.350951] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ce1670) with pdu=0x200016eff3c8 00:23:23.404 [2024-11-28 11:53:53.351032] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:4224 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:23.404 [2024-11-28 11:53:53.351052] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:23:23.404 [2024-11-28 11:53:53.356437] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ce1670) with pdu=0x200016eff3c8 00:23:23.404 [2024-11-28 11:53:53.356523] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:19776 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:23.404 [2024-11-28 11:53:53.356544] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:23:23.404 [2024-11-28 11:53:53.361967] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ce1670) with pdu=0x200016eff3c8 00:23:23.404 [2024-11-28 11:53:53.362045] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:15296 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:23.404 [2024-11-28 11:53:53.362065] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:23:23.404 [2024-11-28 11:53:53.367330] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ce1670) with pdu=0x200016eff3c8 00:23:23.404 [2024-11-28 11:53:53.367415] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:14624 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:23.404 [2024-11-28 11:53:53.367436] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:23:23.404 [2024-11-28 11:53:53.372768] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ce1670) with pdu=0x200016eff3c8 00:23:23.404 [2024-11-28 11:53:53.372844] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:23456 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:23.404 [2024-11-28 11:53:53.372865] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:23:23.404 [2024-11-28 11:53:53.378129] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ce1670) with pdu=0x200016eff3c8 00:23:23.404 [2024-11-28 11:53:53.378209] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:19808 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:23.404 [2024-11-28 11:53:53.378229] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:23:23.404 
[2024-11-28 11:53:53.383643] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ce1670) with pdu=0x200016eff3c8 00:23:23.404 [2024-11-28 11:53:53.383754] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:8064 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:23.404 [2024-11-28 11:53:53.383775] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:23:23.404 [2024-11-28 11:53:53.389095] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ce1670) with pdu=0x200016eff3c8 00:23:23.404 [2024-11-28 11:53:53.389172] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:8000 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:23.404 [2024-11-28 11:53:53.389193] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:23:23.404 [2024-11-28 11:53:53.394439] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ce1670) with pdu=0x200016eff3c8 00:23:23.404 [2024-11-28 11:53:53.394535] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:15424 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:23.404 [2024-11-28 11:53:53.394556] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:23:23.404 [2024-11-28 11:53:53.399831] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ce1670) with pdu=0x200016eff3c8 00:23:23.404 [2024-11-28 11:53:53.399910] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:10304 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:23.404 [2024-11-28 11:53:53.399931] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:23:23.404 [2024-11-28 11:53:53.405119] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ce1670) with pdu=0x200016eff3c8 00:23:23.404 [2024-11-28 11:53:53.405198] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:17728 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:23.404 [2024-11-28 11:53:53.405218] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:23:23.404 [2024-11-28 11:53:53.410517] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ce1670) with pdu=0x200016eff3c8 00:23:23.404 [2024-11-28 11:53:53.410624] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:12192 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:23.404 [2024-11-28 11:53:53.410646] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:23:23.404 [2024-11-28 11:53:53.416419] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ce1670) with pdu=0x200016eff3c8 00:23:23.404 [2024-11-28 11:53:53.416502] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:14080 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:23.404 [2024-11-28 11:53:53.416525] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 
p:0 m:0 dnr:0 00:23:23.404 [2024-11-28 11:53:53.421958] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ce1670) with pdu=0x200016eff3c8 00:23:23.404 [2024-11-28 11:53:53.422044] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:6240 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:23.404 [2024-11-28 11:53:53.422081] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:23:23.404 [2024-11-28 11:53:53.427687] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ce1670) with pdu=0x200016eff3c8 00:23:23.404 [2024-11-28 11:53:53.427775] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:13504 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:23.404 [2024-11-28 11:53:53.427796] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:23:23.404 [2024-11-28 11:53:53.433325] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ce1670) with pdu=0x200016eff3c8 00:23:23.404 [2024-11-28 11:53:53.433413] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:2912 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:23.404 [2024-11-28 11:53:53.433450] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:23:23.404 [2024-11-28 11:53:53.439001] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ce1670) with pdu=0x200016eff3c8 00:23:23.404 [2024-11-28 11:53:53.439081] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:4800 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:23.404 [2024-11-28 11:53:53.439102] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:23:23.404 [2024-11-28 11:53:53.444561] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ce1670) with pdu=0x200016eff3c8 00:23:23.404 [2024-11-28 11:53:53.444654] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:21888 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:23.404 [2024-11-28 11:53:53.444689] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:23:23.404 [2024-11-28 11:53:53.450101] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ce1670) with pdu=0x200016eff3c8 00:23:23.404 [2024-11-28 11:53:53.450177] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:12224 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:23.404 [2024-11-28 11:53:53.450197] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:23:23.404 [2024-11-28 11:53:53.455538] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ce1670) with pdu=0x200016eff3c8 00:23:23.405 [2024-11-28 11:53:53.455629] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:22976 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:23.405 [2024-11-28 11:53:53.455665] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) 
qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:23:23.405 [2024-11-28 11:53:53.461081] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ce1670) with pdu=0x200016eff3c8 00:23:23.405 [2024-11-28 11:53:53.461153] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:20448 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:23.405 [2024-11-28 11:53:53.461173] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:23:23.405 [2024-11-28 11:53:53.466590] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ce1670) with pdu=0x200016eff3c8 00:23:23.405 [2024-11-28 11:53:53.466660] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:15936 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:23.405 [2024-11-28 11:53:53.466682] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:23:23.405 [2024-11-28 11:53:53.471939] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ce1670) with pdu=0x200016eff3c8 00:23:23.405 [2024-11-28 11:53:53.472031] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:13792 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:23.405 [2024-11-28 11:53:53.472051] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:23:23.405 [2024-11-28 11:53:53.477305] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ce1670) with pdu=0x200016eff3c8 00:23:23.405 [2024-11-28 11:53:53.477383] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:13248 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:23.405 [2024-11-28 11:53:53.477404] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:23:23.405 [2024-11-28 11:53:53.482637] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ce1670) with pdu=0x200016eff3c8 00:23:23.405 [2024-11-28 11:53:53.482725] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:8576 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:23.405 [2024-11-28 11:53:53.482747] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:23:23.405 [2024-11-28 11:53:53.487990] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ce1670) with pdu=0x200016eff3c8 00:23:23.405 [2024-11-28 11:53:53.488069] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:1664 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:23.405 [2024-11-28 11:53:53.488090] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:23:23.405 [2024-11-28 11:53:53.493327] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ce1670) with pdu=0x200016eff3c8 00:23:23.405 [2024-11-28 11:53:53.493408] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:6368 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:23.405 [2024-11-28 11:53:53.493429] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND 
TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:23:23.405 [2024-11-28 11:53:53.498664] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ce1670) with pdu=0x200016eff3c8 00:23:23.405 [2024-11-28 11:53:53.498754] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:17600 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:23.405 [2024-11-28 11:53:53.498790] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:23:23.405 [2024-11-28 11:53:53.504040] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ce1670) with pdu=0x200016eff3c8 00:23:23.405 [2024-11-28 11:53:53.504118] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:25568 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:23.405 [2024-11-28 11:53:53.504138] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:23:23.405 [2024-11-28 11:53:53.509472] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ce1670) with pdu=0x200016eff3c8 00:23:23.405 [2024-11-28 11:53:53.509573] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:15520 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:23.405 [2024-11-28 11:53:53.509594] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:23:23.405 [2024-11-28 11:53:53.514748] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ce1670) with pdu=0x200016eff3c8 00:23:23.405 [2024-11-28 11:53:53.514826] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:11392 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:23.405 [2024-11-28 11:53:53.514846] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:23:23.405 [2024-11-28 11:53:53.520073] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ce1670) with pdu=0x200016eff3c8 00:23:23.405 [2024-11-28 11:53:53.520159] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:25056 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:23.405 [2024-11-28 11:53:53.520180] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:23:23.405 [2024-11-28 11:53:53.525634] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ce1670) with pdu=0x200016eff3c8 00:23:23.405 [2024-11-28 11:53:53.525715] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:5600 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:23.405 [2024-11-28 11:53:53.525736] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:23:23.666 5707.00 IOPS, 713.38 MiB/s [2024-11-28T11:53:53.792Z] [2024-11-28 11:53:53.532277] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ce1670) with pdu=0x200016eff3c8 00:23:23.666 [2024-11-28 11:53:53.532395] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:7328 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:23.666 [2024-11-28 
11:53:53.532418] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:23:23.666 [2024-11-28 11:53:53.537688] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ce1670) with pdu=0x200016eff3c8 00:23:23.666 [2024-11-28 11:53:53.537766] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:25120 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:23.666 [2024-11-28 11:53:53.537786] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:23:23.666 [2024-11-28 11:53:53.543062] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ce1670) with pdu=0x200016eff3c8 00:23:23.666 [2024-11-28 11:53:53.543134] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:4704 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:23.666 [2024-11-28 11:53:53.543155] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:23:23.666 [2024-11-28 11:53:53.548466] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ce1670) with pdu=0x200016eff3c8 00:23:23.666 [2024-11-28 11:53:53.548557] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:13344 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:23.666 [2024-11-28 11:53:53.548578] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:23:23.666 [2024-11-28 11:53:53.553788] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ce1670) with pdu=0x200016eff3c8 00:23:23.666 [2024-11-28 11:53:53.553880] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:224 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:23.666 [2024-11-28 11:53:53.553901] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:23:23.666 [2024-11-28 11:53:53.559185] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ce1670) with pdu=0x200016eff3c8 00:23:23.666 [2024-11-28 11:53:53.559278] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:8672 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:23.666 [2024-11-28 11:53:53.559298] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:23:23.666 [2024-11-28 11:53:53.564577] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ce1670) with pdu=0x200016eff3c8 00:23:23.666 [2024-11-28 11:53:53.564661] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:13152 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:23.666 [2024-11-28 11:53:53.564681] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:23:23.666 [2024-11-28 11:53:53.569952] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ce1670) with pdu=0x200016eff3c8 00:23:23.666 [2024-11-28 11:53:53.570027] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:15776 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:23:23.666 [2024-11-28 11:53:53.570048] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:23:23.666 [2024-11-28 11:53:53.575273] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ce1670) with pdu=0x200016eff3c8 00:23:23.666 [2024-11-28 11:53:53.575378] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:3424 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:23.666 [2024-11-28 11:53:53.575400] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:23:23.666 [2024-11-28 11:53:53.580684] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ce1670) with pdu=0x200016eff3c8 00:23:23.666 [2024-11-28 11:53:53.580758] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:9472 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:23.666 [2024-11-28 11:53:53.580779] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:23:23.666 [2024-11-28 11:53:53.585931] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ce1670) with pdu=0x200016eff3c8 00:23:23.666 [2024-11-28 11:53:53.586013] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:5120 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:23.666 [2024-11-28 11:53:53.586034] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:23:23.666 [2024-11-28 11:53:53.591276] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ce1670) with pdu=0x200016eff3c8 00:23:23.666 [2024-11-28 11:53:53.591388] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:23200 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:23.666 [2024-11-28 11:53:53.591408] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:23:23.666 [2024-11-28 11:53:53.596539] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ce1670) with pdu=0x200016eff3c8 00:23:23.666 [2024-11-28 11:53:53.596626] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:11392 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:23.666 [2024-11-28 11:53:53.596646] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:23:23.666 [2024-11-28 11:53:53.601856] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ce1670) with pdu=0x200016eff3c8 00:23:23.666 [2024-11-28 11:53:53.601933] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:25472 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:23.666 [2024-11-28 11:53:53.601953] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:23:23.666 [2024-11-28 11:53:53.607143] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ce1670) with pdu=0x200016eff3c8 00:23:23.666 [2024-11-28 11:53:53.607215] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:21920 len:32 SGL 
TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:23.666 [2024-11-28 11:53:53.607236] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:23:23.666 [2024-11-28 11:53:53.612565] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ce1670) with pdu=0x200016eff3c8 00:23:23.666 [2024-11-28 11:53:53.612645] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:1536 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:23.666 [2024-11-28 11:53:53.612667] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:23:23.666 [2024-11-28 11:53:53.617859] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ce1670) with pdu=0x200016eff3c8 00:23:23.666 [2024-11-28 11:53:53.617949] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:5728 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:23.666 [2024-11-28 11:53:53.617969] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:23:23.666 [2024-11-28 11:53:53.623235] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ce1670) with pdu=0x200016eff3c8 00:23:23.666 [2024-11-28 11:53:53.623333] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:21248 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:23.666 [2024-11-28 11:53:53.623355] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:23:23.666 [2024-11-28 11:53:53.628598] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ce1670) with pdu=0x200016eff3c8 00:23:23.666 [2024-11-28 11:53:53.628674] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:16224 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:23.666 [2024-11-28 11:53:53.628695] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:23:23.666 [2024-11-28 11:53:53.633897] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ce1670) with pdu=0x200016eff3c8 00:23:23.666 [2024-11-28 11:53:53.633988] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:2176 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:23.666 [2024-11-28 11:53:53.634009] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:23:23.666 [2024-11-28 11:53:53.639229] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ce1670) with pdu=0x200016eff3c8 00:23:23.666 [2024-11-28 11:53:53.639327] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:17728 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:23.666 [2024-11-28 11:53:53.639348] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:23:23.666 [2024-11-28 11:53:53.644560] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ce1670) with pdu=0x200016eff3c8 00:23:23.666 [2024-11-28 11:53:53.644652] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 
nsid:1 lba:18240 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:23.666 [2024-11-28 11:53:53.644673] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:23:23.666 [2024-11-28 11:53:53.649946] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ce1670) with pdu=0x200016eff3c8 00:23:23.666 [2024-11-28 11:53:53.650023] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:19008 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:23.666 [2024-11-28 11:53:53.650044] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:23:23.666 [2024-11-28 11:53:53.655282] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ce1670) with pdu=0x200016eff3c8 00:23:23.666 [2024-11-28 11:53:53.655376] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:16576 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:23.666 [2024-11-28 11:53:53.655408] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:23:23.666 [2024-11-28 11:53:53.660519] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ce1670) with pdu=0x200016eff3c8 00:23:23.667 [2024-11-28 11:53:53.660595] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:12544 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:23.667 [2024-11-28 11:53:53.660616] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:23:23.667 [2024-11-28 11:53:53.665863] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ce1670) with pdu=0x200016eff3c8 00:23:23.667 [2024-11-28 11:53:53.665939] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:7584 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:23.667 [2024-11-28 11:53:53.665960] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:23:23.667 [2024-11-28 11:53:53.671194] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ce1670) with pdu=0x200016eff3c8 00:23:23.667 [2024-11-28 11:53:53.671266] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:22944 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:23.667 [2024-11-28 11:53:53.671286] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:23:23.667 [2024-11-28 11:53:53.676503] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ce1670) with pdu=0x200016eff3c8 00:23:23.667 [2024-11-28 11:53:53.676582] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:19296 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:23.667 [2024-11-28 11:53:53.676603] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:23:23.667 [2024-11-28 11:53:53.681820] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ce1670) with pdu=0x200016eff3c8 00:23:23.667 [2024-11-28 11:53:53.681895] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:4128 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:23.667 [2024-11-28 11:53:53.681916] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:23:23.667 [2024-11-28 11:53:53.687179] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ce1670) with pdu=0x200016eff3c8 00:23:23.667 [2024-11-28 11:53:53.687254] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:10816 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:23.667 [2024-11-28 11:53:53.687275] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:23:23.667 [2024-11-28 11:53:53.692525] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ce1670) with pdu=0x200016eff3c8 00:23:23.667 [2024-11-28 11:53:53.692605] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:16032 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:23.667 [2024-11-28 11:53:53.692625] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:23:23.667 [2024-11-28 11:53:53.697839] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ce1670) with pdu=0x200016eff3c8 00:23:23.667 [2024-11-28 11:53:53.697916] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:4928 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:23.667 [2024-11-28 11:53:53.697937] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:23:23.667 [2024-11-28 11:53:53.703136] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ce1670) with pdu=0x200016eff3c8 00:23:23.667 [2024-11-28 11:53:53.703215] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:5856 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:23.667 [2024-11-28 11:53:53.703236] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:23:23.667 [2024-11-28 11:53:53.708430] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ce1670) with pdu=0x200016eff3c8 00:23:23.667 [2024-11-28 11:53:53.708517] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:16992 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:23.667 [2024-11-28 11:53:53.708538] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:23:23.667 [2024-11-28 11:53:53.713823] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ce1670) with pdu=0x200016eff3c8 00:23:23.667 [2024-11-28 11:53:53.713899] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:15488 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:23.667 [2024-11-28 11:53:53.713920] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:23:23.667 [2024-11-28 11:53:53.719163] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ce1670) with pdu=0x200016eff3c8 00:23:23.667 [2024-11-28 11:53:53.719254] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:8384 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:23.667 [2024-11-28 11:53:53.719274] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:23:23.667 [2024-11-28 11:53:53.724440] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ce1670) with pdu=0x200016eff3c8 00:23:23.667 [2024-11-28 11:53:53.724518] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:16160 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:23.667 [2024-11-28 11:53:53.724539] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:23:23.667 [2024-11-28 11:53:53.729753] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ce1670) with pdu=0x200016eff3c8 00:23:23.667 [2024-11-28 11:53:53.729831] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:13824 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:23.667 [2024-11-28 11:53:53.729852] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:23:23.667 [2024-11-28 11:53:53.735114] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ce1670) with pdu=0x200016eff3c8 00:23:23.667 [2024-11-28 11:53:53.735204] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:6240 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:23.667 [2024-11-28 11:53:53.735225] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:23:23.667 [2024-11-28 11:53:53.740450] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ce1670) with pdu=0x200016eff3c8 00:23:23.667 [2024-11-28 11:53:53.740541] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:19168 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:23.667 [2024-11-28 11:53:53.740562] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:23:23.667 [2024-11-28 11:53:53.745794] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ce1670) with pdu=0x200016eff3c8 00:23:23.667 [2024-11-28 11:53:53.745878] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:3424 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:23.667 [2024-11-28 11:53:53.745899] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:23:23.667 [2024-11-28 11:53:53.751208] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ce1670) with pdu=0x200016eff3c8 00:23:23.667 [2024-11-28 11:53:53.751283] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:7904 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:23.667 [2024-11-28 11:53:53.751320] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:23:23.667 [2024-11-28 11:53:53.756587] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ce1670) with pdu=0x200016eff3c8 00:23:23.667 [2024-11-28 
11:53:53.756666] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:21856 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:23.667 [2024-11-28 11:53:53.756687] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:23:23.667 [2024-11-28 11:53:53.761898] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ce1670) with pdu=0x200016eff3c8 00:23:23.667 [2024-11-28 11:53:53.761978] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:22048 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:23.667 [2024-11-28 11:53:53.761998] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:23:23.667 [2024-11-28 11:53:53.767261] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ce1670) with pdu=0x200016eff3c8 00:23:23.667 [2024-11-28 11:53:53.767356] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:18336 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:23.667 [2024-11-28 11:53:53.767377] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:23:23.667 [2024-11-28 11:53:53.772540] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ce1670) with pdu=0x200016eff3c8 00:23:23.667 [2024-11-28 11:53:53.772632] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:5184 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:23.667 [2024-11-28 11:53:53.772652] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:23:23.667 [2024-11-28 11:53:53.777824] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ce1670) with pdu=0x200016eff3c8 00:23:23.667 [2024-11-28 11:53:53.777899] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:11456 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:23.667 [2024-11-28 11:53:53.777920] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:23:23.667 [2024-11-28 11:53:53.783158] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ce1670) with pdu=0x200016eff3c8 00:23:23.667 [2024-11-28 11:53:53.783250] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:14688 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:23.667 [2024-11-28 11:53:53.783271] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:23:23.667 [2024-11-28 11:53:53.788713] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ce1670) with pdu=0x200016eff3c8 00:23:23.667 [2024-11-28 11:53:53.788802] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:24064 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:23.667 [2024-11-28 11:53:53.788824] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:23:23.926 [2024-11-28 11:53:53.794164] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ce1670) with 
pdu=0x200016eff3c8 00:23:23.926 [2024-11-28 11:53:53.794266] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:3136 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:23.926 [2024-11-28 11:53:53.794288] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:23:23.927 [2024-11-28 11:53:53.799728] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ce1670) with pdu=0x200016eff3c8 00:23:23.927 [2024-11-28 11:53:53.799798] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:8672 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:23.927 [2024-11-28 11:53:53.799819] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:23:23.927 [2024-11-28 11:53:53.804972] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ce1670) with pdu=0x200016eff3c8 00:23:23.927 [2024-11-28 11:53:53.805058] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:10496 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:23.927 [2024-11-28 11:53:53.805078] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:23:23.927 [2024-11-28 11:53:53.810313] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ce1670) with pdu=0x200016eff3c8 00:23:23.927 [2024-11-28 11:53:53.810388] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:23264 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:23.927 [2024-11-28 11:53:53.810409] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:23:23.927 [2024-11-28 11:53:53.815676] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ce1670) with pdu=0x200016eff3c8 00:23:23.927 [2024-11-28 11:53:53.815754] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:6752 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:23.927 [2024-11-28 11:53:53.815775] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:23:23.927 [2024-11-28 11:53:53.820965] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ce1670) with pdu=0x200016eff3c8 00:23:23.927 [2024-11-28 11:53:53.821046] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:12704 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:23.927 [2024-11-28 11:53:53.821067] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:23:23.927 [2024-11-28 11:53:53.826253] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ce1670) with pdu=0x200016eff3c8 00:23:23.927 [2024-11-28 11:53:53.826348] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:10208 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:23.927 [2024-11-28 11:53:53.826370] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:23:23.927 [2024-11-28 11:53:53.831747] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error 
on tqpair=(0x1ce1670) with pdu=0x200016eff3c8 00:23:23.927 [2024-11-28 11:53:53.831825] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:20096 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:23.927 [2024-11-28 11:53:53.831846] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:23:23.927 [2024-11-28 11:53:53.836963] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ce1670) with pdu=0x200016eff3c8 00:23:23.927 [2024-11-28 11:53:53.837055] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:4160 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:23.927 [2024-11-28 11:53:53.837075] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:23:23.927 [2024-11-28 11:53:53.842354] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ce1670) with pdu=0x200016eff3c8 00:23:23.927 [2024-11-28 11:53:53.842435] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:13152 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:23.927 [2024-11-28 11:53:53.842456] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:23:23.927 [2024-11-28 11:53:53.847735] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ce1670) with pdu=0x200016eff3c8 00:23:23.927 [2024-11-28 11:53:53.847811] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:16064 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:23.927 [2024-11-28 11:53:53.847831] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:23:23.927 [2024-11-28 11:53:53.852952] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ce1670) with pdu=0x200016eff3c8 00:23:23.927 [2024-11-28 11:53:53.853034] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:23232 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:23.927 [2024-11-28 11:53:53.853055] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:23:23.927 [2024-11-28 11:53:53.858285] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ce1670) with pdu=0x200016eff3c8 00:23:23.927 [2024-11-28 11:53:53.858386] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:16832 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:23.927 [2024-11-28 11:53:53.858407] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:23:23.927 [2024-11-28 11:53:53.863669] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ce1670) with pdu=0x200016eff3c8 00:23:23.927 [2024-11-28 11:53:53.863772] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:7232 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:23.927 [2024-11-28 11:53:53.863793] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:23:23.927 [2024-11-28 11:53:53.869005] tcp.c:2233:data_crc32_calc_done: 
*ERROR*: Data digest error on tqpair=(0x1ce1670) with pdu=0x200016eff3c8 00:23:23.927 [2024-11-28 11:53:53.869096] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:19552 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:23.927 [2024-11-28 11:53:53.869116] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:23:23.927 [2024-11-28 11:53:53.874284] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ce1670) with pdu=0x200016eff3c8 00:23:23.927 [2024-11-28 11:53:53.874388] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:25568 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:23.927 [2024-11-28 11:53:53.874408] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:23:23.927 [2024-11-28 11:53:53.879615] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ce1670) with pdu=0x200016eff3c8 00:23:23.927 [2024-11-28 11:53:53.879737] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:21664 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:23.927 [2024-11-28 11:53:53.879757] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:23:23.927 [2024-11-28 11:53:53.884961] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ce1670) with pdu=0x200016eff3c8 00:23:23.927 [2024-11-28 11:53:53.885042] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:6016 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:23.927 [2024-11-28 11:53:53.885063] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:23:23.927 [2024-11-28 11:53:53.890375] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ce1670) with pdu=0x200016eff3c8 00:23:23.927 [2024-11-28 11:53:53.890455] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:11584 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:23.927 [2024-11-28 11:53:53.890484] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:23:23.927 [2024-11-28 11:53:53.895741] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ce1670) with pdu=0x200016eff3c8 00:23:23.927 [2024-11-28 11:53:53.895821] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:9312 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:23.927 [2024-11-28 11:53:53.895841] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:23:23.927 [2024-11-28 11:53:53.900969] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ce1670) with pdu=0x200016eff3c8 00:23:23.927 [2024-11-28 11:53:53.901063] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:21568 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:23.927 [2024-11-28 11:53:53.901084] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:23:23.927 [2024-11-28 11:53:53.906275] 
tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ce1670) with pdu=0x200016eff3c8 00:23:23.927 [2024-11-28 11:53:53.906363] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:14880 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:23.928 [2024-11-28 11:53:53.906384] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:23:23.928 [2024-11-28 11:53:53.911633] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ce1670) with pdu=0x200016eff3c8 00:23:23.928 [2024-11-28 11:53:53.911731] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:15392 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:23.928 [2024-11-28 11:53:53.911752] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:23:23.928 [2024-11-28 11:53:53.916907] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ce1670) with pdu=0x200016eff3c8 00:23:23.928 [2024-11-28 11:53:53.916994] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:10272 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:23.928 [2024-11-28 11:53:53.917014] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:23:23.928 [2024-11-28 11:53:53.922205] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ce1670) with pdu=0x200016eff3c8 00:23:23.928 [2024-11-28 11:53:53.922287] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:12896 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:23.928 [2024-11-28 11:53:53.922321] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:23:23.928 [2024-11-28 11:53:53.927608] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ce1670) with pdu=0x200016eff3c8 00:23:23.928 [2024-11-28 11:53:53.927688] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:928 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:23.928 [2024-11-28 11:53:53.927724] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:23:23.928 [2024-11-28 11:53:53.932974] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ce1670) with pdu=0x200016eff3c8 00:23:23.928 [2024-11-28 11:53:53.933053] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:12672 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:23.928 [2024-11-28 11:53:53.933074] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:23:23.928 [2024-11-28 11:53:53.938277] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ce1670) with pdu=0x200016eff3c8 00:23:23.928 [2024-11-28 11:53:53.938367] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:14720 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:23.928 [2024-11-28 11:53:53.938387] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:23:23.928 
[2024-11-28 11:53:53.943630] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ce1670) with pdu=0x200016eff3c8 00:23:23.928 [2024-11-28 11:53:53.943730] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:2784 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:23.928 [2024-11-28 11:53:53.943750] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:23:23.928 [2024-11-28 11:53:53.948895] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ce1670) with pdu=0x200016eff3c8 00:23:23.928 [2024-11-28 11:53:53.948974] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:3296 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:23.928 [2024-11-28 11:53:53.948994] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:23:23.928 [2024-11-28 11:53:53.954145] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ce1670) with pdu=0x200016eff3c8 00:23:23.928 [2024-11-28 11:53:53.954224] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:20608 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:23.928 [2024-11-28 11:53:53.954244] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:23:23.928 [2024-11-28 11:53:53.959547] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ce1670) with pdu=0x200016eff3c8 00:23:23.928 [2024-11-28 11:53:53.959641] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:8320 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:23.928 [2024-11-28 11:53:53.959662] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:23:23.928 [2024-11-28 11:53:53.964880] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ce1670) with pdu=0x200016eff3c8 00:23:23.928 [2024-11-28 11:53:53.964956] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:19264 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:23.928 [2024-11-28 11:53:53.964976] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:23:23.928 [2024-11-28 11:53:53.970163] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ce1670) with pdu=0x200016eff3c8 00:23:23.928 [2024-11-28 11:53:53.970240] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:5440 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:23.928 [2024-11-28 11:53:53.970260] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:23:23.928 [2024-11-28 11:53:53.975530] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ce1670) with pdu=0x200016eff3c8 00:23:23.928 [2024-11-28 11:53:53.975615] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:4448 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:23.928 [2024-11-28 11:53:53.975636] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 
p:0 m:0 dnr:0 00:23:23.928 [2024-11-28 11:53:53.980889] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ce1670) with pdu=0x200016eff3c8 00:23:23.928 [2024-11-28 11:53:53.980966] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:12192 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:23.928 [2024-11-28 11:53:53.980986] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:23:23.928 [2024-11-28 11:53:53.986194] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ce1670) with pdu=0x200016eff3c8 00:23:23.928 [2024-11-28 11:53:53.986276] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:864 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:23.928 [2024-11-28 11:53:53.986296] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:23:23.928 [2024-11-28 11:53:53.991585] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ce1670) with pdu=0x200016eff3c8 00:23:23.928 [2024-11-28 11:53:53.991670] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:20992 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:23.928 [2024-11-28 11:53:53.991706] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:23:23.928 [2024-11-28 11:53:53.996844] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ce1670) with pdu=0x200016eff3c8 00:23:23.928 [2024-11-28 11:53:53.996920] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:16128 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:23.928 [2024-11-28 11:53:53.996940] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:23:23.928 [2024-11-28 11:53:54.002082] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ce1670) with pdu=0x200016eff3c8 00:23:23.928 [2024-11-28 11:53:54.002174] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:6816 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:23.928 [2024-11-28 11:53:54.002194] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:23:23.928 [2024-11-28 11:53:54.007407] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ce1670) with pdu=0x200016eff3c8 00:23:23.928 [2024-11-28 11:53:54.007493] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:24800 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:23.928 [2024-11-28 11:53:54.007515] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:23:23.928 [2024-11-28 11:53:54.012749] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ce1670) with pdu=0x200016eff3c8 00:23:23.928 [2024-11-28 11:53:54.012828] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:24064 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:23.928 [2024-11-28 11:53:54.012848] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) 
qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:23:23.928 [2024-11-28 11:53:54.017989] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ce1670) with pdu=0x200016eff3c8 00:23:23.929 [2024-11-28 11:53:54.018074] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:25120 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:23.929 [2024-11-28 11:53:54.018095] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:23:23.929 [2024-11-28 11:53:54.023354] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ce1670) with pdu=0x200016eff3c8 00:23:23.929 [2024-11-28 11:53:54.023453] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:5120 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:23.929 [2024-11-28 11:53:54.023475] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:23:23.929 [2024-11-28 11:53:54.028661] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ce1670) with pdu=0x200016eff3c8 00:23:23.929 [2024-11-28 11:53:54.028737] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:17248 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:23.929 [2024-11-28 11:53:54.028758] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:23:23.929 [2024-11-28 11:53:54.034011] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ce1670) with pdu=0x200016eff3c8 00:23:23.929 [2024-11-28 11:53:54.034094] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:20160 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:23.929 [2024-11-28 11:53:54.034115] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:23:23.929 [2024-11-28 11:53:54.039412] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ce1670) with pdu=0x200016eff3c8 00:23:23.929 [2024-11-28 11:53:54.039491] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:5472 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:23.929 [2024-11-28 11:53:54.039512] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:23:23.929 [2024-11-28 11:53:54.044657] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ce1670) with pdu=0x200016eff3c8 00:23:23.929 [2024-11-28 11:53:54.044734] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:3040 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:23.929 [2024-11-28 11:53:54.044755] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:23:24.189 [2024-11-28 11:53:54.050140] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ce1670) with pdu=0x200016eff3c8 00:23:24.189 [2024-11-28 11:53:54.050226] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:24704 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:24.189 [2024-11-28 11:53:54.050248] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND 
TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:23:24.189 [2024-11-28 11:53:54.055516] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ce1670) with pdu=0x200016eff3c8 00:23:24.189 [2024-11-28 11:53:54.055615] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:24384 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:24.189 [2024-11-28 11:53:54.055637] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:23:24.189 [2024-11-28 11:53:54.060889] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ce1670) with pdu=0x200016eff3c8 00:23:24.189 [2024-11-28 11:53:54.060973] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:20864 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:24.189 [2024-11-28 11:53:54.060993] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:23:24.189 [2024-11-28 11:53:54.066127] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ce1670) with pdu=0x200016eff3c8 00:23:24.189 [2024-11-28 11:53:54.066204] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:20608 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:24.189 [2024-11-28 11:53:54.066224] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:23:24.189 [2024-11-28 11:53:54.071500] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ce1670) with pdu=0x200016eff3c8 00:23:24.189 [2024-11-28 11:53:54.071595] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:7168 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:24.189 [2024-11-28 11:53:54.071616] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:23:24.189 [2024-11-28 11:53:54.076739] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ce1670) with pdu=0x200016eff3c8 00:23:24.189 [2024-11-28 11:53:54.076816] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:12512 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:24.189 [2024-11-28 11:53:54.076836] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:23:24.189 [2024-11-28 11:53:54.081982] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ce1670) with pdu=0x200016eff3c8 00:23:24.189 [2024-11-28 11:53:54.082055] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:24832 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:24.189 [2024-11-28 11:53:54.082075] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:23:24.189 [2024-11-28 11:53:54.087236] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ce1670) with pdu=0x200016eff3c8 00:23:24.189 [2024-11-28 11:53:54.087334] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:5600 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:24.189 [2024-11-28 11:53:54.087366] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:23:24.189 [2024-11-28 11:53:54.092502] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ce1670) with pdu=0x200016eff3c8 00:23:24.189 [2024-11-28 11:53:54.092576] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:9152 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:24.189 [2024-11-28 11:53:54.092596] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:23:24.189 [2024-11-28 11:53:54.097811] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ce1670) with pdu=0x200016eff3c8 00:23:24.189 [2024-11-28 11:53:54.097889] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:1696 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:24.189 [2024-11-28 11:53:54.097908] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:23:24.189 [2024-11-28 11:53:54.103154] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ce1670) with pdu=0x200016eff3c8 00:23:24.189 [2024-11-28 11:53:54.103244] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:15584 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:24.189 [2024-11-28 11:53:54.103264] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:23:24.189 [2024-11-28 11:53:54.108503] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ce1670) with pdu=0x200016eff3c8 00:23:24.189 [2024-11-28 11:53:54.108586] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:12864 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:24.189 [2024-11-28 11:53:54.108606] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:23:24.189 [2024-11-28 11:53:54.113784] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ce1670) with pdu=0x200016eff3c8 00:23:24.189 [2024-11-28 11:53:54.113865] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:22336 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:24.189 [2024-11-28 11:53:54.113887] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:23:24.189 [2024-11-28 11:53:54.119149] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ce1670) with pdu=0x200016eff3c8 00:23:24.189 [2024-11-28 11:53:54.119230] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:448 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:24.189 [2024-11-28 11:53:54.119250] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:23:24.189 [2024-11-28 11:53:54.124473] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ce1670) with pdu=0x200016eff3c8 00:23:24.189 [2024-11-28 11:53:54.124550] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:11136 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:24.189 [2024-11-28 
11:53:54.124570] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:23:24.189 [2024-11-28 11:53:54.129709] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ce1670) with pdu=0x200016eff3c8 00:23:24.189 [2024-11-28 11:53:54.129784] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:20960 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:24.189 [2024-11-28 11:53:54.129806] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:23:24.189 [2024-11-28 11:53:54.135046] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ce1670) with pdu=0x200016eff3c8 00:23:24.189 [2024-11-28 11:53:54.135121] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:14400 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:24.189 [2024-11-28 11:53:54.135141] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:23:24.189 [2024-11-28 11:53:54.140282] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ce1670) with pdu=0x200016eff3c8 00:23:24.189 [2024-11-28 11:53:54.140383] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:4480 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:24.189 [2024-11-28 11:53:54.140404] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:23:24.189 [2024-11-28 11:53:54.145528] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ce1670) with pdu=0x200016eff3c8 00:23:24.189 [2024-11-28 11:53:54.145610] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:18848 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:24.189 [2024-11-28 11:53:54.145630] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:23:24.189 [2024-11-28 11:53:54.150869] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ce1670) with pdu=0x200016eff3c8 00:23:24.189 [2024-11-28 11:53:54.150945] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:11040 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:24.189 [2024-11-28 11:53:54.150965] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:23:24.189 [2024-11-28 11:53:54.156206] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ce1670) with pdu=0x200016eff3c8 00:23:24.189 [2024-11-28 11:53:54.156292] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:576 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:24.189 [2024-11-28 11:53:54.156326] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:23:24.189 [2024-11-28 11:53:54.161465] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ce1670) with pdu=0x200016eff3c8 00:23:24.189 [2024-11-28 11:53:54.161540] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:13056 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 
0x0 00:23:24.189 [2024-11-28 11:53:54.161561] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:23:24.189 [2024-11-28 11:53:54.166697] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ce1670) with pdu=0x200016eff3c8 00:23:24.189 [2024-11-28 11:53:54.166778] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:15872 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:24.189 [2024-11-28 11:53:54.166799] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:23:24.190 [2024-11-28 11:53:54.172125] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ce1670) with pdu=0x200016eff3c8 00:23:24.190 [2024-11-28 11:53:54.172211] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:21952 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:24.190 [2024-11-28 11:53:54.172231] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:23:24.190 [2024-11-28 11:53:54.177451] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ce1670) with pdu=0x200016eff3c8 00:23:24.190 [2024-11-28 11:53:54.177535] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:832 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:24.190 [2024-11-28 11:53:54.177555] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:23:24.190 [2024-11-28 11:53:54.182752] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ce1670) with pdu=0x200016eff3c8 00:23:24.190 [2024-11-28 11:53:54.182840] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:18816 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:24.190 [2024-11-28 11:53:54.182861] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:23:24.190 [2024-11-28 11:53:54.188051] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ce1670) with pdu=0x200016eff3c8 00:23:24.190 [2024-11-28 11:53:54.188131] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:21856 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:24.190 [2024-11-28 11:53:54.188152] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:23:24.190 [2024-11-28 11:53:54.193363] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ce1670) with pdu=0x200016eff3c8 00:23:24.190 [2024-11-28 11:53:54.193451] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:13376 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:24.190 [2024-11-28 11:53:54.193471] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:23:24.190 [2024-11-28 11:53:54.198658] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ce1670) with pdu=0x200016eff3c8 00:23:24.190 [2024-11-28 11:53:54.198734] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:1728 len:32 SGL 
TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:24.190 [2024-11-28 11:53:54.198755] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:23:24.190 [2024-11-28 11:53:54.203985] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ce1670) with pdu=0x200016eff3c8 00:23:24.190 [2024-11-28 11:53:54.204062] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:24096 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:24.190 [2024-11-28 11:53:54.204082] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:23:24.190 [2024-11-28 11:53:54.209248] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ce1670) with pdu=0x200016eff3c8 00:23:24.190 [2024-11-28 11:53:54.209338] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:5728 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:24.190 [2024-11-28 11:53:54.209358] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:23:24.190 [2024-11-28 11:53:54.214534] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ce1670) with pdu=0x200016eff3c8 00:23:24.190 [2024-11-28 11:53:54.214609] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:20160 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:24.190 [2024-11-28 11:53:54.214630] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:23:24.190 [2024-11-28 11:53:54.219839] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ce1670) with pdu=0x200016eff3c8 00:23:24.190 [2024-11-28 11:53:54.219916] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:6752 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:24.190 [2024-11-28 11:53:54.219936] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:23:24.190 [2024-11-28 11:53:54.225088] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ce1670) with pdu=0x200016eff3c8 00:23:24.190 [2024-11-28 11:53:54.225168] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:11264 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:24.190 [2024-11-28 11:53:54.225188] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:23:24.190 [2024-11-28 11:53:54.230357] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ce1670) with pdu=0x200016eff3c8 00:23:24.190 [2024-11-28 11:53:54.230432] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:2464 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:24.190 [2024-11-28 11:53:54.230452] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:23:24.190 [2024-11-28 11:53:54.235702] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ce1670) with pdu=0x200016eff3c8 00:23:24.190 [2024-11-28 11:53:54.235789] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 
nsid:1 lba:8032 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:24.190 [2024-11-28 11:53:54.235810] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:23:24.190 [2024-11-28 11:53:54.240943] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ce1670) with pdu=0x200016eff3c8 00:23:24.190 [2024-11-28 11:53:54.241031] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:13824 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:24.190 [2024-11-28 11:53:54.241052] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:23:24.190 [2024-11-28 11:53:54.246314] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ce1670) with pdu=0x200016eff3c8 00:23:24.190 [2024-11-28 11:53:54.246387] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:21632 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:24.190 [2024-11-28 11:53:54.246408] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:23:24.190 [2024-11-28 11:53:54.251576] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ce1670) with pdu=0x200016eff3c8 00:23:24.190 [2024-11-28 11:53:54.251675] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:23008 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:24.190 [2024-11-28 11:53:54.251710] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:23:24.190 [2024-11-28 11:53:54.256887] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ce1670) with pdu=0x200016eff3c8 00:23:24.190 [2024-11-28 11:53:54.256972] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:25216 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:24.190 [2024-11-28 11:53:54.256993] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:23:24.190 [2024-11-28 11:53:54.262197] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ce1670) with pdu=0x200016eff3c8 00:23:24.190 [2024-11-28 11:53:54.262288] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:17216 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:24.190 [2024-11-28 11:53:54.262319] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:23:24.190 [2024-11-28 11:53:54.267574] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ce1670) with pdu=0x200016eff3c8 00:23:24.190 [2024-11-28 11:53:54.267666] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:12960 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:24.190 [2024-11-28 11:53:54.267687] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:23:24.190 [2024-11-28 11:53:54.272868] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ce1670) with pdu=0x200016eff3c8 00:23:24.190 [2024-11-28 11:53:54.272956] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:992 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:24.190 [2024-11-28 11:53:54.272976] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:23:24.190 [2024-11-28 11:53:54.278143] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ce1670) with pdu=0x200016eff3c8 00:23:24.190 [2024-11-28 11:53:54.278221] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:10752 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:24.190 [2024-11-28 11:53:54.278241] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:23:24.190 [2024-11-28 11:53:54.283450] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ce1670) with pdu=0x200016eff3c8 00:23:24.190 [2024-11-28 11:53:54.283532] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:4480 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:24.190 [2024-11-28 11:53:54.283553] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:23:24.190 [2024-11-28 11:53:54.288646] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ce1670) with pdu=0x200016eff3c8 00:23:24.190 [2024-11-28 11:53:54.288734] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:17312 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:24.190 [2024-11-28 11:53:54.288755] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:23:24.190 [2024-11-28 11:53:54.293867] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ce1670) with pdu=0x200016eff3c8 00:23:24.190 [2024-11-28 11:53:54.293941] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:13888 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:24.190 [2024-11-28 11:53:54.293962] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:23:24.190 [2024-11-28 11:53:54.299107] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ce1670) with pdu=0x200016eff3c8 00:23:24.190 [2024-11-28 11:53:54.299189] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:4672 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:24.190 [2024-11-28 11:53:54.299210] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:23:24.190 [2024-11-28 11:53:54.304422] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ce1670) with pdu=0x200016eff3c8 00:23:24.190 [2024-11-28 11:53:54.304510] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:5888 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:24.190 [2024-11-28 11:53:54.304531] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:23:24.190 [2024-11-28 11:53:54.309747] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ce1670) with pdu=0x200016eff3c8 00:23:24.190 [2024-11-28 11:53:54.309834] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:4768 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:24.190 [2024-11-28 11:53:54.309856] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:23:24.450 [2024-11-28 11:53:54.315277] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ce1670) with pdu=0x200016eff3c8 00:23:24.450 [2024-11-28 11:53:54.315367] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:13760 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:24.450 [2024-11-28 11:53:54.315400] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:23:24.450 [2024-11-28 11:53:54.320669] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ce1670) with pdu=0x200016eff3c8 00:23:24.450 [2024-11-28 11:53:54.320744] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:14688 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:24.451 [2024-11-28 11:53:54.320764] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:23:24.451 [2024-11-28 11:53:54.325868] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ce1670) with pdu=0x200016eff3c8 00:23:24.451 [2024-11-28 11:53:54.325947] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:13088 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:24.451 [2024-11-28 11:53:54.325967] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:23:24.451 [2024-11-28 11:53:54.331213] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ce1670) with pdu=0x200016eff3c8 00:23:24.451 [2024-11-28 11:53:54.331299] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:24864 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:24.451 [2024-11-28 11:53:54.331336] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:23:24.451 [2024-11-28 11:53:54.336532] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ce1670) with pdu=0x200016eff3c8 00:23:24.451 [2024-11-28 11:53:54.336619] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:13504 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:24.451 [2024-11-28 11:53:54.336640] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:23:24.451 [2024-11-28 11:53:54.341851] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ce1670) with pdu=0x200016eff3c8 00:23:24.451 [2024-11-28 11:53:54.341932] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:8128 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:24.451 [2024-11-28 11:53:54.341952] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:23:24.451 [2024-11-28 11:53:54.347210] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ce1670) with pdu=0x200016eff3c8 00:23:24.451 [2024-11-28 
11:53:54.347285] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:22400 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:24.451 [2024-11-28 11:53:54.347320] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:23:24.451 [2024-11-28 11:53:54.352568] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ce1670) with pdu=0x200016eff3c8 00:23:24.451 [2024-11-28 11:53:54.352645] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:14816 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:24.451 [2024-11-28 11:53:54.352664] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:23:24.451 [2024-11-28 11:53:54.357756] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ce1670) with pdu=0x200016eff3c8 00:23:24.451 [2024-11-28 11:53:54.357832] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:2176 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:24.451 [2024-11-28 11:53:54.357852] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:23:24.451 [2024-11-28 11:53:54.363073] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ce1670) with pdu=0x200016eff3c8 00:23:24.451 [2024-11-28 11:53:54.363165] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:23392 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:24.451 [2024-11-28 11:53:54.363186] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:23:24.451 [2024-11-28 11:53:54.368336] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ce1670) with pdu=0x200016eff3c8 00:23:24.451 [2024-11-28 11:53:54.368413] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:17408 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:24.451 [2024-11-28 11:53:54.368433] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:23:24.451 [2024-11-28 11:53:54.373676] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ce1670) with pdu=0x200016eff3c8 00:23:24.451 [2024-11-28 11:53:54.373753] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:19040 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:24.451 [2024-11-28 11:53:54.373773] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:23:24.451 [2024-11-28 11:53:54.379135] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ce1670) with pdu=0x200016eff3c8 00:23:24.451 [2024-11-28 11:53:54.379233] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:24352 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:24.451 [2024-11-28 11:53:54.379254] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:23:24.451 [2024-11-28 11:53:54.384641] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ce1670) with 
pdu=0x200016eff3c8 00:23:24.451 [2024-11-28 11:53:54.384743] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:17536 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:24.451 [2024-11-28 11:53:54.384764] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:23:24.451 [2024-11-28 11:53:54.389959] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ce1670) with pdu=0x200016eff3c8 00:23:24.451 [2024-11-28 11:53:54.390047] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:23072 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:24.451 [2024-11-28 11:53:54.390067] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:23:24.451 [2024-11-28 11:53:54.395531] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ce1670) with pdu=0x200016eff3c8 00:23:24.451 [2024-11-28 11:53:54.395627] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:9632 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:24.451 [2024-11-28 11:53:54.395665] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:23:24.451 [2024-11-28 11:53:54.401125] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ce1670) with pdu=0x200016eff3c8 00:23:24.451 [2024-11-28 11:53:54.401413] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:4512 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:24.451 [2024-11-28 11:53:54.401452] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:23:24.451 [2024-11-28 11:53:54.406628] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ce1670) with pdu=0x200016eff3c8 00:23:24.451 [2024-11-28 11:53:54.406696] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:20800 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:24.451 [2024-11-28 11:53:54.406718] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:23:24.451 [2024-11-28 11:53:54.412062] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ce1670) with pdu=0x200016eff3c8 00:23:24.451 [2024-11-28 11:53:54.412148] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:18880 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:24.451 [2024-11-28 11:53:54.412169] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:23:24.451 [2024-11-28 11:53:54.417457] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ce1670) with pdu=0x200016eff3c8 00:23:24.451 [2024-11-28 11:53:54.417552] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:1248 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:24.451 [2024-11-28 11:53:54.417573] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:23:24.451 [2024-11-28 11:53:54.422877] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error 
on tqpair=(0x1ce1670) with pdu=0x200016eff3c8 00:23:24.451 [2024-11-28 11:53:54.422961] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:25280 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:24.451 [2024-11-28 11:53:54.422982] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:23:24.451 [2024-11-28 11:53:54.428207] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ce1670) with pdu=0x200016eff3c8 00:23:24.451 [2024-11-28 11:53:54.428280] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:20960 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:24.451 [2024-11-28 11:53:54.428301] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:23:24.451 [2024-11-28 11:53:54.433629] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ce1670) with pdu=0x200016eff3c8 00:23:24.451 [2024-11-28 11:53:54.433707] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:5216 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:24.451 [2024-11-28 11:53:54.433728] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:23:24.451 [2024-11-28 11:53:54.439286] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ce1670) with pdu=0x200016eff3c8 00:23:24.451 [2024-11-28 11:53:54.439420] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:25440 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:24.451 [2024-11-28 11:53:54.439453] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:23:24.451 [2024-11-28 11:53:54.444844] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ce1670) with pdu=0x200016eff3c8 00:23:24.451 [2024-11-28 11:53:54.444932] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:15072 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:24.451 [2024-11-28 11:53:54.444954] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:23:24.451 [2024-11-28 11:53:54.450536] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ce1670) with pdu=0x200016eff3c8 00:23:24.451 [2024-11-28 11:53:54.450608] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:24736 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:24.451 [2024-11-28 11:53:54.450631] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:23:24.451 [2024-11-28 11:53:54.456257] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ce1670) with pdu=0x200016eff3c8 00:23:24.451 [2024-11-28 11:53:54.456356] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:21952 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:24.451 [2024-11-28 11:53:54.456380] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:23:24.451 [2024-11-28 11:53:54.461810] 
tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ce1670) with pdu=0x200016eff3c8 00:23:24.451 [2024-11-28 11:53:54.461895] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:8768 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:24.451 [2024-11-28 11:53:54.461917] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:23:24.451 [2024-11-28 11:53:54.467454] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ce1670) with pdu=0x200016eff3c8 00:23:24.452 [2024-11-28 11:53:54.467536] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:22816 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:24.452 [2024-11-28 11:53:54.467558] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:23:24.452 [2024-11-28 11:53:54.472982] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ce1670) with pdu=0x200016eff3c8 00:23:24.452 [2024-11-28 11:53:54.473073] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:6528 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:24.452 [2024-11-28 11:53:54.473093] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:23:24.452 [2024-11-28 11:53:54.478587] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ce1670) with pdu=0x200016eff3c8 00:23:24.452 [2024-11-28 11:53:54.478663] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:3008 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:24.452 [2024-11-28 11:53:54.478686] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:23:24.452 [2024-11-28 11:53:54.484215] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ce1670) with pdu=0x200016eff3c8 00:23:24.452 [2024-11-28 11:53:54.484296] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:192 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:24.452 [2024-11-28 11:53:54.484361] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:23:24.452 [2024-11-28 11:53:54.489651] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ce1670) with pdu=0x200016eff3c8 00:23:24.452 [2024-11-28 11:53:54.489746] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:11680 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:24.452 [2024-11-28 11:53:54.489766] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:23:24.452 [2024-11-28 11:53:54.495029] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ce1670) with pdu=0x200016eff3c8 00:23:24.452 [2024-11-28 11:53:54.495106] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:8896 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:24.452 [2024-11-28 11:53:54.495126] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:23:24.452 
[2024-11-28 11:53:54.500373] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ce1670) with pdu=0x200016eff3c8 00:23:24.452 [2024-11-28 11:53:54.500459] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:7616 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:24.452 [2024-11-28 11:53:54.500480] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:23:24.452 [2024-11-28 11:53:54.505725] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ce1670) with pdu=0x200016eff3c8 00:23:24.452 [2024-11-28 11:53:54.505822] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:15360 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:24.452 [2024-11-28 11:53:54.505843] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:23:24.452 [2024-11-28 11:53:54.511027] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ce1670) with pdu=0x200016eff3c8 00:23:24.452 [2024-11-28 11:53:54.511099] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:3968 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:24.452 [2024-11-28 11:53:54.511120] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:23:24.452 [2024-11-28 11:53:54.516416] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ce1670) with pdu=0x200016eff3c8 00:23:24.452 [2024-11-28 11:53:54.516528] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:3232 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:24.452 [2024-11-28 11:53:54.516563] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:23:24.452 [2024-11-28 11:53:54.521780] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ce1670) with pdu=0x200016eff3c8 00:23:24.452 [2024-11-28 11:53:54.521859] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:3488 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:24.452 [2024-11-28 11:53:54.521880] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:23:24.452 [2024-11-28 11:53:54.527150] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ce1670) with pdu=0x200016eff3c8 00:23:24.452 5749.50 IOPS, 718.69 MiB/s [2024-11-28T11:53:54.578Z] [2024-11-28 11:53:54.528406] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:22304 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:24.452 [2024-11-28 11:53:54.528454] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:23:24.452 00:23:24.452 Latency(us) 00:23:24.452 [2024-11-28T11:53:54.578Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:23:24.452 Job: nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 16, IO size: 131072) 00:23:24.452 nvme0n1 : 2.00 5747.43 718.43 0.00 0.00 2778.46 2234.18 9592.09 00:23:24.452 [2024-11-28T11:53:54.578Z] 
=================================================================================================================== 00:23:24.452 [2024-11-28T11:53:54.578Z] Total : 5747.43 718.43 0.00 0.00 2778.46 2234.18 9592.09 00:23:24.452 { 00:23:24.452 "results": [ 00:23:24.452 { 00:23:24.452 "job": "nvme0n1", 00:23:24.452 "core_mask": "0x2", 00:23:24.452 "workload": "randwrite", 00:23:24.452 "status": "finished", 00:23:24.452 "queue_depth": 16, 00:23:24.452 "io_size": 131072, 00:23:24.452 "runtime": 2.003331, 00:23:24.452 "iops": 5747.427659233546, 00:23:24.452 "mibps": 718.4284574041933, 00:23:24.452 "io_failed": 0, 00:23:24.452 "io_timeout": 0, 00:23:24.452 "avg_latency_us": 2778.4562439401834, 00:23:24.452 "min_latency_us": 2234.181818181818, 00:23:24.452 "max_latency_us": 9592.087272727273 00:23:24.452 } 00:23:24.452 ], 00:23:24.452 "core_count": 1 00:23:24.452 } 00:23:24.452 11:53:54 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # get_transient_errcount nvme0n1 00:23:24.452 11:53:54 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@27 -- # bperf_rpc bdev_get_iostat -b nvme0n1 00:23:24.452 11:53:54 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_get_iostat -b nvme0n1 00:23:24.452 11:53:54 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@28 -- # jq -r '.bdevs[0] 00:23:24.452 | .driver_specific 00:23:24.452 | .nvme_error 00:23:24.452 | .status_code 00:23:24.452 | .command_transient_transport_error' 00:23:25.021 11:53:54 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # (( 372 > 0 )) 00:23:25.021 11:53:54 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@73 -- # killprocess 97424 00:23:25.021 11:53:54 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@954 -- # '[' -z 97424 ']' 00:23:25.021 11:53:54 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@958 -- # kill -0 97424 00:23:25.021 11:53:54 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@959 -- # uname 00:23:25.021 11:53:54 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:23:25.021 11:53:54 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 97424 00:23:25.021 11:53:54 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:23:25.021 11:53:54 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:23:25.021 killing process with pid 97424 00:23:25.021 11:53:54 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@972 -- # echo 'killing process with pid 97424' 00:23:25.021 Received shutdown signal, test time was about 2.000000 seconds 00:23:25.022 00:23:25.022 Latency(us) 00:23:25.022 [2024-11-28T11:53:55.148Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:23:25.022 [2024-11-28T11:53:55.148Z] =================================================================================================================== 00:23:25.022 [2024-11-28T11:53:55.148Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:23:25.022 11:53:54 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@973 -- # kill 97424 00:23:25.022 11:53:54 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- 
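For reference, the get_transient_errcount check traced above reduces to a single RPC-plus-jq pipeline against the bperf socket. A minimal standalone sketch of that step (socket path, bdev name and jq path taken from the trace; the variable names are illustrative):

    # Sketch only: query bdevperf's RPC socket for nvme0n1 iostat and pull out
    # the transient-transport-error counter that the digest_error test asserts on.
    rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
    bperf_sock=/var/tmp/bperf.sock
    errcount=$("$rpc_py" -s "$bperf_sock" bdev_get_iostat -b nvme0n1 \
        | jq -r '.bdevs[0].driver_specific.nvme_error.status_code.command_transient_transport_error')
    # This run reported 372 such errors, so the (( errcount > 0 )) check passes.
    (( errcount > 0 )) && echo "transient transport errors seen: $errcount"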
common/autotest_common.sh@978 -- # wait 97424 00:23:25.022 11:53:55 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@116 -- # killprocess 97239 00:23:25.022 11:53:55 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@954 -- # '[' -z 97239 ']' 00:23:25.022 11:53:55 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@958 -- # kill -0 97239 00:23:25.022 11:53:55 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@959 -- # uname 00:23:25.022 11:53:55 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:23:25.022 11:53:55 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 97239 00:23:25.022 11:53:55 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:23:25.022 11:53:55 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:23:25.022 killing process with pid 97239 00:23:25.022 11:53:55 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@972 -- # echo 'killing process with pid 97239' 00:23:25.022 11:53:55 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@973 -- # kill 97239 00:23:25.022 11:53:55 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@978 -- # wait 97239 00:23:25.280 ************************************ 00:23:25.280 END TEST nvmf_digest_error 00:23:25.280 ************************************ 00:23:25.280 00:23:25.280 real 0m15.448s 00:23:25.280 user 0m28.292s 00:23:25.280 sys 0m5.149s 00:23:25.280 11:53:55 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@1130 -- # xtrace_disable 00:23:25.280 11:53:55 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:23:25.280 11:53:55 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@149 -- # trap - SIGINT SIGTERM EXIT 00:23:25.280 11:53:55 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@150 -- # nvmftestfini 00:23:25.280 11:53:55 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@516 -- # nvmfcleanup 00:23:25.280 11:53:55 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@121 -- # sync 00:23:25.539 11:53:55 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:23:25.539 11:53:55 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@124 -- # set +e 00:23:25.539 11:53:55 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@125 -- # for i in {1..20} 00:23:25.540 11:53:55 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:23:25.540 rmmod nvme_tcp 00:23:25.540 rmmod nvme_fabrics 00:23:25.540 rmmod nvme_keyring 00:23:25.540 11:53:55 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:23:25.540 11:53:55 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@128 -- # set -e 00:23:25.540 11:53:55 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@129 -- # return 0 00:23:25.540 11:53:55 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@517 -- # '[' -n 97239 ']' 00:23:25.540 11:53:55 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@518 -- # killprocess 97239 00:23:25.540 11:53:55 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@954 -- # '[' -z 97239 ']' 00:23:25.540 11:53:55 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@958 -- # kill -0 97239 00:23:25.540 
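The killprocess calls traced above (pids 97424 and 97239) follow a fixed pattern: probe the pid, refuse to signal sudo, then kill and reap it. A simplified sketch of that helper, assuming only the behaviour visible in the trace (the real autotest_common.sh version carries more bookkeeping):

    # Simplified sketch of the killprocess pattern traced above.
    killprocess() {
        local pid=$1
        [ -n "$pid" ] || return 1                  # require a pid argument
        kill -0 "$pid" 2>/dev/null \
            || { echo "Process with pid $pid is not found"; return 1; }
        local name
        name=$(ps --no-headers -o comm= "$pid")
        [ "$name" = sudo ] && return 1             # never signal an elevated wrapper
        echo "killing process with pid $pid"
        kill "$pid"
        wait "$pid" || true                        # reap it; ignore its exit status
    }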
/home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 958: kill: (97239) - No such process 00:23:25.540 Process with pid 97239 is not found 00:23:25.540 11:53:55 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@981 -- # echo 'Process with pid 97239 is not found' 00:23:25.540 11:53:55 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:23:25.540 11:53:55 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:23:25.540 11:53:55 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:23:25.540 11:53:55 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@297 -- # iptr 00:23:25.540 11:53:55 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@791 -- # iptables-save 00:23:25.540 11:53:55 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@791 -- # iptables-restore 00:23:25.540 11:53:55 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:23:25.540 11:53:55 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@298 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:23:25.540 11:53:55 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@299 -- # nvmf_veth_fini 00:23:25.540 11:53:55 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@233 -- # ip link set nvmf_init_br nomaster 00:23:25.540 11:53:55 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@234 -- # ip link set nvmf_init_br2 nomaster 00:23:25.540 11:53:55 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@235 -- # ip link set nvmf_tgt_br nomaster 00:23:25.540 11:53:55 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@236 -- # ip link set nvmf_tgt_br2 nomaster 00:23:25.540 11:53:55 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@237 -- # ip link set nvmf_init_br down 00:23:25.540 11:53:55 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@238 -- # ip link set nvmf_init_br2 down 00:23:25.540 11:53:55 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@239 -- # ip link set nvmf_tgt_br down 00:23:25.540 11:53:55 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@240 -- # ip link set nvmf_tgt_br2 down 00:23:25.540 11:53:55 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@241 -- # ip link delete nvmf_br type bridge 00:23:25.540 11:53:55 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@242 -- # ip link delete nvmf_init_if 00:23:25.540 11:53:55 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@243 -- # ip link delete nvmf_init_if2 00:23:25.540 11:53:55 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@244 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:23:25.799 11:53:55 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@245 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:23:25.799 11:53:55 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@246 -- # remove_spdk_ns 00:23:25.799 11:53:55 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:23:25.799 11:53:55 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:23:25.799 11:53:55 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:23:25.799 11:53:55 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@300 -- # return 0 00:23:25.799 00:23:25.799 real 0m32.258s 00:23:25.799 user 0m57.324s 00:23:25.799 sys 0m10.958s 00:23:25.799 ************************************ 00:23:25.799 END TEST nvmf_digest 00:23:25.799 ************************************ 00:23:25.799 11:53:55 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1130 -- # 
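The nvmftestfini/nvmf_tcp_fini sequence traced here (it continues through the next few records) amounts to: unload the NVMe fabrics modules, drop the SPDK_NVMF iptables rules, and tear down the veth/bridge topology plus the target namespace interfaces. A condensed sketch using the device and namespace names from the trace; the "|| true" guards are added so the sketch stays idempotent:

    # Condensed sketch of the teardown traced above.
    modprobe -v -r nvme-tcp                          # also drops nvme_fabrics/nvme_keyring deps
    modprobe -v -r nvme-fabrics
    iptables-save | grep -v SPDK_NVMF | iptables-restore
    for dev in nvmf_init_br nvmf_init_br2 nvmf_tgt_br nvmf_tgt_br2; do
        ip link set "$dev" nomaster || true          # detach from the bridge if attached
        ip link set "$dev" down || true
    done
    ip link delete nvmf_br type bridge || true
    ip link delete nvmf_init_if || true
    ip link delete nvmf_init_if2 || true
    ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if || true
    ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 || true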
xtrace_disable 00:23:25.799 11:53:55 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@10 -- # set +x 00:23:25.799 11:53:55 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@36 -- # [[ 0 -eq 1 ]] 00:23:25.799 11:53:55 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@41 -- # [[ 1 -eq 1 ]] 00:23:25.799 11:53:55 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@42 -- # run_test nvmf_host_multipath /home/vagrant/spdk_repo/spdk/test/nvmf/host/multipath.sh --transport=tcp 00:23:25.799 11:53:55 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:23:25.799 11:53:55 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1111 -- # xtrace_disable 00:23:25.799 11:53:55 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:23:25.799 ************************************ 00:23:25.799 START TEST nvmf_host_multipath 00:23:25.799 ************************************ 00:23:25.799 11:53:55 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/host/multipath.sh --transport=tcp 00:23:25.799 * Looking for test storage... 00:23:25.799 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/host 00:23:25.799 11:53:55 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:23:25.799 11:53:55 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:23:25.799 11:53:55 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@1693 -- # lcov --version 00:23:26.060 11:53:55 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:23:26.060 11:53:55 nvmf_tcp.nvmf_host.nvmf_host_multipath -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:23:26.060 11:53:55 nvmf_tcp.nvmf_host.nvmf_host_multipath -- scripts/common.sh@333 -- # local ver1 ver1_l 00:23:26.060 11:53:55 nvmf_tcp.nvmf_host.nvmf_host_multipath -- scripts/common.sh@334 -- # local ver2 ver2_l 00:23:26.060 11:53:55 nvmf_tcp.nvmf_host.nvmf_host_multipath -- scripts/common.sh@336 -- # IFS=.-: 00:23:26.060 11:53:55 nvmf_tcp.nvmf_host.nvmf_host_multipath -- scripts/common.sh@336 -- # read -ra ver1 00:23:26.060 11:53:55 nvmf_tcp.nvmf_host.nvmf_host_multipath -- scripts/common.sh@337 -- # IFS=.-: 00:23:26.060 11:53:55 nvmf_tcp.nvmf_host.nvmf_host_multipath -- scripts/common.sh@337 -- # read -ra ver2 00:23:26.060 11:53:55 nvmf_tcp.nvmf_host.nvmf_host_multipath -- scripts/common.sh@338 -- # local 'op=<' 00:23:26.060 11:53:55 nvmf_tcp.nvmf_host.nvmf_host_multipath -- scripts/common.sh@340 -- # ver1_l=2 00:23:26.060 11:53:55 nvmf_tcp.nvmf_host.nvmf_host_multipath -- scripts/common.sh@341 -- # ver2_l=1 00:23:26.060 11:53:55 nvmf_tcp.nvmf_host.nvmf_host_multipath -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:23:26.060 11:53:55 nvmf_tcp.nvmf_host.nvmf_host_multipath -- scripts/common.sh@344 -- # case "$op" in 00:23:26.060 11:53:55 nvmf_tcp.nvmf_host.nvmf_host_multipath -- scripts/common.sh@345 -- # : 1 00:23:26.060 11:53:55 nvmf_tcp.nvmf_host.nvmf_host_multipath -- scripts/common.sh@364 -- # (( v = 0 )) 00:23:26.060 11:53:55 nvmf_tcp.nvmf_host.nvmf_host_multipath -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:23:26.060 11:53:55 nvmf_tcp.nvmf_host.nvmf_host_multipath -- scripts/common.sh@365 -- # decimal 1 00:23:26.060 11:53:55 nvmf_tcp.nvmf_host.nvmf_host_multipath -- scripts/common.sh@353 -- # local d=1 00:23:26.060 11:53:55 nvmf_tcp.nvmf_host.nvmf_host_multipath -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:23:26.060 11:53:55 nvmf_tcp.nvmf_host.nvmf_host_multipath -- scripts/common.sh@355 -- # echo 1 00:23:26.060 11:53:55 nvmf_tcp.nvmf_host.nvmf_host_multipath -- scripts/common.sh@365 -- # ver1[v]=1 00:23:26.060 11:53:55 nvmf_tcp.nvmf_host.nvmf_host_multipath -- scripts/common.sh@366 -- # decimal 2 00:23:26.060 11:53:55 nvmf_tcp.nvmf_host.nvmf_host_multipath -- scripts/common.sh@353 -- # local d=2 00:23:26.060 11:53:55 nvmf_tcp.nvmf_host.nvmf_host_multipath -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:23:26.060 11:53:55 nvmf_tcp.nvmf_host.nvmf_host_multipath -- scripts/common.sh@355 -- # echo 2 00:23:26.060 11:53:55 nvmf_tcp.nvmf_host.nvmf_host_multipath -- scripts/common.sh@366 -- # ver2[v]=2 00:23:26.060 11:53:55 nvmf_tcp.nvmf_host.nvmf_host_multipath -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:23:26.060 11:53:55 nvmf_tcp.nvmf_host.nvmf_host_multipath -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:23:26.060 11:53:55 nvmf_tcp.nvmf_host.nvmf_host_multipath -- scripts/common.sh@368 -- # return 0 00:23:26.060 11:53:55 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:23:26.060 11:53:55 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:23:26.060 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:23:26.060 --rc genhtml_branch_coverage=1 00:23:26.060 --rc genhtml_function_coverage=1 00:23:26.060 --rc genhtml_legend=1 00:23:26.060 --rc geninfo_all_blocks=1 00:23:26.060 --rc geninfo_unexecuted_blocks=1 00:23:26.060 00:23:26.060 ' 00:23:26.060 11:53:55 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:23:26.060 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:23:26.060 --rc genhtml_branch_coverage=1 00:23:26.060 --rc genhtml_function_coverage=1 00:23:26.060 --rc genhtml_legend=1 00:23:26.060 --rc geninfo_all_blocks=1 00:23:26.060 --rc geninfo_unexecuted_blocks=1 00:23:26.060 00:23:26.060 ' 00:23:26.060 11:53:55 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:23:26.060 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:23:26.060 --rc genhtml_branch_coverage=1 00:23:26.060 --rc genhtml_function_coverage=1 00:23:26.060 --rc genhtml_legend=1 00:23:26.060 --rc geninfo_all_blocks=1 00:23:26.060 --rc geninfo_unexecuted_blocks=1 00:23:26.060 00:23:26.060 ' 00:23:26.060 11:53:55 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:23:26.060 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:23:26.060 --rc genhtml_branch_coverage=1 00:23:26.060 --rc genhtml_function_coverage=1 00:23:26.060 --rc genhtml_legend=1 00:23:26.060 --rc geninfo_all_blocks=1 00:23:26.060 --rc geninfo_unexecuted_blocks=1 00:23:26.060 00:23:26.060 ' 00:23:26.060 11:53:55 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:23:26.060 11:53:55 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@7 -- # uname -s 00:23:26.060 11:53:55 nvmf_tcp.nvmf_host.nvmf_host_multipath -- 
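The lcov probe above runs cmp_versions from scripts/common.sh, which splits each version string on '.' and '-' and compares the fields numerically. A stripped-down sketch of the same idea (the helper name here is illustrative; the real cmp_versions also handles the other operators and more edge cases):

    # Returns success when $1 < $2, comparing dot/dash-separated numeric fields,
    # in the style of the cmp_versions trace above.
    version_lt() {
        local IFS=.- v
        local -a ver1 ver2
        read -ra ver1 <<< "$1"
        read -ra ver2 <<< "$2"
        for (( v = 0; v < ${#ver1[@]} || v < ${#ver2[@]}; v++ )); do
            (( ${ver1[v]:-0} > ${ver2[v]:-0} )) && return 1
            (( ${ver1[v]:-0} < ${ver2[v]:-0} )) && return 0
        done
        return 1    # equal
    }
    version_lt 1.15 2 && echo "lcov 1.15 is older than 2, keep default LCOV_OPTS"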
nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:23:26.060 11:53:55 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:23:26.060 11:53:55 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:23:26.060 11:53:55 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:23:26.060 11:53:55 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:23:26.060 11:53:55 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:23:26.060 11:53:55 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:23:26.060 11:53:55 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:23:26.060 11:53:55 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:23:26.060 11:53:55 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:23:26.060 11:53:55 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:f820f793-c892-4aa4-a8a4-5ed3fda41d6c 00:23:26.060 11:53:55 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@18 -- # NVME_HOSTID=f820f793-c892-4aa4-a8a4-5ed3fda41d6c 00:23:26.060 11:53:55 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:23:26.060 11:53:55 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:23:26.060 11:53:55 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:23:26.060 11:53:55 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:23:26.060 11:53:55 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:23:26.060 11:53:55 nvmf_tcp.nvmf_host.nvmf_host_multipath -- scripts/common.sh@15 -- # shopt -s extglob 00:23:26.060 11:53:55 nvmf_tcp.nvmf_host.nvmf_host_multipath -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:23:26.061 11:53:55 nvmf_tcp.nvmf_host.nvmf_host_multipath -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:23:26.061 11:53:55 nvmf_tcp.nvmf_host.nvmf_host_multipath -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:23:26.061 11:53:55 nvmf_tcp.nvmf_host.nvmf_host_multipath -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:26.061 11:53:55 nvmf_tcp.nvmf_host.nvmf_host_multipath -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:26.061 11:53:55 nvmf_tcp.nvmf_host.nvmf_host_multipath -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:26.061 11:53:55 nvmf_tcp.nvmf_host.nvmf_host_multipath -- paths/export.sh@5 -- # export PATH 00:23:26.061 11:53:55 nvmf_tcp.nvmf_host.nvmf_host_multipath -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:26.061 11:53:55 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@51 -- # : 0 00:23:26.061 11:53:55 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:23:26.061 11:53:55 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:23:26.061 11:53:55 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:23:26.061 11:53:55 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:23:26.061 11:53:55 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:23:26.061 11:53:55 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:23:26.061 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:23:26.061 11:53:55 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:23:26.061 11:53:55 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:23:26.061 11:53:55 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@55 -- # have_pci_nics=0 00:23:26.061 11:53:55 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@11 -- # MALLOC_BDEV_SIZE=64 00:23:26.061 11:53:55 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:23:26.061 11:53:55 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@14 
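Among the variables set while sourcing nvmf/common.sh above are the host identity (NVME_HOSTNQN from nvme gen-hostnqn, with NVME_HOSTID as its UUID suffix) and NVME_CONNECT='nvme connect'. A hedged sketch of how that identity is typically combined into a connect command (address, port and subsystem NQN as defined elsewhere in this trace; whether multipath.sh itself issues an nvme connect in exactly this form is not shown in this excerpt):

    # Illustrative only: connect to the first target path using the identity
    # variables defined above.
    NVME_HOSTNQN=$(nvme gen-hostnqn)
    NVME_HOSTID=${NVME_HOSTNQN##*:}        # uuid portion, matching the trace above
    nvme connect -t tcp -a 10.0.0.3 -s 4420 \
        -n nqn.2016-06.io.spdk:cnode1 \
        --hostnqn="$NVME_HOSTNQN" --hostid="$NVME_HOSTID"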
-- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:23:26.061 11:53:55 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@15 -- # bpf_sh=/home/vagrant/spdk_repo/spdk/scripts/bpftrace.sh 00:23:26.061 11:53:55 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@17 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:23:26.061 11:53:55 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@20 -- # NQN=nqn.2016-06.io.spdk:cnode1 00:23:26.061 11:53:55 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@30 -- # nvmftestinit 00:23:26.061 11:53:55 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:23:26.061 11:53:55 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:23:26.061 11:53:55 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@476 -- # prepare_net_devs 00:23:26.061 11:53:55 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@438 -- # local -g is_hw=no 00:23:26.061 11:53:55 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@440 -- # remove_spdk_ns 00:23:26.061 11:53:55 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:23:26.061 11:53:55 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:23:26.061 11:53:55 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:23:26.061 11:53:56 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@442 -- # [[ virt != virt ]] 00:23:26.061 11:53:56 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@444 -- # [[ no == yes ]] 00:23:26.061 11:53:56 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@451 -- # [[ virt == phy ]] 00:23:26.061 11:53:56 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@454 -- # [[ virt == phy-fallback ]] 00:23:26.061 11:53:56 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@459 -- # [[ tcp == tcp ]] 00:23:26.061 11:53:56 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@460 -- # nvmf_veth_init 00:23:26.061 11:53:56 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@145 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:23:26.061 11:53:56 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@146 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.2 00:23:26.061 11:53:56 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@147 -- # NVMF_FIRST_TARGET_IP=10.0.0.3 00:23:26.061 11:53:56 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@148 -- # NVMF_SECOND_TARGET_IP=10.0.0.4 00:23:26.061 11:53:56 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@149 -- # NVMF_INITIATOR_IP=10.0.0.1 00:23:26.061 11:53:56 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@150 -- # NVMF_BRIDGE=nvmf_br 00:23:26.061 11:53:56 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@151 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:23:26.061 11:53:56 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@152 -- # NVMF_INITIATOR_INTERFACE2=nvmf_init_if2 00:23:26.061 11:53:56 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@153 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:23:26.061 11:53:56 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@154 -- # NVMF_INITIATOR_BRIDGE2=nvmf_init_br2 00:23:26.061 11:53:56 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@155 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:23:26.061 11:53:56 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@156 -- # 
NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:23:26.061 11:53:56 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@157 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:23:26.061 11:53:56 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@158 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:23:26.061 11:53:56 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@159 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:23:26.061 11:53:56 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@160 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:23:26.061 11:53:56 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@162 -- # ip link set nvmf_init_br nomaster 00:23:26.061 Cannot find device "nvmf_init_br" 00:23:26.061 11:53:56 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@162 -- # true 00:23:26.061 11:53:56 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@163 -- # ip link set nvmf_init_br2 nomaster 00:23:26.061 Cannot find device "nvmf_init_br2" 00:23:26.061 11:53:56 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@163 -- # true 00:23:26.061 11:53:56 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@164 -- # ip link set nvmf_tgt_br nomaster 00:23:26.061 Cannot find device "nvmf_tgt_br" 00:23:26.061 11:53:56 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@164 -- # true 00:23:26.061 11:53:56 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@165 -- # ip link set nvmf_tgt_br2 nomaster 00:23:26.061 Cannot find device "nvmf_tgt_br2" 00:23:26.061 11:53:56 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@165 -- # true 00:23:26.061 11:53:56 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@166 -- # ip link set nvmf_init_br down 00:23:26.061 Cannot find device "nvmf_init_br" 00:23:26.061 11:53:56 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@166 -- # true 00:23:26.061 11:53:56 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@167 -- # ip link set nvmf_init_br2 down 00:23:26.061 Cannot find device "nvmf_init_br2" 00:23:26.061 11:53:56 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@167 -- # true 00:23:26.061 11:53:56 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@168 -- # ip link set nvmf_tgt_br down 00:23:26.061 Cannot find device "nvmf_tgt_br" 00:23:26.062 11:53:56 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@168 -- # true 00:23:26.062 11:53:56 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@169 -- # ip link set nvmf_tgt_br2 down 00:23:26.062 Cannot find device "nvmf_tgt_br2" 00:23:26.062 11:53:56 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@169 -- # true 00:23:26.062 11:53:56 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@170 -- # ip link delete nvmf_br type bridge 00:23:26.062 Cannot find device "nvmf_br" 00:23:26.062 11:53:56 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@170 -- # true 00:23:26.062 11:53:56 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@171 -- # ip link delete nvmf_init_if 00:23:26.062 Cannot find device "nvmf_init_if" 00:23:26.062 11:53:56 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@171 -- # true 00:23:26.062 11:53:56 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@172 -- # ip link delete nvmf_init_if2 00:23:26.062 Cannot find device "nvmf_init_if2" 00:23:26.062 11:53:56 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@172 -- # true 00:23:26.062 11:53:56 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@173 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 
00:23:26.062 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:23:26.062 11:53:56 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@173 -- # true 00:23:26.062 11:53:56 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@174 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:23:26.062 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:23:26.062 11:53:56 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@174 -- # true 00:23:26.062 11:53:56 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@177 -- # ip netns add nvmf_tgt_ns_spdk 00:23:26.062 11:53:56 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@180 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:23:26.062 11:53:56 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@181 -- # ip link add nvmf_init_if2 type veth peer name nvmf_init_br2 00:23:26.062 11:53:56 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@182 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:23:26.321 11:53:56 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@183 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:23:26.321 11:53:56 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:23:26.321 11:53:56 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@187 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:23:26.321 11:53:56 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@190 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:23:26.321 11:53:56 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@191 -- # ip addr add 10.0.0.2/24 dev nvmf_init_if2 00:23:26.321 11:53:56 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@192 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if 00:23:26.321 11:53:56 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@193 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2 00:23:26.321 11:53:56 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@196 -- # ip link set nvmf_init_if up 00:23:26.321 11:53:56 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@197 -- # ip link set nvmf_init_if2 up 00:23:26.321 11:53:56 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@198 -- # ip link set nvmf_init_br up 00:23:26.321 11:53:56 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@199 -- # ip link set nvmf_init_br2 up 00:23:26.321 11:53:56 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@200 -- # ip link set nvmf_tgt_br up 00:23:26.321 11:53:56 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@201 -- # ip link set nvmf_tgt_br2 up 00:23:26.321 11:53:56 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@202 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:23:26.321 11:53:56 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@203 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:23:26.321 11:53:56 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@204 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:23:26.321 11:53:56 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@207 -- # ip link add nvmf_br type bridge 00:23:26.321 11:53:56 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@208 -- # ip link set nvmf_br up 00:23:26.321 11:53:56 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@211 -- # ip link set nvmf_init_br master nvmf_br 
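For reference, the nvmf_veth_init steps traced above amount to the following standalone topology: one target-side network namespace, two veth pairs for the initiator and two for the target, the 10.0.0.1-10.0.0.4/24 addresses seen in the trace, and a bridge joining the *_br peer ends. This is a simplified sketch assembled from the commands in the trace, not the exact common.sh code; the remaining bridge memberships and the iptables ACCEPT rules follow in the next log entries.
# namespace for the SPDK target plus four veth pairs
ip netns add nvmf_tgt_ns_spdk
ip link add nvmf_init_if type veth peer name nvmf_init_br
ip link add nvmf_init_if2 type veth peer name nvmf_init_br2
ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br
ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2
ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk
ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk
# initiator addresses stay in the root namespace, target addresses live in the namespace
ip addr add 10.0.0.1/24 dev nvmf_init_if
ip addr add 10.0.0.2/24 dev nvmf_init_if2
ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if
ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2
# a single bridge ties the four *_br peer interfaces together
ip link add nvmf_br type bridge
ip link set nvmf_br up
for dev in nvmf_init_br nvmf_init_br2 nvmf_tgt_br nvmf_tgt_br2; do ip link set "$dev" master nvmf_br; done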
00:23:26.321 11:53:56 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@212 -- # ip link set nvmf_init_br2 master nvmf_br 00:23:26.322 11:53:56 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@213 -- # ip link set nvmf_tgt_br master nvmf_br 00:23:26.322 11:53:56 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@214 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:23:26.322 11:53:56 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@217 -- # ipts -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:23:26.322 11:53:56 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT' 00:23:26.322 11:53:56 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@218 -- # ipts -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT 00:23:26.322 11:53:56 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT' 00:23:26.322 11:53:56 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@219 -- # ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:23:26.322 11:53:56 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@790 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT' 00:23:26.322 11:53:56 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@222 -- # ping -c 1 10.0.0.3 00:23:26.322 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:23:26.322 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.098 ms 00:23:26.322 00:23:26.322 --- 10.0.0.3 ping statistics --- 00:23:26.322 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:23:26.322 rtt min/avg/max/mdev = 0.098/0.098/0.098/0.000 ms 00:23:26.322 11:53:56 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@223 -- # ping -c 1 10.0.0.4 00:23:26.322 PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data. 00:23:26.322 64 bytes from 10.0.0.4: icmp_seq=1 ttl=64 time=0.063 ms 00:23:26.322 00:23:26.322 --- 10.0.0.4 ping statistics --- 00:23:26.322 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:23:26.322 rtt min/avg/max/mdev = 0.063/0.063/0.063/0.000 ms 00:23:26.322 11:53:56 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@224 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:23:26.322 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:23:26.322 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.024 ms 00:23:26.322 00:23:26.322 --- 10.0.0.1 ping statistics --- 00:23:26.322 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:23:26.322 rtt min/avg/max/mdev = 0.024/0.024/0.024/0.000 ms 00:23:26.322 11:53:56 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@225 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.2 00:23:26.322 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:23:26.322 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.059 ms 00:23:26.322 00:23:26.322 --- 10.0.0.2 ping statistics --- 00:23:26.322 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:23:26.322 rtt min/avg/max/mdev = 0.059/0.059/0.059/0.000 ms 00:23:26.322 11:53:56 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@227 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:23:26.322 11:53:56 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@461 -- # return 0 00:23:26.322 11:53:56 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:23:26.322 11:53:56 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:23:26.322 11:53:56 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:23:26.322 11:53:56 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:23:26.322 11:53:56 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:23:26.322 11:53:56 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:23:26.322 11:53:56 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:23:26.322 11:53:56 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@32 -- # nvmfappstart -m 0x3 00:23:26.322 11:53:56 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:23:26.322 11:53:56 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@726 -- # xtrace_disable 00:23:26.322 11:53:56 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@10 -- # set +x 00:23:26.322 11:53:56 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@509 -- # nvmfpid=97734 00:23:26.322 11:53:56 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@508 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x3 00:23:26.322 11:53:56 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@510 -- # waitforlisten 97734 00:23:26.322 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:23:26.322 11:53:56 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@835 -- # '[' -z 97734 ']' 00:23:26.322 11:53:56 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:23:26.322 11:53:56 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@840 -- # local max_retries=100 00:23:26.322 11:53:56 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:23:26.322 11:53:56 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@844 -- # xtrace_disable 00:23:26.322 11:53:56 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@10 -- # set +x 00:23:26.581 [2024-11-28 11:53:56.494992] Starting SPDK v25.01-pre git sha1 35cd3e84d / DPDK 24.11.0-rc4 initialization... 00:23:26.581 [2024-11-28 11:53:56.495256] [ DPDK EAL parameters: nvmf -c 0x3 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:23:26.581 [2024-11-28 11:53:56.624195] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.11.0-rc4 is used. 
There is no support for it in SPDK. Enabled only for validation. 00:23:26.581 [2024-11-28 11:53:56.656189] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:23:26.581 [2024-11-28 11:53:56.695638] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:23:26.581 [2024-11-28 11:53:56.695974] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:23:26.581 [2024-11-28 11:53:56.696001] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:23:26.581 [2024-11-28 11:53:56.696013] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:23:26.581 [2024-11-28 11:53:56.696022] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:23:26.581 [2024-11-28 11:53:56.697321] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:23:26.581 [2024-11-28 11:53:56.697361] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:23:26.841 [2024-11-28 11:53:56.760859] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:23:26.841 11:53:56 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:23:26.841 11:53:56 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@868 -- # return 0 00:23:26.841 11:53:56 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:23:26.841 11:53:56 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@732 -- # xtrace_disable 00:23:26.841 11:53:56 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@10 -- # set +x 00:23:26.841 11:53:56 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:23:26.841 11:53:56 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@33 -- # nvmfapp_pid=97734 00:23:26.841 11:53:56 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@35 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:23:27.100 [2024-11-28 11:53:57.168652] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:23:27.100 11:53:57 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@36 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0 00:23:27.360 Malloc0 00:23:27.360 11:53:57 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@38 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -r -m 2 00:23:27.619 11:53:57 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@39 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:23:27.878 11:53:57 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@40 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 00:23:28.138 [2024-11-28 11:53:58.125426] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:23:28.138 11:53:58 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@41 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4421 00:23:28.397 [2024-11-28 11:53:58.345560] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** 
NVMe/TCP Target Listening on 10.0.0.3 port 4421 *** 00:23:28.397 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:23:28.397 11:53:58 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@44 -- # bdevperf_pid=97782 00:23:28.397 11:53:58 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@43 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 90 00:23:28.397 11:53:58 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@46 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:23:28.397 11:53:58 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@47 -- # waitforlisten 97782 /var/tmp/bdevperf.sock 00:23:28.397 11:53:58 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@835 -- # '[' -z 97782 ']' 00:23:28.397 11:53:58 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:23:28.397 11:53:58 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@840 -- # local max_retries=100 00:23:28.397 11:53:58 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:23:28.397 11:53:58 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@844 -- # xtrace_disable 00:23:28.397 11:53:58 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@10 -- # set +x 00:23:29.333 11:53:59 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:23:29.333 11:53:59 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@868 -- # return 0 00:23:29.333 11:53:59 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@51 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_set_options -r -1 00:23:29.593 11:53:59 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -x multipath -l -1 -o 10 00:23:29.853 Nvme0n1 00:23:29.853 11:53:59 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@55 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.3 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -x multipath -l -1 -o 10 00:23:30.112 Nvme0n1 00:23:30.112 11:54:00 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@78 -- # sleep 1 00:23:30.112 11:54:00 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@76 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -t 120 -s /var/tmp/bdevperf.sock perform_tests 00:23:31.518 11:54:01 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@81 -- # set_ANA_state non_optimized optimized 00:23:31.518 11:54:01 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@58 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 -n non_optimized 00:23:31.518 11:54:01 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4421 -n optimized 00:23:31.778 11:54:01 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@83 -- # 
confirm_io_on_port optimized 4421 00:23:31.778 11:54:01 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/bpftrace.sh 97734 /home/vagrant/spdk_repo/spdk/scripts/bpf/nvmf_path.bt 00:23:31.778 11:54:01 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@65 -- # dtrace_pid=97829 00:23:31.778 11:54:01 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@66 -- # sleep 6 00:23:38.363 11:54:07 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@67 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_get_listeners nqn.2016-06.io.spdk:cnode1 00:23:38.363 11:54:07 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@67 -- # jq -r '.[] | select (.ana_states[0].ana_state=="optimized") | .address.trsvcid' 00:23:38.363 11:54:07 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@67 -- # active_port=4421 00:23:38.363 11:54:07 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@68 -- # cat /home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt 00:23:38.363 Attaching 4 probes... 00:23:38.363 @path[10.0.0.3, 4421]: 17968 00:23:38.363 @path[10.0.0.3, 4421]: 18571 00:23:38.363 @path[10.0.0.3, 4421]: 18549 00:23:38.363 @path[10.0.0.3, 4421]: 18433 00:23:38.363 @path[10.0.0.3, 4421]: 18427 00:23:38.363 11:54:08 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@69 -- # sed -n 1p 00:23:38.363 11:54:08 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@69 -- # cut -d ']' -f1 00:23:38.363 11:54:08 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@69 -- # awk '$1=="@path[10.0.0.3," {print $2}' 00:23:38.363 11:54:08 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@69 -- # port=4421 00:23:38.363 11:54:08 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@70 -- # [[ 4421 == \4\4\2\1 ]] 00:23:38.363 11:54:08 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@71 -- # [[ 4421 == \4\4\2\1 ]] 00:23:38.363 11:54:08 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@72 -- # kill 97829 00:23:38.363 11:54:08 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@73 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt 00:23:38.363 11:54:08 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@86 -- # set_ANA_state non_optimized inaccessible 00:23:38.363 11:54:08 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@58 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 -n non_optimized 00:23:38.363 11:54:08 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4421 -n inaccessible 00:23:38.622 11:54:08 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@87 -- # confirm_io_on_port non_optimized 4420 00:23:38.622 11:54:08 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@65 -- # dtrace_pid=97941 00:23:38.622 11:54:08 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/bpftrace.sh 97734 /home/vagrant/spdk_repo/spdk/scripts/bpf/nvmf_path.bt 00:23:38.622 11:54:08 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@66 -- # sleep 6 00:23:45.190 11:54:14 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@67 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_get_listeners nqn.2016-06.io.spdk:cnode1 
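Each confirm_io_on_port cycle in this trace follows the same pattern: scripts/bpftrace.sh attaches scripts/bpf/nvmf_path.bt to the target pid (97734 here) and records @path[ip, port] samples of the bdevperf I/O into trace.txt, while rpc.py nvmf_subsystem_get_listeners reports which listener currently carries the expected ANA state. A rough standalone equivalent is sketched below; the jq filter and the awk/cut/sed steps are taken from the trace, though the exact pipeline order inside host/multipath.sh may differ slightly.
# expected port: the listener whose first ANA state matches what set_ANA_state just configured
active_port=$(scripts/rpc.py nvmf_subsystem_get_listeners nqn.2016-06.io.spdk:cnode1 \
  | jq -r '.[] | select (.ana_states[0].ana_state=="optimized") | .address.trsvcid')
# observed port: first @path[10.0.0.3, <port>] sample written by nvmf_path.bt into trace.txt
port=$(awk '$1=="@path[10.0.0.3," {print $2}' trace.txt | cut -d ']' -f1 | sed -n 1p)
# the test passes only when the traffic actually lands on the listener with the expected state
[[ $port == "$active_port" ]] || exit 1
The get_listeners/jq/awk output of the first such check against port 4420 continues in the next entries.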
00:23:45.190 11:54:14 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@67 -- # jq -r '.[] | select (.ana_states[0].ana_state=="non_optimized") | .address.trsvcid' 00:23:45.190 11:54:14 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@67 -- # active_port=4420 00:23:45.190 11:54:14 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@68 -- # cat /home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt 00:23:45.190 Attaching 4 probes... 00:23:45.190 @path[10.0.0.3, 4420]: 18210 00:23:45.190 @path[10.0.0.3, 4420]: 18449 00:23:45.190 @path[10.0.0.3, 4420]: 18470 00:23:45.190 @path[10.0.0.3, 4420]: 18521 00:23:45.190 @path[10.0.0.3, 4420]: 18540 00:23:45.190 11:54:14 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@69 -- # awk '$1=="@path[10.0.0.3," {print $2}' 00:23:45.190 11:54:14 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@69 -- # cut -d ']' -f1 00:23:45.190 11:54:14 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@69 -- # sed -n 1p 00:23:45.190 11:54:14 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@69 -- # port=4420 00:23:45.190 11:54:14 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@70 -- # [[ 4420 == \4\4\2\0 ]] 00:23:45.190 11:54:14 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@71 -- # [[ 4420 == \4\4\2\0 ]] 00:23:45.190 11:54:14 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@72 -- # kill 97941 00:23:45.190 11:54:14 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@73 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt 00:23:45.190 11:54:14 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@89 -- # set_ANA_state inaccessible optimized 00:23:45.190 11:54:14 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@58 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 -n inaccessible 00:23:45.190 11:54:15 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4421 -n optimized 00:23:45.449 11:54:15 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@90 -- # confirm_io_on_port optimized 4421 00:23:45.449 11:54:15 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@65 -- # dtrace_pid=98058 00:23:45.449 11:54:15 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@66 -- # sleep 6 00:23:45.449 11:54:15 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/bpftrace.sh 97734 /home/vagrant/spdk_repo/spdk/scripts/bpf/nvmf_path.bt 00:23:52.018 11:54:21 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@67 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_get_listeners nqn.2016-06.io.spdk:cnode1 00:23:52.018 11:54:21 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@67 -- # jq -r '.[] | select (.ana_states[0].ana_state=="optimized") | .address.trsvcid' 00:23:52.018 11:54:21 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@67 -- # active_port=4421 00:23:52.018 11:54:21 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@68 -- # cat /home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt 00:23:52.018 Attaching 4 probes... 
00:23:52.018 @path[10.0.0.3, 4421]: 14776 00:23:52.018 @path[10.0.0.3, 4421]: 18307 00:23:52.018 @path[10.0.0.3, 4421]: 18469 00:23:52.018 @path[10.0.0.3, 4421]: 18310 00:23:52.018 @path[10.0.0.3, 4421]: 18327 00:23:52.018 11:54:21 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@69 -- # cut -d ']' -f1 00:23:52.018 11:54:21 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@69 -- # awk '$1=="@path[10.0.0.3," {print $2}' 00:23:52.018 11:54:21 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@69 -- # sed -n 1p 00:23:52.018 11:54:21 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@69 -- # port=4421 00:23:52.018 11:54:21 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@70 -- # [[ 4421 == \4\4\2\1 ]] 00:23:52.018 11:54:21 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@71 -- # [[ 4421 == \4\4\2\1 ]] 00:23:52.018 11:54:21 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@72 -- # kill 98058 00:23:52.018 11:54:21 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@73 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt 00:23:52.018 11:54:21 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@93 -- # set_ANA_state inaccessible inaccessible 00:23:52.018 11:54:21 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@58 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 -n inaccessible 00:23:52.018 11:54:21 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4421 -n inaccessible 00:23:52.277 11:54:22 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@94 -- # confirm_io_on_port '' '' 00:23:52.277 11:54:22 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@65 -- # dtrace_pid=98173 00:23:52.277 11:54:22 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@66 -- # sleep 6 00:23:52.277 11:54:22 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/bpftrace.sh 97734 /home/vagrant/spdk_repo/spdk/scripts/bpf/nvmf_path.bt 00:23:58.839 11:54:28 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@67 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_get_listeners nqn.2016-06.io.spdk:cnode1 00:23:58.839 11:54:28 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@67 -- # jq -r '.[] | select (.ana_states[0].ana_state=="") | .address.trsvcid' 00:23:58.839 11:54:28 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@67 -- # active_port= 00:23:58.840 11:54:28 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@68 -- # cat /home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt 00:23:58.840 Attaching 4 probes... 
00:23:58.840 00:23:58.840 00:23:58.840 00:23:58.840 00:23:58.840 00:23:58.840 11:54:28 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@69 -- # sed -n 1p 00:23:58.840 11:54:28 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@69 -- # cut -d ']' -f1 00:23:58.840 11:54:28 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@69 -- # awk '$1=="@path[10.0.0.3," {print $2}' 00:23:58.840 11:54:28 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@69 -- # port= 00:23:58.840 11:54:28 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@70 -- # [[ '' == '' ]] 00:23:58.840 11:54:28 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@71 -- # [[ '' == '' ]] 00:23:58.840 11:54:28 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@72 -- # kill 98173 00:23:58.840 11:54:28 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@73 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt 00:23:58.840 11:54:28 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@96 -- # set_ANA_state non_optimized optimized 00:23:58.840 11:54:28 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@58 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 -n non_optimized 00:23:58.840 11:54:28 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4421 -n optimized 00:23:59.099 11:54:29 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@97 -- # confirm_io_on_port optimized 4421 00:23:59.099 11:54:29 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/bpftrace.sh 97734 /home/vagrant/spdk_repo/spdk/scripts/bpf/nvmf_path.bt 00:23:59.099 11:54:29 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@65 -- # dtrace_pid=98284 00:23:59.099 11:54:29 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@66 -- # sleep 6 00:24:05.668 11:54:35 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@67 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_get_listeners nqn.2016-06.io.spdk:cnode1 00:24:05.668 11:54:35 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@67 -- # jq -r '.[] | select (.ana_states[0].ana_state=="optimized") | .address.trsvcid' 00:24:05.668 11:54:35 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@67 -- # active_port=4421 00:24:05.668 11:54:35 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@68 -- # cat /home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt 00:24:05.668 Attaching 4 probes... 
00:24:05.668 @path[10.0.0.3, 4421]: 17975 00:24:05.668 @path[10.0.0.3, 4421]: 18217 00:24:05.668 @path[10.0.0.3, 4421]: 18566 00:24:05.668 @path[10.0.0.3, 4421]: 18434 00:24:05.668 @path[10.0.0.3, 4421]: 18467 00:24:05.668 11:54:35 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@69 -- # cut -d ']' -f1 00:24:05.668 11:54:35 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@69 -- # awk '$1=="@path[10.0.0.3," {print $2}' 00:24:05.668 11:54:35 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@69 -- # sed -n 1p 00:24:05.668 11:54:35 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@69 -- # port=4421 00:24:05.668 11:54:35 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@70 -- # [[ 4421 == \4\4\2\1 ]] 00:24:05.668 11:54:35 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@71 -- # [[ 4421 == \4\4\2\1 ]] 00:24:05.668 11:54:35 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@72 -- # kill 98284 00:24:05.668 11:54:35 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@73 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt 00:24:05.668 11:54:35 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@100 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4421 00:24:05.668 11:54:35 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@101 -- # sleep 1 00:24:06.605 11:54:36 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@104 -- # confirm_io_on_port non_optimized 4420 00:24:06.605 11:54:36 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@65 -- # dtrace_pid=98409 00:24:06.605 11:54:36 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/bpftrace.sh 97734 /home/vagrant/spdk_repo/spdk/scripts/bpf/nvmf_path.bt 00:24:06.605 11:54:36 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@66 -- # sleep 6 00:24:13.171 11:54:42 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@67 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_get_listeners nqn.2016-06.io.spdk:cnode1 00:24:13.171 11:54:42 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@67 -- # jq -r '.[] | select (.ana_states[0].ana_state=="non_optimized") | .address.trsvcid' 00:24:13.171 11:54:42 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@67 -- # active_port=4420 00:24:13.171 11:54:42 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@68 -- # cat /home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt 00:24:13.171 Attaching 4 probes... 
00:24:13.171 @path[10.0.0.3, 4420]: 17318 00:24:13.171 @path[10.0.0.3, 4420]: 17709 00:24:13.171 @path[10.0.0.3, 4420]: 17696 00:24:13.171 @path[10.0.0.3, 4420]: 17739 00:24:13.171 @path[10.0.0.3, 4420]: 17599 00:24:13.171 11:54:42 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@69 -- # cut -d ']' -f1 00:24:13.171 11:54:42 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@69 -- # sed -n 1p 00:24:13.171 11:54:42 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@69 -- # awk '$1=="@path[10.0.0.3," {print $2}' 00:24:13.171 11:54:42 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@69 -- # port=4420 00:24:13.171 11:54:42 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@70 -- # [[ 4420 == \4\4\2\0 ]] 00:24:13.171 11:54:42 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@71 -- # [[ 4420 == \4\4\2\0 ]] 00:24:13.171 11:54:42 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@72 -- # kill 98409 00:24:13.171 11:54:42 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@73 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt 00:24:13.171 11:54:42 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@107 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4421 00:24:13.171 [2024-11-28 11:54:43.204074] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4421 *** 00:24:13.171 11:54:43 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@108 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4421 -n optimized 00:24:13.430 11:54:43 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@111 -- # sleep 6 00:24:20.002 11:54:49 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@112 -- # confirm_io_on_port optimized 4421 00:24:20.003 11:54:49 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@65 -- # dtrace_pid=98578 00:24:20.003 11:54:49 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@66 -- # sleep 6 00:24:20.003 11:54:49 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/bpftrace.sh 97734 /home/vagrant/spdk_repo/spdk/scripts/bpf/nvmf_path.bt 00:24:26.586 11:54:55 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@67 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_get_listeners nqn.2016-06.io.spdk:cnode1 00:24:26.586 11:54:55 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@67 -- # jq -r '.[] | select (.ana_states[0].ana_state=="optimized") | .address.trsvcid' 00:24:26.586 11:54:55 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@67 -- # active_port=4421 00:24:26.586 11:54:55 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@68 -- # cat /home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt 00:24:26.586 Attaching 4 probes... 
00:24:26.586 @path[10.0.0.3, 4421]: 18098 00:24:26.586 @path[10.0.0.3, 4421]: 18320 00:24:26.586 @path[10.0.0.3, 4421]: 18336 00:24:26.586 @path[10.0.0.3, 4421]: 18432 00:24:26.586 @path[10.0.0.3, 4421]: 18408 00:24:26.586 11:54:55 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@69 -- # awk '$1=="@path[10.0.0.3," {print $2}' 00:24:26.586 11:54:55 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@69 -- # cut -d ']' -f1 00:24:26.586 11:54:55 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@69 -- # sed -n 1p 00:24:26.586 11:54:55 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@69 -- # port=4421 00:24:26.586 11:54:55 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@70 -- # [[ 4421 == \4\4\2\1 ]] 00:24:26.586 11:54:55 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@71 -- # [[ 4421 == \4\4\2\1 ]] 00:24:26.586 11:54:55 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@72 -- # kill 98578 00:24:26.586 11:54:55 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@73 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt 00:24:26.586 11:54:55 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@114 -- # killprocess 97782 00:24:26.586 11:54:55 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@954 -- # '[' -z 97782 ']' 00:24:26.586 11:54:55 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@958 -- # kill -0 97782 00:24:26.586 11:54:55 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@959 -- # uname 00:24:26.586 11:54:55 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:24:26.586 11:54:55 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 97782 00:24:26.586 killing process with pid 97782 00:24:26.586 11:54:55 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@960 -- # process_name=reactor_2 00:24:26.586 11:54:55 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@964 -- # '[' reactor_2 = sudo ']' 00:24:26.586 11:54:55 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@972 -- # echo 'killing process with pid 97782' 00:24:26.586 11:54:55 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@973 -- # kill 97782 00:24:26.586 11:54:55 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@978 -- # wait 97782 00:24:26.586 { 00:24:26.586 "results": [ 00:24:26.586 { 00:24:26.586 "job": "Nvme0n1", 00:24:26.586 "core_mask": "0x4", 00:24:26.586 "workload": "verify", 00:24:26.586 "status": "terminated", 00:24:26.586 "verify_range": { 00:24:26.586 "start": 0, 00:24:26.586 "length": 16384 00:24:26.586 }, 00:24:26.586 "queue_depth": 128, 00:24:26.586 "io_size": 4096, 00:24:26.586 "runtime": 55.490961, 00:24:26.586 "iops": 7784.853464693106, 00:24:26.586 "mibps": 30.409583846457444, 00:24:26.586 "io_failed": 0, 00:24:26.586 "io_timeout": 0, 00:24:26.586 "avg_latency_us": 16409.020025636175, 00:24:26.586 "min_latency_us": 1087.3018181818181, 00:24:26.586 "max_latency_us": 7046430.72 00:24:26.586 } 00:24:26.586 ], 00:24:26.586 "core_count": 1 00:24:26.586 } 00:24:26.586 11:54:56 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@116 -- # wait 97782 00:24:26.586 11:54:56 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@118 -- # cat /home/vagrant/spdk_repo/spdk/test/nvmf/host/try.txt 00:24:26.586 [2024-11-28 11:53:58.423783] Starting SPDK v25.01-pre git sha1 35cd3e84d / DPDK 
24.11.0-rc4 initialization... 00:24:26.586 [2024-11-28 11:53:58.424393] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid97782 ] 00:24:26.586 [2024-11-28 11:53:58.550999] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.11.0-rc4 is used. There is no support for it in SPDK. Enabled only for validation. 00:24:26.586 [2024-11-28 11:53:58.581850] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:24:26.586 [2024-11-28 11:53:58.622817] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:24:26.586 [2024-11-28 11:53:58.681669] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:24:26.586 Running I/O for 90 seconds... 00:24:26.586 9255.00 IOPS, 36.15 MiB/s [2024-11-28T11:54:56.712Z] 9203.00 IOPS, 35.95 MiB/s [2024-11-28T11:54:56.712Z] 9201.67 IOPS, 35.94 MiB/s [2024-11-28T11:54:56.712Z] 9219.50 IOPS, 36.01 MiB/s [2024-11-28T11:54:56.712Z] 9234.80 IOPS, 36.07 MiB/s [2024-11-28T11:54:56.712Z] 9231.67 IOPS, 36.06 MiB/s [2024-11-28T11:54:56.712Z] 9229.43 IOPS, 36.05 MiB/s [2024-11-28T11:54:56.712Z] 9217.75 IOPS, 36.01 MiB/s [2024-11-28T11:54:56.712Z] [2024-11-28 11:54:08.589194] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:85808 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:26.586 [2024-11-28 11:54:08.589668] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:20 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:24:26.586 [2024-11-28 11:54:08.589812] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:85816 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:26.586 [2024-11-28 11:54:08.589908] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:35 cdw0:0 sqhd:0076 p:0 m:0 dnr:0 00:24:26.586 [2024-11-28 11:54:08.589990] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:85824 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:26.586 [2024-11-28 11:54:08.590063] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:43 cdw0:0 sqhd:0077 p:0 m:0 dnr:0 00:24:26.586 [2024-11-28 11:54:08.590149] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:85832 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:26.586 [2024-11-28 11:54:08.590220] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:42 cdw0:0 sqhd:0078 p:0 m:0 dnr:0 00:24:26.586 [2024-11-28 11:54:08.590304] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:85840 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:26.586 [2024-11-28 11:54:08.590407] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:9 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 00:24:26.586 [2024-11-28 11:54:08.590526] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:71 nsid:1 lba:85848 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:26.586 [2024-11-28 11:54:08.590626] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:71 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:24:26.586 [2024-11-28 11:54:08.590721] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:76 nsid:1 lba:85856 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:26.586 [2024-11-28 11:54:08.590800] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:76 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:24:26.586 [2024-11-28 11:54:08.590891] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:85864 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:26.586 [2024-11-28 11:54:08.590968] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:32 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:24:26.586 [2024-11-28 11:54:08.591044] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:85872 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:26.586 [2024-11-28 11:54:08.591115] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:13 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:24:26.586 [2024-11-28 11:54:08.591215] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:112 nsid:1 lba:85880 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:26.586 [2024-11-28 11:54:08.591290] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:112 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:24:26.586 [2024-11-28 11:54:08.591391] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:85888 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:26.587 [2024-11-28 11:54:08.591476] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:28 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:24:26.587 [2024-11-28 11:54:08.591557] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:82 nsid:1 lba:85896 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:26.587 [2024-11-28 11:54:08.591631] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:82 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:26.587 [2024-11-28 11:54:08.591705] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:85904 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:26.587 [2024-11-28 11:54:08.591772] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:125 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:26.587 [2024-11-28 11:54:08.591846] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:99 nsid:1 lba:85912 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:26.587 [2024-11-28 11:54:08.591915] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:99 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:24:26.587 [2024-11-28 11:54:08.591993] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:98 nsid:1 lba:85920 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:26.587 [2024-11-28 11:54:08.592014] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:98 cdw0:0 sqhd:0003 p:0 m:0 dnr:0 00:24:26.587 [2024-11-28 11:54:08.592034] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:120 nsid:1 lba:85928 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:26.587 [2024-11-28 11:54:08.592047] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:120 cdw0:0 sqhd:0004 p:0 m:0 dnr:0 
00:24:26.587 [2024-11-28 11:54:08.592065] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:70 nsid:1 lba:85424 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:26.587 [2024-11-28 11:54:08.592077] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:70 cdw0:0 sqhd:0005 p:0 m:0 dnr:0 00:24:26.587 [2024-11-28 11:54:08.592095] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:80 nsid:1 lba:85432 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:26.587 [2024-11-28 11:54:08.592107] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:80 cdw0:0 sqhd:0006 p:0 m:0 dnr:0 00:24:26.587 [2024-11-28 11:54:08.592124] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:115 nsid:1 lba:85440 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:26.587 [2024-11-28 11:54:08.592136] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:115 cdw0:0 sqhd:0007 p:0 m:0 dnr:0 00:24:26.587 [2024-11-28 11:54:08.592154] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:107 nsid:1 lba:85448 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:26.587 [2024-11-28 11:54:08.592166] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:107 cdw0:0 sqhd:0008 p:0 m:0 dnr:0 00:24:26.587 [2024-11-28 11:54:08.592183] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:85456 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:26.587 [2024-11-28 11:54:08.592195] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:58 cdw0:0 sqhd:0009 p:0 m:0 dnr:0 00:24:26.587 [2024-11-28 11:54:08.592228] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:85464 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:26.587 [2024-11-28 11:54:08.592242] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:59 cdw0:0 sqhd:000a p:0 m:0 dnr:0 00:24:26.587 [2024-11-28 11:54:08.592259] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:83 nsid:1 lba:85472 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:26.587 [2024-11-28 11:54:08.592271] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:83 cdw0:0 sqhd:000b p:0 m:0 dnr:0 00:24:26.587 [2024-11-28 11:54:08.592289] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:84 nsid:1 lba:85480 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:26.587 [2024-11-28 11:54:08.593081] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:84 cdw0:0 sqhd:000c p:0 m:0 dnr:0 00:24:26.587 [2024-11-28 11:54:08.593160] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:64 nsid:1 lba:85488 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:26.587 [2024-11-28 11:54:08.593236] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:64 cdw0:0 sqhd:000d p:0 m:0 dnr:0 00:24:26.587 [2024-11-28 11:54:08.593309] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:85496 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:26.587 [2024-11-28 11:54:08.593380] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS 
INACCESSIBLE (03/02) qid:1 cid:3 cdw0:0 sqhd:000e p:0 m:0 dnr:0 00:24:26.587 [2024-11-28 11:54:08.593485] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:114 nsid:1 lba:85504 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:26.587 [2024-11-28 11:54:08.593558] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:114 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:24:26.587 [2024-11-28 11:54:08.593637] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:105 nsid:1 lba:85512 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:26.587 [2024-11-28 11:54:08.593722] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:105 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:24:26.587 [2024-11-28 11:54:08.593751] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:118 nsid:1 lba:85520 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:26.587 [2024-11-28 11:54:08.593766] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:118 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:24:26.587 [2024-11-28 11:54:08.593783] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:65 nsid:1 lba:85528 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:26.587 [2024-11-28 11:54:08.593796] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:65 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:24:26.587 [2024-11-28 11:54:08.593813] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:91 nsid:1 lba:85536 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:26.587 [2024-11-28 11:54:08.593825] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:91 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:24:26.587 [2024-11-28 11:54:08.593842] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:90 nsid:1 lba:85544 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:26.587 [2024-11-28 11:54:08.593854] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:90 cdw0:0 sqhd:0014 p:0 m:0 dnr:0 00:24:26.587 [2024-11-28 11:54:08.594048] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:104 nsid:1 lba:85936 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:26.587 [2024-11-28 11:54:08.594070] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:104 cdw0:0 sqhd:0015 p:0 m:0 dnr:0 00:24:26.587 [2024-11-28 11:54:08.594089] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:119 nsid:1 lba:85944 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:26.587 [2024-11-28 11:54:08.594113] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:119 cdw0:0 sqhd:0016 p:0 m:0 dnr:0 00:24:26.587 [2024-11-28 11:54:08.594132] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:85952 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:26.587 [2024-11-28 11:54:08.594145] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:21 cdw0:0 sqhd:0017 p:0 m:0 dnr:0 00:24:26.587 [2024-11-28 11:54:08.594162] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:110 nsid:1 lba:85960 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:26.587 [2024-11-28 11:54:08.594173] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:110 cdw0:0 sqhd:0018 p:0 m:0 dnr:0 00:24:26.587 [2024-11-28 11:54:08.594190] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:85968 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:26.587 [2024-11-28 11:54:08.594202] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:57 cdw0:0 sqhd:0019 p:0 m:0 dnr:0 00:24:26.587 [2024-11-28 11:54:08.594219] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:85976 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:26.587 [2024-11-28 11:54:08.594231] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:37 cdw0:0 sqhd:001a p:0 m:0 dnr:0 00:24:26.587 [2024-11-28 11:54:08.594248] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:85984 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:26.587 [2024-11-28 11:54:08.594260] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:27 cdw0:0 sqhd:001b p:0 m:0 dnr:0 00:24:26.587 [2024-11-28 11:54:08.594277] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:85992 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:26.587 [2024-11-28 11:54:08.594289] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:56 cdw0:0 sqhd:001c p:0 m:0 dnr:0 00:24:26.587 [2024-11-28 11:54:08.594322] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:85552 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:26.587 [2024-11-28 11:54:08.594336] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:8 cdw0:0 sqhd:001d p:0 m:0 dnr:0 00:24:26.587 [2024-11-28 11:54:08.594353] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:85560 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:26.587 [2024-11-28 11:54:08.594365] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:1 cdw0:0 sqhd:001e p:0 m:0 dnr:0 00:24:26.587 [2024-11-28 11:54:08.594383] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:106 nsid:1 lba:85568 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:26.587 [2024-11-28 11:54:08.594395] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:106 cdw0:0 sqhd:001f p:0 m:0 dnr:0 00:24:26.587 [2024-11-28 11:54:08.594412] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:85576 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:26.587 [2024-11-28 11:54:08.594424] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:53 cdw0:0 sqhd:0020 p:0 m:0 dnr:0 00:24:26.587 [2024-11-28 11:54:08.594443] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:85 nsid:1 lba:85584 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:26.587 [2024-11-28 11:54:08.594455] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:85 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:24:26.587 [2024-11-28 11:54:08.594473] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:85592 len:8 SGL TRANSPORT DATA BLOCK 
TRANSPORT 0x0 00:24:26.587 [2024-11-28 11:54:08.594527] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:6 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:24:26.587 [2024-11-28 11:54:08.594552] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:67 nsid:1 lba:85600 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:26.587 [2024-11-28 11:54:08.594566] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:67 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:24:26.587 [2024-11-28 11:54:08.594585] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:85608 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:26.587 [2024-11-28 11:54:08.594599] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:48 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:24:26.587 [2024-11-28 11:54:08.594618] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:85616 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:26.587 [2024-11-28 11:54:08.594631] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:14 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:24:26.587 [2024-11-28 11:54:08.594650] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:92 nsid:1 lba:85624 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:26.588 [2024-11-28 11:54:08.594663] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:92 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:24:26.588 [2024-11-28 11:54:08.594682] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:93 nsid:1 lba:85632 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:26.588 [2024-11-28 11:54:08.594696] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:93 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:24:26.588 [2024-11-28 11:54:08.594715] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:86 nsid:1 lba:85640 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:26.588 [2024-11-28 11:54:08.594728] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:86 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:24:26.588 [2024-11-28 11:54:08.594747] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:88 nsid:1 lba:85648 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:26.588 [2024-11-28 11:54:08.594760] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:88 cdw0:0 sqhd:0029 p:0 m:0 dnr:0 00:24:26.588 [2024-11-28 11:54:08.594779] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:116 nsid:1 lba:85656 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:26.588 [2024-11-28 11:54:08.594792] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:116 cdw0:0 sqhd:002a p:0 m:0 dnr:0 00:24:26.588 [2024-11-28 11:54:08.594811] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:108 nsid:1 lba:85664 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:26.588 [2024-11-28 11:54:08.594824] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:108 cdw0:0 sqhd:002b p:0 m:0 dnr:0 00:24:26.588 [2024-11-28 11:54:08.594873] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ 
sqid:1 cid:79 nsid:1 lba:85672 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:26.588 [2024-11-28 11:54:08.594886] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:79 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:24:26.588 [2024-11-28 11:54:08.594919] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:121 nsid:1 lba:86000 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:26.588 [2024-11-28 11:54:08.594931] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:121 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:24:26.588 [2024-11-28 11:54:08.594948] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:86008 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:26.588 [2024-11-28 11:54:08.594960] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:63 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:24:26.588 [2024-11-28 11:54:08.594984] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:86016 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:26.588 [2024-11-28 11:54:08.594996] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:7 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:24:26.588 [2024-11-28 11:54:08.595014] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:86024 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:26.588 [2024-11-28 11:54:08.595026] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:47 cdw0:0 sqhd:0030 p:0 m:0 dnr:0 00:24:26.588 [2024-11-28 11:54:08.595044] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:86032 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:26.588 [2024-11-28 11:54:08.595057] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:10 cdw0:0 sqhd:0031 p:0 m:0 dnr:0 00:24:26.588 [2024-11-28 11:54:08.595074] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:86040 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:26.588 [2024-11-28 11:54:08.595086] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:45 cdw0:0 sqhd:0032 p:0 m:0 dnr:0 00:24:26.588 [2024-11-28 11:54:08.595104] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:113 nsid:1 lba:86048 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:26.588 [2024-11-28 11:54:08.595116] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:113 cdw0:0 sqhd:0033 p:0 m:0 dnr:0 00:24:26.588 [2024-11-28 11:54:08.595134] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:86056 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:26.588 [2024-11-28 11:54:08.595146] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:34 cdw0:0 sqhd:0034 p:0 m:0 dnr:0 00:24:26.588 [2024-11-28 11:54:08.595163] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:117 nsid:1 lba:86064 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:26.588 [2024-11-28 11:54:08.595175] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:117 cdw0:0 sqhd:0035 p:0 m:0 dnr:0 00:24:26.588 [2024-11-28 11:54:08.595192] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:86072 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:26.588 [2024-11-28 11:54:08.595204] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:29 cdw0:0 sqhd:0036 p:0 m:0 dnr:0 00:24:26.588 [2024-11-28 11:54:08.595221] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:69 nsid:1 lba:86080 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:26.588 [2024-11-28 11:54:08.595233] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:69 cdw0:0 sqhd:0037 p:0 m:0 dnr:0 00:24:26.588 [2024-11-28 11:54:08.595250] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:87 nsid:1 lba:86088 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:26.588 [2024-11-28 11:54:08.595262] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:87 cdw0:0 sqhd:0038 p:0 m:0 dnr:0 00:24:26.588 [2024-11-28 11:54:08.595279] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:100 nsid:1 lba:86096 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:26.588 [2024-11-28 11:54:08.595292] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:100 cdw0:0 sqhd:0039 p:0 m:0 dnr:0 00:24:26.588 [2024-11-28 11:54:08.595309] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:86104 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:26.588 [2024-11-28 11:54:08.595321] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:23 cdw0:0 sqhd:003a p:0 m:0 dnr:0 00:24:26.588 [2024-11-28 11:54:08.595343] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:86112 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:26.588 [2024-11-28 11:54:08.595356] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:46 cdw0:0 sqhd:003b p:0 m:0 dnr:0 00:24:26.588 [2024-11-28 11:54:08.595383] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:86120 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:26.588 [2024-11-28 11:54:08.595399] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:41 cdw0:0 sqhd:003c p:0 m:0 dnr:0 00:24:26.588 [2024-11-28 11:54:08.595416] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:96 nsid:1 lba:85680 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:26.588 [2024-11-28 11:54:08.595428] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:96 cdw0:0 sqhd:003d p:0 m:0 dnr:0 00:24:26.588 [2024-11-28 11:54:08.595445] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:95 nsid:1 lba:85688 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:26.588 [2024-11-28 11:54:08.595457] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:95 cdw0:0 sqhd:003e p:0 m:0 dnr:0 00:24:26.588 [2024-11-28 11:54:08.595474] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:109 nsid:1 lba:85696 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:26.588 [2024-11-28 11:54:08.595486] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:109 cdw0:0 sqhd:003f p:0 
m:0 dnr:0 00:24:26.588 [2024-11-28 11:54:08.595505] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:97 nsid:1 lba:85704 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:26.588 [2024-11-28 11:54:08.595518] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:97 cdw0:0 sqhd:0040 p:0 m:0 dnr:0 00:24:26.588 [2024-11-28 11:54:08.595536] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:126 nsid:1 lba:85712 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:26.588 [2024-11-28 11:54:08.595548] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:126 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:24:26.588 [2024-11-28 11:54:08.595566] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:85720 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:26.588 [2024-11-28 11:54:08.595577] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:49 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:24:26.588 [2024-11-28 11:54:08.595594] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:66 nsid:1 lba:85728 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:26.588 [2024-11-28 11:54:08.595606] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:66 cdw0:0 sqhd:0043 p:0 m:0 dnr:0 00:24:26.588 [2024-11-28 11:54:08.595623] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:85736 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:26.588 [2024-11-28 11:54:08.595635] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:12 cdw0:0 sqhd:0044 p:0 m:0 dnr:0 00:24:26.588 [2024-11-28 11:54:08.595652] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:86128 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:26.588 [2024-11-28 11:54:08.595664] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:11 cdw0:0 sqhd:0045 p:0 m:0 dnr:0 00:24:26.588 [2024-11-28 11:54:08.595681] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:86136 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:26.588 [2024-11-28 11:54:08.595693] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:16 cdw0:0 sqhd:0046 p:0 m:0 dnr:0 00:24:26.588 [2024-11-28 11:54:08.595710] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:74 nsid:1 lba:86144 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:26.588 [2024-11-28 11:54:08.595729] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:74 cdw0:0 sqhd:0047 p:0 m:0 dnr:0 00:24:26.588 [2024-11-28 11:54:08.595748] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:102 nsid:1 lba:86152 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:26.588 [2024-11-28 11:54:08.595760] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:102 cdw0:0 sqhd:0048 p:0 m:0 dnr:0 00:24:26.588 [2024-11-28 11:54:08.595777] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:86160 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:26.588 [2024-11-28 11:54:08.595789] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS 
INACCESSIBLE (03/02) qid:1 cid:5 cdw0:0 sqhd:0049 p:0 m:0 dnr:0 00:24:26.588 [2024-11-28 11:54:08.595806] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:86168 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:26.588 [2024-11-28 11:54:08.595818] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:4 cdw0:0 sqhd:004a p:0 m:0 dnr:0 00:24:26.588 [2024-11-28 11:54:08.595835] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:86176 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:26.588 [2024-11-28 11:54:08.595847] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:38 cdw0:0 sqhd:004b p:0 m:0 dnr:0 00:24:26.588 [2024-11-28 11:54:08.595865] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:77 nsid:1 lba:86184 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:26.588 [2024-11-28 11:54:08.595877] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:77 cdw0:0 sqhd:004c p:0 m:0 dnr:0 00:24:26.589 [2024-11-28 11:54:08.595894] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:86192 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:26.589 [2024-11-28 11:54:08.595906] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:25 cdw0:0 sqhd:004d p:0 m:0 dnr:0 00:24:26.589 [2024-11-28 11:54:08.595923] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:78 nsid:1 lba:86200 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:26.589 [2024-11-28 11:54:08.595936] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:78 cdw0:0 sqhd:004e p:0 m:0 dnr:0 00:24:26.589 [2024-11-28 11:54:08.595952] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:86208 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:26.589 [2024-11-28 11:54:08.595964] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:40 cdw0:0 sqhd:004f p:0 m:0 dnr:0 00:24:26.589 [2024-11-28 11:54:08.595982] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:86216 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:26.589 [2024-11-28 11:54:08.595994] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:0 cdw0:0 sqhd:0050 p:0 m:0 dnr:0 00:24:26.589 [2024-11-28 11:54:08.596012] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:86224 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:26.589 [2024-11-28 11:54:08.596023] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:2 cdw0:0 sqhd:0051 p:0 m:0 dnr:0 00:24:26.589 [2024-11-28 11:54:08.596040] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:86232 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:26.589 [2024-11-28 11:54:08.596052] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:55 cdw0:0 sqhd:0052 p:0 m:0 dnr:0 00:24:26.589 [2024-11-28 11:54:08.596069] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:111 nsid:1 lba:86240 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:26.589 [2024-11-28 11:54:08.596087] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:111 cdw0:0 sqhd:0053 p:0 m:0 dnr:0 00:24:26.589 [2024-11-28 11:54:08.596105] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:86248 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:26.589 [2024-11-28 11:54:08.596118] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:26 cdw0:0 sqhd:0054 p:0 m:0 dnr:0 00:24:26.589 [2024-11-28 11:54:08.596135] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:85744 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:26.589 [2024-11-28 11:54:08.596147] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:33 cdw0:0 sqhd:0055 p:0 m:0 dnr:0 00:24:26.589 [2024-11-28 11:54:08.596164] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:85752 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:26.589 [2024-11-28 11:54:08.596176] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:24 cdw0:0 sqhd:0056 p:0 m:0 dnr:0 00:24:26.589 [2024-11-28 11:54:08.596193] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:85760 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:26.589 [2024-11-28 11:54:08.596206] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:51 cdw0:0 sqhd:0057 p:0 m:0 dnr:0 00:24:26.589 [2024-11-28 11:54:08.596223] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:85768 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:26.589 [2024-11-28 11:54:08.596235] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:44 cdw0:0 sqhd:0058 p:0 m:0 dnr:0 00:24:26.589 [2024-11-28 11:54:08.596252] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:85776 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:26.589 [2024-11-28 11:54:08.596264] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:50 cdw0:0 sqhd:0059 p:0 m:0 dnr:0 00:24:26.589 [2024-11-28 11:54:08.596281] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:73 nsid:1 lba:85784 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:26.589 [2024-11-28 11:54:08.596325] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:73 cdw0:0 sqhd:005a p:0 m:0 dnr:0 00:24:26.589 [2024-11-28 11:54:08.596365] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:72 nsid:1 lba:85792 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:26.589 [2024-11-28 11:54:08.596379] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:72 cdw0:0 sqhd:005b p:0 m:0 dnr:0 00:24:26.589 [2024-11-28 11:54:08.597709] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:72 nsid:1 lba:85800 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:26.589 [2024-11-28 11:54:08.597739] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:72 cdw0:0 sqhd:005c p:0 m:0 dnr:0 00:24:26.589 [2024-11-28 11:54:08.597764] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:73 nsid:1 lba:86256 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 
00:24:26.589 [2024-11-28 11:54:08.597780] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:73 cdw0:0 sqhd:005d p:0 m:0 dnr:0 00:24:26.589 [2024-11-28 11:54:08.597799] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:86264 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:26.589 [2024-11-28 11:54:08.597812] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:50 cdw0:0 sqhd:005e p:0 m:0 dnr:0 00:24:26.589 [2024-11-28 11:54:08.597830] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:86272 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:26.589 [2024-11-28 11:54:08.597843] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:44 cdw0:0 sqhd:005f p:0 m:0 dnr:0 00:24:26.589 [2024-11-28 11:54:08.597876] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:86280 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:26.589 [2024-11-28 11:54:08.597891] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:51 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:24:26.589 [2024-11-28 11:54:08.597909] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:86288 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:26.589 [2024-11-28 11:54:08.597922] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:24 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:24:26.589 [2024-11-28 11:54:08.597941] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:86296 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:26.589 [2024-11-28 11:54:08.597953] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:33 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:24:26.589 [2024-11-28 11:54:08.597972] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:86304 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:26.589 [2024-11-28 11:54:08.597985] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:26 cdw0:0 sqhd:0063 p:0 m:0 dnr:0 00:24:26.589 [2024-11-28 11:54:08.598259] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:111 nsid:1 lba:86312 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:26.589 [2024-11-28 11:54:08.598282] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:111 cdw0:0 sqhd:0064 p:0 m:0 dnr:0 00:24:26.589 [2024-11-28 11:54:08.598320] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:86320 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:26.589 [2024-11-28 11:54:08.598337] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:55 cdw0:0 sqhd:0065 p:0 m:0 dnr:0 00:24:26.589 [2024-11-28 11:54:08.598356] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:86328 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:26.589 [2024-11-28 11:54:08.598370] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:2 cdw0:0 sqhd:0066 p:0 m:0 dnr:0 00:24:26.589 [2024-11-28 11:54:08.598388] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 
lba:86336 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:26.589 [2024-11-28 11:54:08.598402] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:0 cdw0:0 sqhd:0067 p:0 m:0 dnr:0 00:24:26.589 [2024-11-28 11:54:08.598420] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:86344 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:26.589 [2024-11-28 11:54:08.598433] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:40 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:24:26.589 [2024-11-28 11:54:08.598452] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:78 nsid:1 lba:86352 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:26.589 [2024-11-28 11:54:08.598464] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:78 cdw0:0 sqhd:0069 p:0 m:0 dnr:0 00:24:26.589 [2024-11-28 11:54:08.598491] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:86360 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:26.589 [2024-11-28 11:54:08.598534] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:25 cdw0:0 sqhd:006a p:0 m:0 dnr:0 00:24:26.589 [2024-11-28 11:54:08.598553] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:77 nsid:1 lba:86368 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:26.589 [2024-11-28 11:54:08.598567] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:77 cdw0:0 sqhd:006b p:0 m:0 dnr:0 00:24:26.589 [2024-11-28 11:54:08.598715] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:86376 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:26.589 [2024-11-28 11:54:08.598739] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:38 cdw0:0 sqhd:006c p:0 m:0 dnr:0 00:24:26.589 [2024-11-28 11:54:08.598762] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:86384 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:26.589 [2024-11-28 11:54:08.598776] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:4 cdw0:0 sqhd:006d p:0 m:0 dnr:0 00:24:26.589 [2024-11-28 11:54:08.598797] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:86392 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:26.589 [2024-11-28 11:54:08.598810] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:5 cdw0:0 sqhd:006e p:0 m:0 dnr:0 00:24:26.589 [2024-11-28 11:54:08.598829] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:102 nsid:1 lba:86400 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:26.589 [2024-11-28 11:54:08.598842] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:102 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:24:26.589 [2024-11-28 11:54:08.598862] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:74 nsid:1 lba:86408 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:26.589 [2024-11-28 11:54:08.598875] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:74 cdw0:0 sqhd:0070 p:0 m:0 dnr:0 00:24:26.589 9191.89 IOPS, 35.91 MiB/s [2024-11-28T11:54:56.715Z] 9195.90 IOPS, 
35.92 MiB/s [2024-11-28T11:54:56.715Z] 9199.00 IOPS, 35.93 MiB/s [2024-11-28T11:54:56.715Z] 9202.58 IOPS, 35.95 MiB/s [2024-11-28T11:54:56.715Z] 9206.69 IOPS, 35.96 MiB/s [2024-11-28T11:54:56.715Z] 9210.79 IOPS, 35.98 MiB/s [2024-11-28T11:54:56.715Z] [2024-11-28 11:54:15.130049] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:88 nsid:1 lba:42616 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:26.589 [2024-11-28 11:54:15.130100] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:88 cdw0:0 sqhd:0065 p:0 m:0 dnr:0 00:24:26.589 [2024-11-28 11:54:15.130167] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:79 nsid:1 lba:42624 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:26.589 [2024-11-28 11:54:15.130186] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:79 cdw0:0 sqhd:0066 p:0 m:0 dnr:0 00:24:26.590 [2024-11-28 11:54:15.130206] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:118 nsid:1 lba:42632 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:26.590 [2024-11-28 11:54:15.130219] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:118 cdw0:0 sqhd:0067 p:0 m:0 dnr:0 00:24:26.590 [2024-11-28 11:54:15.130237] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:70 nsid:1 lba:42640 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:26.590 [2024-11-28 11:54:15.130250] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:70 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:24:26.590 [2024-11-28 11:54:15.130268] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:42648 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:26.590 [2024-11-28 11:54:15.130281] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:12 cdw0:0 sqhd:0069 p:0 m:0 dnr:0 00:24:26.590 [2024-11-28 11:54:15.130299] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:80 nsid:1 lba:42656 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:26.590 [2024-11-28 11:54:15.130324] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:80 cdw0:0 sqhd:006a p:0 m:0 dnr:0 00:24:26.590 [2024-11-28 11:54:15.130345] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:121 nsid:1 lba:42664 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:26.590 [2024-11-28 11:54:15.130359] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:121 cdw0:0 sqhd:006b p:0 m:0 dnr:0 00:24:26.590 [2024-11-28 11:54:15.130399] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:42672 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:26.590 [2024-11-28 11:54:15.130413] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:7 cdw0:0 sqhd:006c p:0 m:0 dnr:0 00:24:26.590 [2024-11-28 11:54:15.130431] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:42040 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:26.590 [2024-11-28 11:54:15.130444] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:37 cdw0:0 sqhd:006d p:0 m:0 dnr:0 00:24:26.590 [2024-11-28 11:54:15.130462] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:82 nsid:1 lba:42048 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:26.590 [2024-11-28 11:54:15.130477] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:82 cdw0:0 sqhd:006e p:0 m:0 dnr:0 00:24:26.590 [2024-11-28 11:54:15.130521] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:100 nsid:1 lba:42056 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:26.590 [2024-11-28 11:54:15.130536] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:100 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:24:26.590 [2024-11-28 11:54:15.130554] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:42064 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:26.590 [2024-11-28 11:54:15.130567] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:43 cdw0:0 sqhd:0070 p:0 m:0 dnr:0 00:24:26.590 [2024-11-28 11:54:15.130585] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:102 nsid:1 lba:42072 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:26.590 [2024-11-28 11:54:15.130598] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:102 cdw0:0 sqhd:0071 p:0 m:0 dnr:0 00:24:26.590 [2024-11-28 11:54:15.130616] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:68 nsid:1 lba:42080 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:26.590 [2024-11-28 11:54:15.130629] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:68 cdw0:0 sqhd:0072 p:0 m:0 dnr:0 00:24:26.590 [2024-11-28 11:54:15.130646] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:92 nsid:1 lba:42088 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:26.590 [2024-11-28 11:54:15.130660] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:92 cdw0:0 sqhd:0073 p:0 m:0 dnr:0 00:24:26.590 [2024-11-28 11:54:15.130679] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:75 nsid:1 lba:42096 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:26.590 [2024-11-28 11:54:15.130692] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:75 cdw0:0 sqhd:0074 p:0 m:0 dnr:0 00:24:26.590 [2024-11-28 11:54:15.130710] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:78 nsid:1 lba:42104 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:26.590 [2024-11-28 11:54:15.130722] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:78 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:24:26.590 [2024-11-28 11:54:15.130740] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:86 nsid:1 lba:42112 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:26.590 [2024-11-28 11:54:15.130753] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:86 cdw0:0 sqhd:0076 p:0 m:0 dnr:0 00:24:26.590 [2024-11-28 11:54:15.130772] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:106 nsid:1 lba:42120 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:26.590 [2024-11-28 11:54:15.130785] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:106 cdw0:0 sqhd:0077 p:0 m:0 
dnr:0 00:24:26.590 [2024-11-28 11:54:15.130812] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:42128 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:26.590 [2024-11-28 11:54:15.130827] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:41 cdw0:0 sqhd:0078 p:0 m:0 dnr:0 00:24:26.590 [2024-11-28 11:54:15.130845] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:42136 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:26.590 [2024-11-28 11:54:15.130858] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:47 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 00:24:26.590 [2024-11-28 11:54:15.130876] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:42144 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:26.590 [2024-11-28 11:54:15.130889] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:17 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:24:26.590 [2024-11-28 11:54:15.130922] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:65 nsid:1 lba:42152 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:26.590 [2024-11-28 11:54:15.130934] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:65 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:24:26.590 [2024-11-28 11:54:15.130953] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:42160 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:26.590 [2024-11-28 11:54:15.130965] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:40 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:24:26.590 [2024-11-28 11:54:15.130988] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:42680 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:26.590 [2024-11-28 11:54:15.131001] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:19 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:24:26.590 [2024-11-28 11:54:15.131020] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:111 nsid:1 lba:42688 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:26.590 [2024-11-28 11:54:15.131034] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:111 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:24:26.590 [2024-11-28 11:54:15.131052] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:85 nsid:1 lba:42696 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:26.590 [2024-11-28 11:54:15.131064] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:85 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:24:26.590 [2024-11-28 11:54:15.131082] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:42704 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:26.590 [2024-11-28 11:54:15.131095] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:32 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:26.590 [2024-11-28 11:54:15.131113] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:42712 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:26.590 [2024-11-28 11:54:15.131125] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS 
INACCESSIBLE (03/02) qid:1 cid:46 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:26.590 [2024-11-28 11:54:15.131143] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:89 nsid:1 lba:42720 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:26.590 [2024-11-28 11:54:15.131156] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:89 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:24:26.590 [2024-11-28 11:54:15.131174] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:119 nsid:1 lba:42728 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:26.590 [2024-11-28 11:54:15.131187] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:119 cdw0:0 sqhd:0003 p:0 m:0 dnr:0 00:24:26.590 [2024-11-28 11:54:15.131212] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:42736 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:26.590 [2024-11-28 11:54:15.131238] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:22 cdw0:0 sqhd:0004 p:0 m:0 dnr:0 00:24:26.590 [2024-11-28 11:54:15.131258] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:42168 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:26.590 [2024-11-28 11:54:15.131271] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:55 cdw0:0 sqhd:0005 p:0 m:0 dnr:0 00:24:26.590 [2024-11-28 11:54:15.131289] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:42176 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:26.590 [2024-11-28 11:54:15.131301] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:4 cdw0:0 sqhd:0006 p:0 m:0 dnr:0 00:24:26.590 [2024-11-28 11:54:15.131319] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:42184 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:26.591 [2024-11-28 11:54:15.131345] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:23 cdw0:0 sqhd:0007 p:0 m:0 dnr:0 00:24:26.591 [2024-11-28 11:54:15.131365] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:42192 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:26.591 [2024-11-28 11:54:15.131379] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:48 cdw0:0 sqhd:0008 p:0 m:0 dnr:0 00:24:26.591 [2024-11-28 11:54:15.131397] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:74 nsid:1 lba:42200 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:26.591 [2024-11-28 11:54:15.131409] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:74 cdw0:0 sqhd:0009 p:0 m:0 dnr:0 00:24:26.591 [2024-11-28 11:54:15.131427] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:124 nsid:1 lba:42208 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:26.591 [2024-11-28 11:54:15.131439] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:124 cdw0:0 sqhd:000a p:0 m:0 dnr:0 00:24:26.591 [2024-11-28 11:54:15.131457] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:42216 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:26.591 [2024-11-28 11:54:15.131470] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:38 cdw0:0 sqhd:000b p:0 m:0 dnr:0 00:24:26.591 [2024-11-28 11:54:15.131488] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:71 nsid:1 lba:42224 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:26.591 [2024-11-28 11:54:15.131500] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:71 cdw0:0 sqhd:000c p:0 m:0 dnr:0 00:24:26.591 [2024-11-28 11:54:15.131518] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:42232 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:26.591 [2024-11-28 11:54:15.131531] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:6 cdw0:0 sqhd:000d p:0 m:0 dnr:0 00:24:26.591 [2024-11-28 11:54:15.131549] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:42240 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:26.591 [2024-11-28 11:54:15.131562] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:56 cdw0:0 sqhd:000e p:0 m:0 dnr:0 00:24:26.591 [2024-11-28 11:54:15.131579] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:42248 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:26.591 [2024-11-28 11:54:15.131592] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:30 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:24:26.591 [2024-11-28 11:54:15.131610] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:42256 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:26.591 [2024-11-28 11:54:15.131629] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:57 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:24:26.591 [2024-11-28 11:54:15.131649] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:42264 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:26.591 [2024-11-28 11:54:15.131662] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:45 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:24:26.591 [2024-11-28 11:54:15.131680] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:42272 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:26.591 [2024-11-28 11:54:15.131693] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:1 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:24:26.591 [2024-11-28 11:54:15.131711] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:69 nsid:1 lba:42280 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:26.591 [2024-11-28 11:54:15.131724] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:69 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:24:26.591 [2024-11-28 11:54:15.131742] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:99 nsid:1 lba:42288 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:26.591 [2024-11-28 11:54:15.131755] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:99 cdw0:0 sqhd:0014 p:0 m:0 dnr:0 00:24:26.591 [2024-11-28 11:54:15.131777] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:42744 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 
00:24:26.591 [2024-11-28 11:54:15.131791] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:44 cdw0:0 sqhd:0015 p:0 m:0 dnr:0 00:24:26.591 [2024-11-28 11:54:15.131809] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:42752 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:26.591 [2024-11-28 11:54:15.131822] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:35 cdw0:0 sqhd:0016 p:0 m:0 dnr:0 00:24:26.591 [2024-11-28 11:54:15.131840] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:112 nsid:1 lba:42760 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:26.591 [2024-11-28 11:54:15.131852] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:112 cdw0:0 sqhd:0017 p:0 m:0 dnr:0 00:24:26.591 [2024-11-28 11:54:15.131870] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:90 nsid:1 lba:42768 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:26.591 [2024-11-28 11:54:15.131883] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:90 cdw0:0 sqhd:0018 p:0 m:0 dnr:0 00:24:26.591 [2024-11-28 11:54:15.131901] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:122 nsid:1 lba:42776 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:26.591 [2024-11-28 11:54:15.131913] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:122 cdw0:0 sqhd:0019 p:0 m:0 dnr:0 00:24:26.591 [2024-11-28 11:54:15.131931] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:42784 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:26.591 [2024-11-28 11:54:15.131943] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:53 cdw0:0 sqhd:001a p:0 m:0 dnr:0 00:24:26.591 [2024-11-28 11:54:15.131961] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:42792 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:26.591 [2024-11-28 11:54:15.131974] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:31 cdw0:0 sqhd:001b p:0 m:0 dnr:0 00:24:26.591 [2024-11-28 11:54:15.131992] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:103 nsid:1 lba:42800 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:26.591 [2024-11-28 11:54:15.132010] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:103 cdw0:0 sqhd:001c p:0 m:0 dnr:0 00:24:26.591 [2024-11-28 11:54:15.132029] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:104 nsid:1 lba:42296 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:26.591 [2024-11-28 11:54:15.132042] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:104 cdw0:0 sqhd:001d p:0 m:0 dnr:0 00:24:26.591 [2024-11-28 11:54:15.132060] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:120 nsid:1 lba:42304 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:26.591 [2024-11-28 11:54:15.132073] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:120 cdw0:0 sqhd:001e p:0 m:0 dnr:0 00:24:26.591 [2024-11-28 11:54:15.132091] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 
nsid:1 lba:42312 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:26.591 [2024-11-28 11:54:15.132103] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:50 cdw0:0 sqhd:001f p:0 m:0 dnr:0 00:24:26.591 [2024-11-28 11:54:15.132122] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:76 nsid:1 lba:42320 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:26.591 [2024-11-28 11:54:15.132134] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:76 cdw0:0 sqhd:0020 p:0 m:0 dnr:0 00:24:26.591 [2024-11-28 11:54:15.132152] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:42328 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:26.591 [2024-11-28 11:54:15.132165] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:11 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:24:26.591 [2024-11-28 11:54:15.132182] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:42336 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:26.591 [2024-11-28 11:54:15.132195] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:13 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:24:26.591 [2024-11-28 11:54:15.132213] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:81 nsid:1 lba:42344 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:26.591 [2024-11-28 11:54:15.132226] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:81 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:24:26.591 [2024-11-28 11:54:15.132244] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:42352 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:26.591 [2024-11-28 11:54:15.132257] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:58 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:24:26.591 [2024-11-28 11:54:15.132274] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:42360 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:26.591 [2024-11-28 11:54:15.132287] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:8 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:24:26.591 [2024-11-28 11:54:15.132317] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:123 nsid:1 lba:42368 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:26.591 [2024-11-28 11:54:15.132347] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:123 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:24:26.591 [2024-11-28 11:54:15.132365] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:67 nsid:1 lba:42376 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:26.591 [2024-11-28 11:54:15.132378] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:67 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:24:26.591 [2024-11-28 11:54:15.132397] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:84 nsid:1 lba:42384 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:26.591 [2024-11-28 11:54:15.132410] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:84 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:24:26.591 [2024-11-28 11:54:15.132435] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:42392 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:26.591 [2024-11-28 11:54:15.132449] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:42 cdw0:0 sqhd:0029 p:0 m:0 dnr:0 00:24:26.591 [2024-11-28 11:54:15.132467] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:42400 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:26.591 [2024-11-28 11:54:15.132480] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:27 cdw0:0 sqhd:002a p:0 m:0 dnr:0 00:24:26.591 [2024-11-28 11:54:15.132499] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:98 nsid:1 lba:42408 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:26.591 [2024-11-28 11:54:15.132512] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:98 cdw0:0 sqhd:002b p:0 m:0 dnr:0 00:24:26.591 [2024-11-28 11:54:15.132530] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:126 nsid:1 lba:42416 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:26.591 [2024-11-28 11:54:15.132543] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:126 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:24:26.591 [2024-11-28 11:54:15.132566] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:116 nsid:1 lba:42808 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:26.591 [2024-11-28 11:54:15.132580] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:116 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:24:26.591 [2024-11-28 11:54:15.132598] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:42816 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:26.592 [2024-11-28 11:54:15.132612] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:52 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:24:26.592 [2024-11-28 11:54:15.132631] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:42824 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:26.592 [2024-11-28 11:54:15.132644] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:15 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:24:26.592 [2024-11-28 11:54:15.132663] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:77 nsid:1 lba:42832 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:26.592 [2024-11-28 11:54:15.132676] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:77 cdw0:0 sqhd:0030 p:0 m:0 dnr:0 00:24:26.592 [2024-11-28 11:54:15.132694] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:105 nsid:1 lba:42840 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:26.592 [2024-11-28 11:54:15.132722] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:105 cdw0:0 sqhd:0031 p:0 m:0 dnr:0 00:24:26.592 [2024-11-28 11:54:15.132740] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:42848 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:26.592 [2024-11-28 11:54:15.132753] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:51 cdw0:0 sqhd:0032 p:0 m:0 dnr:0 
00:24:26.592 [2024-11-28 11:54:15.132771] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:73 nsid:1 lba:42856 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:26.592 [2024-11-28 11:54:15.132784] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:73 cdw0:0 sqhd:0033 p:0 m:0 dnr:0 00:24:26.592 [2024-11-28 11:54:15.132803] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:87 nsid:1 lba:42864 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:26.592 [2024-11-28 11:54:15.132815] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:87 cdw0:0 sqhd:0034 p:0 m:0 dnr:0 00:24:26.592 [2024-11-28 11:54:15.132840] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:42872 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:26.592 [2024-11-28 11:54:15.132853] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:18 cdw0:0 sqhd:0035 p:0 m:0 dnr:0 00:24:26.592 [2024-11-28 11:54:15.132872] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 nsid:1 lba:42880 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:26.592 [2024-11-28 11:54:15.132885] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:108 cdw0:0 sqhd:0036 p:0 m:0 dnr:0 00:24:26.592 [2024-11-28 11:54:15.132902] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:42888 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:26.592 [2024-11-28 11:54:15.132915] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:36 cdw0:0 sqhd:0037 p:0 m:0 dnr:0 00:24:26.592 [2024-11-28 11:54:15.132933] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:114 nsid:1 lba:42896 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:26.592 [2024-11-28 11:54:15.132946] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:114 cdw0:0 sqhd:0038 p:0 m:0 dnr:0 00:24:26.592 [2024-11-28 11:54:15.132964] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:42904 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:26.592 [2024-11-28 11:54:15.132977] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:24 cdw0:0 sqhd:0039 p:0 m:0 dnr:0 00:24:26.592 [2024-11-28 11:54:15.132995] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:42912 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:26.592 [2024-11-28 11:54:15.133008] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:20 cdw0:0 sqhd:003a p:0 m:0 dnr:0 00:24:26.592 [2024-11-28 11:54:15.133026] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:83 nsid:1 lba:42920 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:26.592 [2024-11-28 11:54:15.133038] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:83 cdw0:0 sqhd:003b p:0 m:0 dnr:0 00:24:26.592 [2024-11-28 11:54:15.133056] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:42928 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:26.592 [2024-11-28 11:54:15.133068] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE 
(03/02) qid:1 cid:25 cdw0:0 sqhd:003c p:0 m:0 dnr:0 00:24:26.592 [2024-11-28 11:54:15.133086] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:42936 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:26.592 [2024-11-28 11:54:15.133099] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:26 cdw0:0 sqhd:003d p:0 m:0 dnr:0 00:24:26.592 [2024-11-28 11:54:15.133117] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:42944 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:26.592 [2024-11-28 11:54:15.133130] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:54 cdw0:0 sqhd:003e p:0 m:0 dnr:0 00:24:26.592 [2024-11-28 11:54:15.133148] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:42952 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:26.592 [2024-11-28 11:54:15.133161] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:49 cdw0:0 sqhd:003f p:0 m:0 dnr:0 00:24:26.592 [2024-11-28 11:54:15.133179] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:96 nsid:1 lba:42960 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:26.592 [2024-11-28 11:54:15.133191] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:96 cdw0:0 sqhd:0040 p:0 m:0 dnr:0 00:24:26.592 [2024-11-28 11:54:15.133209] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:42424 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:26.592 [2024-11-28 11:54:15.133228] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:60 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:24:26.592 [2024-11-28 11:54:15.133247] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:42432 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:26.592 [2024-11-28 11:54:15.133259] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:39 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:24:26.592 [2024-11-28 11:54:15.133277] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:66 nsid:1 lba:42440 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:26.592 [2024-11-28 11:54:15.133297] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:66 cdw0:0 sqhd:0043 p:0 m:0 dnr:0 00:24:26.592 [2024-11-28 11:54:15.133316] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:101 nsid:1 lba:42448 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:26.592 [2024-11-28 11:54:15.133341] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:101 cdw0:0 sqhd:0044 p:0 m:0 dnr:0 00:24:26.592 [2024-11-28 11:54:15.133362] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:72 nsid:1 lba:42456 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:26.592 [2024-11-28 11:54:15.133375] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:72 cdw0:0 sqhd:0045 p:0 m:0 dnr:0 00:24:26.592 [2024-11-28 11:54:15.133393] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:94 nsid:1 lba:42464 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:26.592 [2024-11-28 11:54:15.133406] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:94 cdw0:0 sqhd:0046 p:0 m:0 dnr:0 00:24:26.592 [2024-11-28 11:54:15.133424] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:42472 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:26.592 [2024-11-28 11:54:15.133436] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:10 cdw0:0 sqhd:0047 p:0 m:0 dnr:0 00:24:26.592 [2024-11-28 11:54:15.133454] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:95 nsid:1 lba:42480 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:26.592 [2024-11-28 11:54:15.133467] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:95 cdw0:0 sqhd:0048 p:0 m:0 dnr:0 00:24:26.592 [2024-11-28 11:54:15.133485] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:42488 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:26.592 [2024-11-28 11:54:15.133497] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:2 cdw0:0 sqhd:0049 p:0 m:0 dnr:0 00:24:26.592 [2024-11-28 11:54:15.133515] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:42496 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:26.592 [2024-11-28 11:54:15.133528] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:0 cdw0:0 sqhd:004a p:0 m:0 dnr:0 00:24:26.592 [2024-11-28 11:54:15.133546] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:117 nsid:1 lba:42504 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:26.592 [2024-11-28 11:54:15.133558] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:117 cdw0:0 sqhd:004b p:0 m:0 dnr:0 00:24:26.592 [2024-11-28 11:54:15.133576] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:64 nsid:1 lba:42512 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:26.592 [2024-11-28 11:54:15.133589] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:64 cdw0:0 sqhd:004c p:0 m:0 dnr:0 00:24:26.592 [2024-11-28 11:54:15.133608] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:115 nsid:1 lba:42520 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:26.592 [2024-11-28 11:54:15.133634] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:115 cdw0:0 sqhd:004d p:0 m:0 dnr:0 00:24:26.592 [2024-11-28 11:54:15.134388] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:115 nsid:1 lba:42528 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:26.592 [2024-11-28 11:54:15.134414] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:115 cdw0:0 sqhd:004e p:0 m:0 dnr:0 00:24:26.592 [2024-11-28 11:54:15.134444] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:64 nsid:1 lba:42536 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:26.592 [2024-11-28 11:54:15.134459] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:64 cdw0:0 sqhd:004f p:0 m:0 dnr:0 00:24:26.592 [2024-11-28 11:54:15.134491] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:117 nsid:1 lba:42544 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 
0x0 00:24:26.592 [2024-11-28 11:54:15.134522] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:117 cdw0:0 sqhd:0050 p:0 m:0 dnr:0 00:24:26.592 [2024-11-28 11:54:15.134547] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:42968 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:26.592 [2024-11-28 11:54:15.134561] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:0 cdw0:0 sqhd:0051 p:0 m:0 dnr:0 00:24:26.592 [2024-11-28 11:54:15.134585] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:42976 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:26.592 [2024-11-28 11:54:15.134598] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:2 cdw0:0 sqhd:0052 p:0 m:0 dnr:0 00:24:26.592 [2024-11-28 11:54:15.134622] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:95 nsid:1 lba:42984 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:26.592 [2024-11-28 11:54:15.134635] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:95 cdw0:0 sqhd:0053 p:0 m:0 dnr:0 00:24:26.592 [2024-11-28 11:54:15.134659] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:42992 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:26.592 [2024-11-28 11:54:15.134672] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:10 cdw0:0 sqhd:0054 p:0 m:0 dnr:0 00:24:26.592 [2024-11-28 11:54:15.134696] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:94 nsid:1 lba:43000 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:26.593 [2024-11-28 11:54:15.134709] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:94 cdw0:0 sqhd:0055 p:0 m:0 dnr:0 00:24:26.593 [2024-11-28 11:54:15.134732] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:72 nsid:1 lba:43008 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:26.593 [2024-11-28 11:54:15.134745] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:72 cdw0:0 sqhd:0056 p:0 m:0 dnr:0 00:24:26.593 [2024-11-28 11:54:15.134770] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:101 nsid:1 lba:43016 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:26.593 [2024-11-28 11:54:15.134783] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:101 cdw0:0 sqhd:0057 p:0 m:0 dnr:0 00:24:26.593 [2024-11-28 11:54:15.134877] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:66 nsid:1 lba:43024 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:26.593 [2024-11-28 11:54:15.134913] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:66 cdw0:0 sqhd:0058 p:0 m:0 dnr:0 00:24:26.593 [2024-11-28 11:54:15.134940] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:43032 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:26.593 [2024-11-28 11:54:15.134955] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:39 cdw0:0 sqhd:0059 p:0 m:0 dnr:0 00:24:26.593 [2024-11-28 11:54:15.134991] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 
lba:43040 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:26.593 [2024-11-28 11:54:15.135005] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:60 cdw0:0 sqhd:005a p:0 m:0 dnr:0 00:24:26.593 [2024-11-28 11:54:15.135033] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:96 nsid:1 lba:43048 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:26.593 [2024-11-28 11:54:15.135046] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:96 cdw0:0 sqhd:005b p:0 m:0 dnr:0 00:24:26.593 [2024-11-28 11:54:15.135070] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:43056 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:26.593 [2024-11-28 11:54:15.135083] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:49 cdw0:0 sqhd:005c p:0 m:0 dnr:0 00:24:26.593 [2024-11-28 11:54:15.135108] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:42552 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:26.593 [2024-11-28 11:54:15.135128] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:54 cdw0:0 sqhd:005d p:0 m:0 dnr:0 00:24:26.593 [2024-11-28 11:54:15.135154] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:42560 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:26.593 [2024-11-28 11:54:15.135167] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:26 cdw0:0 sqhd:005e p:0 m:0 dnr:0 00:24:26.593 [2024-11-28 11:54:15.135192] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:42568 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:26.593 [2024-11-28 11:54:15.135205] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:25 cdw0:0 sqhd:005f p:0 m:0 dnr:0 00:24:26.593 [2024-11-28 11:54:15.135230] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:83 nsid:1 lba:42576 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:26.593 [2024-11-28 11:54:15.135243] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:83 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:24:26.593 [2024-11-28 11:54:15.135267] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:42584 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:26.593 [2024-11-28 11:54:15.135281] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:20 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:24:26.593 [2024-11-28 11:54:15.135305] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:42592 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:26.593 [2024-11-28 11:54:15.135319] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:24 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:24:26.593 [2024-11-28 11:54:15.135355] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:114 nsid:1 lba:42600 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:26.593 [2024-11-28 11:54:15.135371] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:114 cdw0:0 sqhd:0063 p:0 m:0 dnr:0 00:24:26.593 [2024-11-28 11:54:15.135397] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:42608 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:26.593 [2024-11-28 11:54:15.135411] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:36 cdw0:0 sqhd:0064 p:0 m:0 dnr:0 00:24:26.593 9088.47 IOPS, 35.50 MiB/s [2024-11-28T11:54:56.719Z] 8630.38 IOPS, 33.71 MiB/s [2024-11-28T11:54:56.719Z] 8660.59 IOPS, 33.83 MiB/s [2024-11-28T11:54:56.719Z] 8687.89 IOPS, 33.94 MiB/s [2024-11-28T11:54:56.719Z] 8712.32 IOPS, 34.03 MiB/s [2024-11-28T11:54:56.719Z] 8737.50 IOPS, 34.13 MiB/s [2024-11-28T11:54:56.719Z] 8760.29 IOPS, 34.22 MiB/s [2024-11-28T11:54:56.719Z] [2024-11-28 11:54:22.252818] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:83 nsid:1 lba:96048 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:26.593 [2024-11-28 11:54:22.252896] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:83 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:24:26.593 [2024-11-28 11:54:22.252962] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:96056 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:26.593 [2024-11-28 11:54:22.252980] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:59 cdw0:0 sqhd:0030 p:0 m:0 dnr:0 00:24:26.593 [2024-11-28 11:54:22.253000] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:96064 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:26.593 [2024-11-28 11:54:22.253014] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:58 cdw0:0 sqhd:0031 p:0 m:0 dnr:0 00:24:26.593 [2024-11-28 11:54:22.253032] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:107 nsid:1 lba:96072 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:26.593 [2024-11-28 11:54:22.253044] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:107 cdw0:0 sqhd:0032 p:0 m:0 dnr:0 00:24:26.593 [2024-11-28 11:54:22.253062] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:115 nsid:1 lba:96080 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:26.593 [2024-11-28 11:54:22.253075] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:115 cdw0:0 sqhd:0033 p:0 m:0 dnr:0 00:24:26.593 [2024-11-28 11:54:22.253092] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:80 nsid:1 lba:96088 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:26.593 [2024-11-28 11:54:22.253105] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:80 cdw0:0 sqhd:0034 p:0 m:0 dnr:0 00:24:26.593 [2024-11-28 11:54:22.253123] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:70 nsid:1 lba:96096 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:26.593 [2024-11-28 11:54:22.253136] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:70 cdw0:0 sqhd:0035 p:0 m:0 dnr:0 00:24:26.593 [2024-11-28 11:54:22.253154] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:96104 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:26.593 [2024-11-28 11:54:22.253167] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:49 cdw0:0 sqhd:0036 p:0 m:0 
dnr:0 00:24:26.593 [2024-11-28 11:54:22.253185] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:117 nsid:1 lba:95600 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:26.593 [2024-11-28 11:54:22.253197] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:117 cdw0:0 sqhd:0037 p:0 m:0 dnr:0 00:24:26.593 [2024-11-28 11:54:22.253215] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:95608 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:26.593 [2024-11-28 11:54:22.253228] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:29 cdw0:0 sqhd:0038 p:0 m:0 dnr:0 00:24:26.593 [2024-11-28 11:54:22.253246] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:69 nsid:1 lba:95616 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:26.593 [2024-11-28 11:54:22.253258] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:69 cdw0:0 sqhd:0039 p:0 m:0 dnr:0 00:24:26.593 [2024-11-28 11:54:22.253276] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:87 nsid:1 lba:95624 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:26.593 [2024-11-28 11:54:22.253289] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:87 cdw0:0 sqhd:003a p:0 m:0 dnr:0 00:24:26.593 [2024-11-28 11:54:22.253307] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:100 nsid:1 lba:95632 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:26.593 [2024-11-28 11:54:22.253344] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:100 cdw0:0 sqhd:003b p:0 m:0 dnr:0 00:24:26.593 [2024-11-28 11:54:22.253365] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:95640 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:26.593 [2024-11-28 11:54:22.253378] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:23 cdw0:0 sqhd:003c p:0 m:0 dnr:0 00:24:26.593 [2024-11-28 11:54:22.253396] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:95648 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:26.593 [2024-11-28 11:54:22.253409] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:46 cdw0:0 sqhd:003d p:0 m:0 dnr:0 00:24:26.593 [2024-11-28 11:54:22.253427] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:95656 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:26.593 [2024-11-28 11:54:22.253440] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:12 cdw0:0 sqhd:003e p:0 m:0 dnr:0 00:24:26.593 [2024-11-28 11:54:22.253457] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:103 nsid:1 lba:95664 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:26.593 [2024-11-28 11:54:22.253469] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:103 cdw0:0 sqhd:003f p:0 m:0 dnr:0 00:24:26.593 [2024-11-28 11:54:22.253490] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:89 nsid:1 lba:95672 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:26.593 [2024-11-28 11:54:22.253503] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC 
ACCESS INACCESSIBLE (03/02) qid:1 cid:89 cdw0:0 sqhd:0040 p:0 m:0 dnr:0 00:24:26.593 [2024-11-28 11:54:22.253520] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:95680 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:26.593 [2024-11-28 11:54:22.253533] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:31 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:24:26.593 [2024-11-28 11:54:22.253551] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:95688 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:26.593 [2024-11-28 11:54:22.253563] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:30 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:24:26.593 [2024-11-28 11:54:22.253581] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:123 nsid:1 lba:95696 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:26.593 [2024-11-28 11:54:22.253593] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:123 cdw0:0 sqhd:0043 p:0 m:0 dnr:0 00:24:26.593 [2024-11-28 11:54:22.253612] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:95704 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:26.593 [2024-11-28 11:54:22.253624] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:54 cdw0:0 sqhd:0044 p:0 m:0 dnr:0 00:24:26.594 [2024-11-28 11:54:22.253642] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:75 nsid:1 lba:95712 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:26.594 [2024-11-28 11:54:22.253654] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:75 cdw0:0 sqhd:0045 p:0 m:0 dnr:0 00:24:26.594 [2024-11-28 11:54:22.253673] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:84 nsid:1 lba:95720 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:26.594 [2024-11-28 11:54:22.253686] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:84 cdw0:0 sqhd:0046 p:0 m:0 dnr:0 00:24:26.594 [2024-11-28 11:54:22.253709] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:64 nsid:1 lba:96112 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:26.594 [2024-11-28 11:54:22.253724] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:64 cdw0:0 sqhd:0047 p:0 m:0 dnr:0 00:24:26.594 [2024-11-28 11:54:22.253750] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:96120 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:26.594 [2024-11-28 11:54:22.253764] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:3 cdw0:0 sqhd:0048 p:0 m:0 dnr:0 00:24:26.594 [2024-11-28 11:54:22.253782] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:114 nsid:1 lba:96128 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:26.594 [2024-11-28 11:54:22.253794] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:114 cdw0:0 sqhd:0049 p:0 m:0 dnr:0 00:24:26.594 [2024-11-28 11:54:22.253812] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:105 nsid:1 lba:96136 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:26.594 [2024-11-28 11:54:22.253825] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:105 cdw0:0 sqhd:004a p:0 m:0 dnr:0 00:24:26.594 [2024-11-28 11:54:22.253844] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:68 nsid:1 lba:96144 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:26.594 [2024-11-28 11:54:22.253856] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:68 cdw0:0 sqhd:004b p:0 m:0 dnr:0 00:24:26.594 [2024-11-28 11:54:22.253892] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:104 nsid:1 lba:96152 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:26.594 [2024-11-28 11:54:22.253905] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:104 cdw0:0 sqhd:004c p:0 m:0 dnr:0 00:24:26.594 [2024-11-28 11:54:22.253923] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:119 nsid:1 lba:96160 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:26.594 [2024-11-28 11:54:22.253936] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:119 cdw0:0 sqhd:004d p:0 m:0 dnr:0 00:24:26.594 [2024-11-28 11:54:22.253955] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:96168 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:26.594 [2024-11-28 11:54:22.253968] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:21 cdw0:0 sqhd:004e p:0 m:0 dnr:0 00:24:26.594 [2024-11-28 11:54:22.253986] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:96176 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:26.594 [2024-11-28 11:54:22.253999] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:2 cdw0:0 sqhd:004f p:0 m:0 dnr:0 00:24:26.594 [2024-11-28 11:54:22.254018] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:96184 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:26.594 [2024-11-28 11:54:22.254048] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:0 cdw0:0 sqhd:0050 p:0 m:0 dnr:0 00:24:26.594 [2024-11-28 11:54:22.254066] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:96192 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:26.594 [2024-11-28 11:54:22.254080] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:40 cdw0:0 sqhd:0051 p:0 m:0 dnr:0 00:24:26.594 [2024-11-28 11:54:22.254098] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:96200 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:26.594 [2024-11-28 11:54:22.254112] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:62 cdw0:0 sqhd:0052 p:0 m:0 dnr:0 00:24:26.594 [2024-11-28 11:54:22.254130] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:96208 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:26.594 [2024-11-28 11:54:22.254143] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:124 cdw0:0 sqhd:0053 p:0 m:0 dnr:0 00:24:26.594 [2024-11-28 11:54:22.254169] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:98 nsid:1 lba:96216 len:8 SGL DATA BLOCK OFFSET 0x0 
len:0x1000 00:24:26.594 [2024-11-28 11:54:22.254183] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:98 cdw0:0 sqhd:0054 p:0 m:0 dnr:0 00:24:26.594 [2024-11-28 11:54:22.254202] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:99 nsid:1 lba:96224 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:26.594 [2024-11-28 11:54:22.254215] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:99 cdw0:0 sqhd:0055 p:0 m:0 dnr:0 00:24:26.594 [2024-11-28 11:54:22.254234] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:96232 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:26.594 [2024-11-28 11:54:22.254248] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:125 cdw0:0 sqhd:0056 p:0 m:0 dnr:0 00:24:26.594 [2024-11-28 11:54:22.254267] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:95728 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:26.594 [2024-11-28 11:54:22.254281] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:38 cdw0:0 sqhd:0057 p:0 m:0 dnr:0 00:24:26.594 [2024-11-28 11:54:22.254300] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:95736 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:26.594 [2024-11-28 11:54:22.254329] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:4 cdw0:0 sqhd:0058 p:0 m:0 dnr:0 00:24:26.594 [2024-11-28 11:54:22.254376] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:95744 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:26.594 [2024-11-28 11:54:22.254393] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:5 cdw0:0 sqhd:0059 p:0 m:0 dnr:0 00:24:26.594 [2024-11-28 11:54:22.254412] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:95752 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:26.594 [2024-11-28 11:54:22.254426] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:24 cdw0:0 sqhd:005a p:0 m:0 dnr:0 00:24:26.594 [2024-11-28 11:54:22.254445] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:95760 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:26.594 [2024-11-28 11:54:22.254458] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:33 cdw0:0 sqhd:005b p:0 m:0 dnr:0 00:24:26.594 [2024-11-28 11:54:22.254477] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:95768 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:26.594 [2024-11-28 11:54:22.254516] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:26 cdw0:0 sqhd:005c p:0 m:0 dnr:0 00:24:26.594 [2024-11-28 11:54:22.254538] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:111 nsid:1 lba:95776 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:26.594 [2024-11-28 11:54:22.254552] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:111 cdw0:0 sqhd:005d p:0 m:0 dnr:0 00:24:26.594 [2024-11-28 11:54:22.254571] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 
cid:55 nsid:1 lba:95784 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:26.594 [2024-11-28 11:54:22.254585] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:55 cdw0:0 sqhd:005e p:0 m:0 dnr:0 00:24:26.594 [2024-11-28 11:54:22.254604] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:95792 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:26.594 [2024-11-28 11:54:22.254617] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:51 cdw0:0 sqhd:005f p:0 m:0 dnr:0 00:24:26.594 [2024-11-28 11:54:22.254637] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:97 nsid:1 lba:95800 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:26.594 [2024-11-28 11:54:22.254661] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:97 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:24:26.594 [2024-11-28 11:54:22.254682] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:109 nsid:1 lba:95808 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:26.594 [2024-11-28 11:54:22.254696] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:109 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:24:26.594 [2024-11-28 11:54:22.254716] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:95 nsid:1 lba:95816 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:26.594 [2024-11-28 11:54:22.254729] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:95 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:24:26.594 [2024-11-28 11:54:22.254749] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:96 nsid:1 lba:95824 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:26.594 [2024-11-28 11:54:22.254762] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:96 cdw0:0 sqhd:0063 p:0 m:0 dnr:0 00:24:26.594 [2024-11-28 11:54:22.254795] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:102 nsid:1 lba:95832 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:26.594 [2024-11-28 11:54:22.254808] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:102 cdw0:0 sqhd:0064 p:0 m:0 dnr:0 00:24:26.594 [2024-11-28 11:54:22.254827] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:110 nsid:1 lba:95840 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:26.594 [2024-11-28 11:54:22.254841] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:110 cdw0:0 sqhd:0065 p:0 m:0 dnr:0 00:24:26.594 [2024-11-28 11:54:22.254860] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:95848 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:26.594 [2024-11-28 11:54:22.254874] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:35 cdw0:0 sqhd:0066 p:0 m:0 dnr:0 00:24:26.594 [2024-11-28 11:54:22.254911] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:74 nsid:1 lba:96240 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:26.594 [2024-11-28 11:54:22.254926] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:74 cdw0:0 sqhd:0067 p:0 m:0 dnr:0 00:24:26.594 [2024-11-28 11:54:22.254945] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:96248 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:26.594 [2024-11-28 11:54:22.254958] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:43 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:24:26.594 [2024-11-28 11:54:22.254977] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:96256 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:26.594 [2024-11-28 11:54:22.254990] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:42 cdw0:0 sqhd:0069 p:0 m:0 dnr:0 00:24:26.594 [2024-11-28 11:54:22.255008] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:96264 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:26.594 [2024-11-28 11:54:22.255021] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:9 cdw0:0 sqhd:006a p:0 m:0 dnr:0 00:24:26.594 [2024-11-28 11:54:22.255039] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:71 nsid:1 lba:96272 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:26.594 [2024-11-28 11:54:22.255052] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:71 cdw0:0 sqhd:006b p:0 m:0 dnr:0 00:24:26.594 [2024-11-28 11:54:22.255071] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:76 nsid:1 lba:96280 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:26.595 [2024-11-28 11:54:22.255090] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:76 cdw0:0 sqhd:006c p:0 m:0 dnr:0 00:24:26.595 [2024-11-28 11:54:22.255109] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:96288 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:26.595 [2024-11-28 11:54:22.255122] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:32 cdw0:0 sqhd:006d p:0 m:0 dnr:0 00:24:26.595 [2024-11-28 11:54:22.255141] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:96296 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:26.595 [2024-11-28 11:54:22.255154] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:13 cdw0:0 sqhd:006e p:0 m:0 dnr:0 00:24:26.595 [2024-11-28 11:54:22.255172] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:96304 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:26.595 [2024-11-28 11:54:22.255185] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:53 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:24:26.595 [2024-11-28 11:54:22.255203] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:106 nsid:1 lba:96312 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:26.595 [2024-11-28 11:54:22.255216] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:106 cdw0:0 sqhd:0070 p:0 m:0 dnr:0 00:24:26.595 [2024-11-28 11:54:22.255235] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:96320 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:26.595 [2024-11-28 11:54:22.255248] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:1 cdw0:0 sqhd:0071 p:0 m:0 
dnr:0 00:24:26.595 [2024-11-28 11:54:22.255266] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:96328 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:26.595 [2024-11-28 11:54:22.255279] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:8 cdw0:0 sqhd:0072 p:0 m:0 dnr:0 00:24:26.595 [2024-11-28 11:54:22.255297] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:96336 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:26.595 [2024-11-28 11:54:22.255310] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:56 cdw0:0 sqhd:0073 p:0 m:0 dnr:0 00:24:26.595 [2024-11-28 11:54:22.255328] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:96344 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:26.595 [2024-11-28 11:54:22.255353] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:27 cdw0:0 sqhd:0074 p:0 m:0 dnr:0 00:24:26.595 [2024-11-28 11:54:22.255391] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:96352 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:26.595 [2024-11-28 11:54:22.255405] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:37 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:24:26.595 [2024-11-28 11:54:22.255424] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:96360 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:26.595 [2024-11-28 11:54:22.255438] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:57 cdw0:0 sqhd:0076 p:0 m:0 dnr:0 00:24:26.595 [2024-11-28 11:54:22.255457] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:95856 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:26.595 [2024-11-28 11:54:22.255470] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:20 cdw0:0 sqhd:0077 p:0 m:0 dnr:0 00:24:26.595 [2024-11-28 11:54:22.255489] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:95864 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:26.595 [2024-11-28 11:54:22.255503] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:52 cdw0:0 sqhd:0078 p:0 m:0 dnr:0 00:24:26.595 [2024-11-28 11:54:22.255536] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:122 nsid:1 lba:95872 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:26.595 [2024-11-28 11:54:22.255551] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:122 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 00:24:26.595 [2024-11-28 11:54:22.255570] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:95880 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:26.595 [2024-11-28 11:54:22.255583] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:17 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:24:26.595 [2024-11-28 11:54:22.255602] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:95888 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:26.595 [2024-11-28 11:54:22.255615] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS 
INACCESSIBLE (03/02) qid:1 cid:61 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:24:26.595 [2024-11-28 11:54:22.255634] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:67 nsid:1 lba:95896 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:26.595 [2024-11-28 11:54:22.255647] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:67 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:24:26.595 [2024-11-28 11:54:22.255666] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:95904 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:26.595 [2024-11-28 11:54:22.255678] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:6 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:24:26.595 [2024-11-28 11:54:22.255697] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:85 nsid:1 lba:95912 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:26.595 [2024-11-28 11:54:22.255710] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:85 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:24:26.595 [2024-11-28 11:54:22.255729] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:113 nsid:1 lba:95920 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:26.595 [2024-11-28 11:54:22.255756] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:113 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:24:26.595 [2024-11-28 11:54:22.255775] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:95928 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:26.595 [2024-11-28 11:54:22.255787] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:48 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:26.595 [2024-11-28 11:54:22.255805] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:108 nsid:1 lba:95936 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:26.595 [2024-11-28 11:54:22.255818] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:108 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:26.595 [2024-11-28 11:54:22.255836] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:116 nsid:1 lba:95944 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:26.595 [2024-11-28 11:54:22.255849] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:116 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:24:26.595 [2024-11-28 11:54:22.255867] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:112 nsid:1 lba:95952 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:26.595 [2024-11-28 11:54:22.255880] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:112 cdw0:0 sqhd:0003 p:0 m:0 dnr:0 00:24:26.595 [2024-11-28 11:54:22.255898] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:95960 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:26.595 [2024-11-28 11:54:22.255911] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:28 cdw0:0 sqhd:0004 p:0 m:0 dnr:0 00:24:26.595 [2024-11-28 11:54:22.255936] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:82 nsid:1 lba:95968 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:26.595 [2024-11-28 11:54:22.255949] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:82 cdw0:0 sqhd:0005 p:0 m:0 dnr:0 00:24:26.595 [2024-11-28 11:54:22.255968] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:77 nsid:1 lba:95976 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:26.595 [2024-11-28 11:54:22.255982] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:77 cdw0:0 sqhd:0006 p:0 m:0 dnr:0 00:24:26.595 [2024-11-28 11:54:22.256004] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:96368 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:26.595 [2024-11-28 11:54:22.256018] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:25 cdw0:0 sqhd:0007 p:0 m:0 dnr:0 00:24:26.595 [2024-11-28 11:54:22.256036] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:78 nsid:1 lba:96376 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:26.595 [2024-11-28 11:54:22.256049] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:78 cdw0:0 sqhd:0008 p:0 m:0 dnr:0 00:24:26.595 [2024-11-28 11:54:22.256072] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:94 nsid:1 lba:96384 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:26.595 [2024-11-28 11:54:22.256085] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:94 cdw0:0 sqhd:0009 p:0 m:0 dnr:0 00:24:26.595 [2024-11-28 11:54:22.256104] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:96392 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:26.595 [2024-11-28 11:54:22.256116] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:39 cdw0:0 sqhd:000a p:0 m:0 dnr:0 00:24:26.595 [2024-11-28 11:54:22.256134] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:101 nsid:1 lba:96400 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:26.595 [2024-11-28 11:54:22.256147] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:101 cdw0:0 sqhd:000b p:0 m:0 dnr:0 00:24:26.595 [2024-11-28 11:54:22.256165] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:96408 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:26.595 [2024-11-28 11:54:22.256178] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:60 cdw0:0 sqhd:000c p:0 m:0 dnr:0 00:24:26.595 [2024-11-28 11:54:22.256196] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:96416 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:26.595 [2024-11-28 11:54:22.256209] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:15 cdw0:0 sqhd:000d p:0 m:0 dnr:0 00:24:26.595 [2024-11-28 11:54:22.256227] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:96424 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:26.596 [2024-11-28 11:54:22.256240] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:22 cdw0:0 sqhd:000e p:0 m:0 dnr:0 00:24:26.596 [2024-11-28 11:54:22.256258] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:120 nsid:1 lba:96432 len:8 SGL DATA BLOCK OFFSET 0x0 
len:0x1000 00:24:26.596 [2024-11-28 11:54:22.256271] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:120 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:24:26.596 [2024-11-28 11:54:22.256289] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:81 nsid:1 lba:96440 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:26.596 [2024-11-28 11:54:22.256302] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:81 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:24:26.596 [2024-11-28 11:54:22.256320] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:96448 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:26.596 [2024-11-28 11:54:22.256354] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:36 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:24:26.596 [2024-11-28 11:54:22.256375] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:96456 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:26.596 [2024-11-28 11:54:22.256389] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:19 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:24:26.596 [2024-11-28 11:54:22.256407] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:90 nsid:1 lba:96464 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:26.596 [2024-11-28 11:54:22.256420] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:90 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:24:26.596 [2024-11-28 11:54:22.256438] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:91 nsid:1 lba:96472 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:26.596 [2024-11-28 11:54:22.256451] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:91 cdw0:0 sqhd:0014 p:0 m:0 dnr:0 00:24:26.596 [2024-11-28 11:54:22.256469] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:65 nsid:1 lba:96480 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:26.596 [2024-11-28 11:54:22.256482] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:65 cdw0:0 sqhd:0015 p:0 m:0 dnr:0 00:24:26.596 [2024-11-28 11:54:22.256501] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:118 nsid:1 lba:96488 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:26.596 [2024-11-28 11:54:22.256519] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:118 cdw0:0 sqhd:0016 p:0 m:0 dnr:0 00:24:26.596 [2024-11-28 11:54:22.256538] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:88 nsid:1 lba:95984 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:26.596 [2024-11-28 11:54:22.256552] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:88 cdw0:0 sqhd:0017 p:0 m:0 dnr:0 00:24:26.596 [2024-11-28 11:54:22.256570] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:86 nsid:1 lba:95992 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:26.596 [2024-11-28 11:54:22.256582] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:86 cdw0:0 sqhd:0018 p:0 m:0 dnr:0 00:24:26.596 [2024-11-28 11:54:22.256605] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:93 
nsid:1 lba:96000 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:26.596 [2024-11-28 11:54:22.256619] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:93 cdw0:0 sqhd:0019 p:0 m:0 dnr:0 00:24:26.596 [2024-11-28 11:54:22.256637] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:92 nsid:1 lba:96008 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:26.596 [2024-11-28 11:54:22.256650] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:92 cdw0:0 sqhd:001a p:0 m:0 dnr:0 00:24:26.596 [2024-11-28 11:54:22.256669] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:96016 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:26.596 [2024-11-28 11:54:22.256681] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:14 cdw0:0 sqhd:001b p:0 m:0 dnr:0 00:24:26.596 [2024-11-28 11:54:22.256699] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:96024 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:26.596 [2024-11-28 11:54:22.256713] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:41 cdw0:0 sqhd:001c p:0 m:0 dnr:0 00:24:26.596 [2024-11-28 11:54:22.256731] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:66 nsid:1 lba:96032 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:26.596 [2024-11-28 11:54:22.256751] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:66 cdw0:0 sqhd:001d p:0 m:0 dnr:0 00:24:26.596 [2024-11-28 11:54:22.257408] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:66 nsid:1 lba:96040 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:26.596 [2024-11-28 11:54:22.257435] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:66 cdw0:0 sqhd:001e p:0 m:0 dnr:0 00:24:26.596 [2024-11-28 11:54:22.257465] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:96496 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:26.596 [2024-11-28 11:54:22.257481] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:41 cdw0:0 sqhd:001f p:0 m:0 dnr:0 00:24:26.596 [2024-11-28 11:54:22.257506] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:96504 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:26.596 [2024-11-28 11:54:22.257519] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:14 cdw0:0 sqhd:0020 p:0 m:0 dnr:0 00:24:26.596 [2024-11-28 11:54:22.257543] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:92 nsid:1 lba:96512 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:26.596 [2024-11-28 11:54:22.257557] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:92 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:24:26.596 [2024-11-28 11:54:22.257581] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:93 nsid:1 lba:96520 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:26.596 [2024-11-28 11:54:22.257594] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:93 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:24:26.596 [2024-11-28 11:54:22.257618] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:86 nsid:1 lba:96528 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:26.596 [2024-11-28 11:54:22.257631] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:86 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:24:26.596 [2024-11-28 11:54:22.257655] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:88 nsid:1 lba:96536 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:26.596 [2024-11-28 11:54:22.257668] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:88 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:24:26.596 [2024-11-28 11:54:22.257693] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:118 nsid:1 lba:96544 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:26.596 [2024-11-28 11:54:22.257706] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:118 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:24:26.596 [2024-11-28 11:54:22.257776] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:65 nsid:1 lba:96552 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:26.596 [2024-11-28 11:54:22.257803] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:65 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:24:26.596 [2024-11-28 11:54:22.257832] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:91 nsid:1 lba:96560 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:26.596 [2024-11-28 11:54:22.257847] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:91 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:24:26.596 [2024-11-28 11:54:22.257872] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:90 nsid:1 lba:96568 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:26.596 [2024-11-28 11:54:22.257885] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:90 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:24:26.596 [2024-11-28 11:54:22.257916] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:96576 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:26.596 [2024-11-28 11:54:22.257942] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:19 cdw0:0 sqhd:0029 p:0 m:0 dnr:0 00:24:26.596 [2024-11-28 11:54:22.257969] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:96584 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:26.596 [2024-11-28 11:54:22.257982] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:36 cdw0:0 sqhd:002a p:0 m:0 dnr:0 00:24:26.596 [2024-11-28 11:54:22.258007] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:81 nsid:1 lba:96592 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:26.596 [2024-11-28 11:54:22.258020] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:81 cdw0:0 sqhd:002b p:0 m:0 dnr:0 00:24:26.596 [2024-11-28 11:54:22.258045] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:120 nsid:1 lba:96600 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:26.596 [2024-11-28 11:54:22.258058] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:120 cdw0:0 sqhd:002c p:0 m:0 dnr:0 
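Every completion in the burst above carries the path-related status ASYMMETRIC ACCESS INACCESSIBLE (03/02), i.e. status code type 0x3 with status code 0x02, consistent with the target reporting the active path's ANA group as inaccessible while the multipath test flips paths. A quick, illustrative way to tally these records from a saved copy of this console output; the file name "multipath.log" and the grep pipeline are placeholders for this sketch, not part of the test suite:

# Count completions per status string / (sct/sc) pair seen in the log above.
grep -o '\*NOTICE\*: [A-Z -]* ([0-9a-f][0-9a-f]/[0-9a-f][0-9a-f])' multipath.log \
    | sed 's/\*NOTICE\*: //' \
    | sort | uniq -c | sort -rn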
00:24:26.596 [2024-11-28 11:54:22.258083] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:96608 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:26.596 [2024-11-28 11:54:22.258096] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:22 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:24:26.596 [2024-11-28 11:54:22.258125] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:96616 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:26.597 [2024-11-28 11:54:22.258139] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:15 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:24:26.597 8735.18 IOPS, 34.12 MiB/s [2024-11-28T11:54:56.723Z] 8355.39 IOPS, 32.64 MiB/s [2024-11-28T11:54:56.723Z] 8007.25 IOPS, 31.28 MiB/s [2024-11-28T11:54:56.723Z] 7686.96 IOPS, 30.03 MiB/s [2024-11-28T11:54:56.723Z] 7391.31 IOPS, 28.87 MiB/s [2024-11-28T11:54:56.723Z] 7117.56 IOPS, 27.80 MiB/s [2024-11-28T11:54:56.723Z] 6863.36 IOPS, 26.81 MiB/s [2024-11-28T11:54:56.723Z] 6645.69 IOPS, 25.96 MiB/s [2024-11-28T11:54:56.723Z] 6726.03 IOPS, 26.27 MiB/s [2024-11-28T11:54:56.723Z] 6804.29 IOPS, 26.58 MiB/s [2024-11-28T11:54:56.723Z] 6880.16 IOPS, 26.88 MiB/s [2024-11-28T11:54:56.723Z] 6950.94 IOPS, 27.15 MiB/s [2024-11-28T11:54:56.723Z] 7018.03 IOPS, 27.41 MiB/s [2024-11-28T11:54:56.723Z] 7079.00 IOPS, 27.65 MiB/s [2024-11-28T11:54:56.723Z] [2024-11-28 11:54:35.605146] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:36840 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:26.597 [2024-11-28 11:54:35.605208] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:24 cdw0:0 sqhd:005d p:0 m:0 dnr:0 00:24:26.597 [2024-11-28 11:54:35.605263] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:36848 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:26.597 [2024-11-28 11:54:35.605282] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:5 cdw0:0 sqhd:005e p:0 m:0 dnr:0 00:24:26.597 [2024-11-28 11:54:35.605315] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:36856 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:26.597 [2024-11-28 11:54:35.605331] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:4 cdw0:0 sqhd:005f p:0 m:0 dnr:0 00:24:26.597 [2024-11-28 11:54:35.605349] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:36864 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:26.597 [2024-11-28 11:54:35.605362] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:38 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:24:26.597 [2024-11-28 11:54:35.605380] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:36872 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:26.597 [2024-11-28 11:54:35.605393] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:32 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:24:26.597 [2024-11-28 11:54:35.605411] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:104 nsid:1 lba:36880 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:26.597 [2024-11-28 11:54:35.605447] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS 
INACCESSIBLE (03/02) qid:1 cid:104 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:24:26.597 [2024-11-28 11:54:35.605468] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:68 nsid:1 lba:36888 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:26.597 [2024-11-28 11:54:35.605481] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:68 cdw0:0 sqhd:0063 p:0 m:0 dnr:0 00:24:26.597 [2024-11-28 11:54:35.605499] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:81 nsid:1 lba:36896 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:26.597 [2024-11-28 11:54:35.605511] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:81 cdw0:0 sqhd:0064 p:0 m:0 dnr:0 00:24:26.597 [2024-11-28 11:54:35.605528] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:113 nsid:1 lba:36392 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:26.597 [2024-11-28 11:54:35.605541] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:113 cdw0:0 sqhd:0065 p:0 m:0 dnr:0 00:24:26.597 [2024-11-28 11:54:35.605559] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:36400 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:26.597 [2024-11-28 11:54:35.605572] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:48 cdw0:0 sqhd:0066 p:0 m:0 dnr:0 00:24:26.597 [2024-11-28 11:54:35.605589] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:108 nsid:1 lba:36408 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:26.597 [2024-11-28 11:54:35.605602] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:108 cdw0:0 sqhd:0067 p:0 m:0 dnr:0 00:24:26.597 [2024-11-28 11:54:35.605619] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:116 nsid:1 lba:36416 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:26.597 [2024-11-28 11:54:35.605632] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:116 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:24:26.597 [2024-11-28 11:54:35.605649] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:95 nsid:1 lba:36424 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:26.597 [2024-11-28 11:54:35.605661] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:95 cdw0:0 sqhd:0069 p:0 m:0 dnr:0 00:24:26.597 [2024-11-28 11:54:35.605678] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:75 nsid:1 lba:36432 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:26.597 [2024-11-28 11:54:35.605690] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:75 cdw0:0 sqhd:006a p:0 m:0 dnr:0 00:24:26.597 [2024-11-28 11:54:35.605707] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:36440 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:26.597 [2024-11-28 11:54:35.605719] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:49 cdw0:0 sqhd:006b p:0 m:0 dnr:0 00:24:26.597 [2024-11-28 11:54:35.605736] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:36448 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:26.597 [2024-11-28 11:54:35.605748] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:15 cdw0:0 sqhd:006c p:0 m:0 dnr:0 00:24:26.597 [2024-11-28 11:54:35.605765] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:36456 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:26.597 [2024-11-28 11:54:35.605776] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:37 cdw0:0 sqhd:006d p:0 m:0 dnr:0 00:24:26.597 [2024-11-28 11:54:35.605798] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:36464 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:26.597 [2024-11-28 11:54:35.605811] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:27 cdw0:0 sqhd:006e p:0 m:0 dnr:0 00:24:26.597 [2024-11-28 11:54:35.605838] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:36472 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:26.597 [2024-11-28 11:54:35.605852] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:56 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:24:26.597 [2024-11-28 11:54:35.605886] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:36480 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:26.597 [2024-11-28 11:54:35.605900] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:8 cdw0:0 sqhd:0070 p:0 m:0 dnr:0 00:24:26.597 [2024-11-28 11:54:35.605918] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:36488 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:26.597 [2024-11-28 11:54:35.605932] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:1 cdw0:0 sqhd:0071 p:0 m:0 dnr:0 00:24:26.597 [2024-11-28 11:54:35.605951] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:106 nsid:1 lba:36496 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:26.597 [2024-11-28 11:54:35.605964] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:106 cdw0:0 sqhd:0072 p:0 m:0 dnr:0 00:24:26.597 [2024-11-28 11:54:35.605982] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:36504 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:26.597 [2024-11-28 11:54:35.605996] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:53 cdw0:0 sqhd:0073 p:0 m:0 dnr:0 00:24:26.597 [2024-11-28 11:54:35.606016] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:85 nsid:1 lba:36512 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:26.597 [2024-11-28 11:54:35.606029] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:85 cdw0:0 sqhd:0074 p:0 m:0 dnr:0 00:24:26.597 [2024-11-28 11:54:35.606075] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:84 nsid:1 lba:36904 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:26.597 [2024-11-28 11:54:35.606094] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:26.597 [2024-11-28 11:54:35.606109] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:64 nsid:1 lba:36912 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:26.597 
[2024-11-28 11:54:35.606121] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:26.597 [2024-11-28 11:54:35.606135] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:36920 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:26.597 [2024-11-28 11:54:35.606147] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:26.597 [2024-11-28 11:54:35.606161] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:114 nsid:1 lba:36928 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:26.597 [2024-11-28 11:54:35.606173] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:26.597 [2024-11-28 11:54:35.606186] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:105 nsid:1 lba:36936 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:26.597 [2024-11-28 11:54:35.606198] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:26.597 [2024-11-28 11:54:35.606228] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:36944 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:26.597 [2024-11-28 11:54:35.606240] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:26.597 [2024-11-28 11:54:35.606261] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:120 nsid:1 lba:36952 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:26.597 [2024-11-28 11:54:35.606274] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:26.597 [2024-11-28 11:54:35.606286] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:36960 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:26.597 [2024-11-28 11:54:35.606299] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:26.597 [2024-11-28 11:54:35.606329] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:36968 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:26.597 [2024-11-28 11:54:35.606352] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:26.597 [2024-11-28 11:54:35.606371] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:67 nsid:1 lba:36976 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:26.597 [2024-11-28 11:54:35.606384] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:26.597 [2024-11-28 11:54:35.606397] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:36984 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:26.597 [2024-11-28 11:54:35.606409] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:26.597 [2024-11-28 11:54:35.606423] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:36992 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:26.597 [2024-11-28 11:54:35.606435] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:26.597 [2024-11-28 11:54:35.606449] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:122 nsid:1 lba:37000 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:26.598 [2024-11-28 11:54:35.606461] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:26.598 [2024-11-28 11:54:35.606474] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:37008 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:26.598 [2024-11-28 11:54:35.606487] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:26.598 [2024-11-28 11:54:35.606525] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:37016 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:26.598 [2024-11-28 11:54:35.606538] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:26.598 [2024-11-28 11:54:35.606551] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:37024 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:26.598 [2024-11-28 11:54:35.606565] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:26.598 [2024-11-28 11:54:35.606580] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:36520 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:26.598 [2024-11-28 11:54:35.606592] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:26.598 [2024-11-28 11:54:35.606606] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:36528 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:26.598 [2024-11-28 11:54:35.606618] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:26.598 [2024-11-28 11:54:35.606631] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:71 nsid:1 lba:36536 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:26.598 [2024-11-28 11:54:35.606644] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:26.598 [2024-11-28 11:54:35.606666] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:76 nsid:1 lba:36544 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:26.598 [2024-11-28 11:54:35.606679] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:26.598 [2024-11-28 11:54:35.606692] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:36552 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:26.598 [2024-11-28 11:54:35.606704] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:26.598 [2024-11-28 11:54:35.606718] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:36560 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:26.598 [2024-11-28 11:54:35.606730] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:26.598 [2024-11-28 11:54:35.606744] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:83 nsid:1 lba:36568 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:26.598 [2024-11-28 11:54:35.606756] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:26.598 [2024-11-28 11:54:35.606769] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:36576 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:26.598 [2024-11-28 11:54:35.606781] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:26.598 [2024-11-28 11:54:35.606795] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:98 nsid:1 lba:36584 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:26.598 [2024-11-28 11:54:35.606807] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:26.598 [2024-11-28 11:54:35.606821] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:99 nsid:1 lba:36592 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:26.598 [2024-11-28 11:54:35.606833] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:26.598 [2024-11-28 11:54:35.606846] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:36600 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:26.598 [2024-11-28 11:54:35.606858] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:26.598 [2024-11-28 11:54:35.606872] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:123 nsid:1 lba:36608 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:26.598 [2024-11-28 11:54:35.606884] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:26.598 [2024-11-28 11:54:35.606897] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:36616 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:26.598 [2024-11-28 11:54:35.606909] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:26.598 [2024-11-28 11:54:35.606923] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:36624 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:26.598 [2024-11-28 11:54:35.606934] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:26.598 [2024-11-28 11:54:35.606948] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:89 nsid:1 lba:36632 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:26.598 [2024-11-28 11:54:35.606960] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:26.598 [2024-11-28 11:54:35.606973] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:36640 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:26.598 [2024-11-28 11:54:35.606992] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 
cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:26.598 [2024-11-28 11:54:35.607008] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:92 nsid:1 lba:37032 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:26.598 [2024-11-28 11:54:35.607021] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:26.598 [2024-11-28 11:54:35.607034] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:93 nsid:1 lba:37040 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:26.598 [2024-11-28 11:54:35.607046] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:26.598 [2024-11-28 11:54:35.607060] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:86 nsid:1 lba:37048 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:26.598 [2024-11-28 11:54:35.607072] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:26.598 [2024-11-28 11:54:35.607085] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:88 nsid:1 lba:37056 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:26.598 [2024-11-28 11:54:35.607097] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:26.598 [2024-11-28 11:54:35.607110] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:118 nsid:1 lba:37064 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:26.598 [2024-11-28 11:54:35.607122] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:26.598 [2024-11-28 11:54:35.607135] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:65 nsid:1 lba:37072 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:26.598 [2024-11-28 11:54:35.607147] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:26.598 [2024-11-28 11:54:35.607161] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:37080 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:26.598 [2024-11-28 11:54:35.607173] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:26.598 [2024-11-28 11:54:35.607186] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:37088 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:26.598 [2024-11-28 11:54:35.607198] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:26.598 [2024-11-28 11:54:35.607212] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:37096 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:26.598 [2024-11-28 11:54:35.607224] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:26.598 [2024-11-28 11:54:35.607249] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:37104 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:26.598 [2024-11-28 11:54:35.607262] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:26.598 
[2024-11-28 11:54:35.607275] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:100 nsid:1 lba:37112 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:26.598 [2024-11-28 11:54:35.607287] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:26.598 [2024-11-28 11:54:35.607314] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:87 nsid:1 lba:37120 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:26.598 [2024-11-28 11:54:35.607328] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:26.598 [2024-11-28 11:54:35.607348] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:69 nsid:1 lba:37128 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:26.598 [2024-11-28 11:54:35.607361] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:26.598 [2024-11-28 11:54:35.607375] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:37136 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:26.598 [2024-11-28 11:54:35.607386] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:26.598 [2024-11-28 11:54:35.607399] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:117 nsid:1 lba:37144 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:26.598 [2024-11-28 11:54:35.607411] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:26.598 [2024-11-28 11:54:35.607424] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:103 nsid:1 lba:37152 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:26.598 [2024-11-28 11:54:35.607438] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:26.598 [2024-11-28 11:54:35.607451] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:109 nsid:1 lba:36648 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:26.598 [2024-11-28 11:54:35.607463] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:26.598 [2024-11-28 11:54:35.607476] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:97 nsid:1 lba:36656 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:26.598 [2024-11-28 11:54:35.607488] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:26.598 [2024-11-28 11:54:35.607501] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:36664 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:26.598 [2024-11-28 11:54:35.607514] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:26.598 [2024-11-28 11:54:35.607527] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:36672 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:26.598 [2024-11-28 11:54:35.607540] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:26.598 [2024-11-28 11:54:35.607554] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:36680 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:26.599 [2024-11-28 11:54:35.607565] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:26.599 [2024-11-28 11:54:35.607579] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:36688 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:26.599 [2024-11-28 11:54:35.607591] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:26.599 [2024-11-28 11:54:35.607604] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:36696 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:26.599 [2024-11-28 11:54:35.607616] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:26.599 [2024-11-28 11:54:35.607630] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:36704 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:26.599 [2024-11-28 11:54:35.607641] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:26.599 [2024-11-28 11:54:35.607655] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:36712 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:26.599 [2024-11-28 11:54:35.607671] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:26.599 [2024-11-28 11:54:35.607693] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:36720 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:26.599 [2024-11-28 11:54:35.607705] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:26.599 [2024-11-28 11:54:35.607719] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:90 nsid:1 lba:36728 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:26.599 [2024-11-28 11:54:35.607731] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:26.599 [2024-11-28 11:54:35.607744] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:91 nsid:1 lba:36736 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:26.599 [2024-11-28 11:54:35.607756] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:26.599 [2024-11-28 11:54:35.607769] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:79 nsid:1 lba:36744 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:26.599 [2024-11-28 11:54:35.607781] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:26.599 [2024-11-28 11:54:35.607794] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:36752 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:26.599 [2024-11-28 11:54:35.607806] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:26.599 [2024-11-28 11:54:35.607820] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: 
READ sqid:1 cid:126 nsid:1 lba:36760 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:26.599 [2024-11-28 11:54:35.607831] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:26.599 [2024-11-28 11:54:35.607845] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:119 nsid:1 lba:36768 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:26.599 [2024-11-28 11:54:35.607857] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:26.599 [2024-11-28 11:54:35.607871] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:37160 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:26.599 [2024-11-28 11:54:35.607883] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:26.599 [2024-11-28 11:54:35.607897] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:82 nsid:1 lba:37168 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:26.599 [2024-11-28 11:54:35.607909] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:26.599 [2024-11-28 11:54:35.607922] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:37176 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:26.599 [2024-11-28 11:54:35.607934] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:26.599 [2024-11-28 11:54:35.607947] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:112 nsid:1 lba:37184 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:26.599 [2024-11-28 11:54:35.607958] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:26.599 [2024-11-28 11:54:35.607972] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:78 nsid:1 lba:37192 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:26.599 [2024-11-28 11:54:35.607983] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:26.599 [2024-11-28 11:54:35.607997] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:37200 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:26.599 [2024-11-28 11:54:35.608014] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:26.599 [2024-11-28 11:54:35.608028] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:77 nsid:1 lba:37208 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:26.599 [2024-11-28 11:54:35.608041] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:26.599 [2024-11-28 11:54:35.608054] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:37216 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:26.599 [2024-11-28 11:54:35.608065] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:26.599 [2024-11-28 11:54:35.608078] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:107 nsid:1 lba:37224 len:8 SGL 
DATA BLOCK OFFSET 0x0 len:0x1000 00:24:26.599 [2024-11-28 11:54:35.608090] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:26.599 [2024-11-28 11:54:35.608109] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:115 nsid:1 lba:37232 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:26.599 [2024-11-28 11:54:35.608121] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:26.599 [2024-11-28 11:54:35.608135] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:80 nsid:1 lba:37240 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:26.599 [2024-11-28 11:54:35.608147] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:26.599 [2024-11-28 11:54:35.608160] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:70 nsid:1 lba:37248 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:26.599 [2024-11-28 11:54:35.608173] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:26.599 [2024-11-28 11:54:35.608186] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:37256 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:26.599 [2024-11-28 11:54:35.608198] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:26.599 [2024-11-28 11:54:35.608211] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:111 nsid:1 lba:37264 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:26.599 [2024-11-28 11:54:35.608224] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:26.599 [2024-11-28 11:54:35.608238] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:37272 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:26.599 [2024-11-28 11:54:35.608250] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:26.599 [2024-11-28 11:54:35.608263] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:37280 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:26.599 [2024-11-28 11:54:35.608276] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:26.599 [2024-11-28 11:54:35.608289] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:74 nsid:1 lba:36776 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:26.599 [2024-11-28 11:54:35.608313] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:26.599 [2024-11-28 11:54:35.608329] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:36784 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:26.599 [2024-11-28 11:54:35.608341] nvme_qpair.c: 474:spdk_nvme 11:54:56 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@120 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:24:26.599 _print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 
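The shell trace interleaved into the record above is from the teardown phase of the test: at 11:54:56, after the roughly 55-second verification run summarized further below, multipath.sh@120 calls rpc.py to remove the subsystem the initiator had been exercising. Run by hand, that step would look roughly like the sketch below; only the delete call itself appears in the trace, and the nvmf_get_subsystems sanity check is an added assumption:

RPC=/home/vagrant/spdk_repo/spdk/scripts/rpc.py   # path taken from the trace above

# Optional sanity check: confirm the subsystem is still exported before removing it.
"$RPC" nvmf_get_subsystems

# Delete the subsystem; any hosts still connected to it are dropped as part of removal.
"$RPC" nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1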
00:24:26.599 [2024-11-28 11:54:35.608361] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:36792 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:26.599 [2024-11-28 11:54:35.608374] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:26.599 [2024-11-28 11:54:35.608387] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:125 nsid:1 lba:36800 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:26.599 [2024-11-28 11:54:35.608399] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:26.599 [2024-11-28 11:54:35.608412] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:110 nsid:1 lba:36808 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:26.599 [2024-11-28 11:54:35.608424] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:26.599 [2024-11-28 11:54:35.608437] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:102 nsid:1 lba:36816 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:26.599 [2024-11-28 11:54:35.608449] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:26.599 [2024-11-28 11:54:35.608463] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:96 nsid:1 lba:36824 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:26.599 [2024-11-28 11:54:35.608475] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:26.600 [2024-11-28 11:54:35.608487] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12fa840 is same with the state(6) to be set 00:24:26.600 [2024-11-28 11:54:35.608503] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:24:26.600 [2024-11-28 11:54:35.608512] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:24:26.600 [2024-11-28 11:54:35.608522] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:36832 len:8 PRP1 0x0 PRP2 0x0 00:24:26.600 [2024-11-28 11:54:35.608539] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:26.600 [2024-11-28 11:54:35.608553] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:24:26.600 [2024-11-28 11:54:35.608562] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:24:26.600 [2024-11-28 11:54:35.608571] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:37288 len:8 PRP1 0x0 PRP2 0x0 00:24:26.600 [2024-11-28 11:54:35.608583] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:26.600 [2024-11-28 11:54:35.608595] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:24:26.600 [2024-11-28 11:54:35.608604] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:24:26.600 [2024-11-28 11:54:35.608613] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:37296 len:8 PRP1 0x0 PRP2 0x0 00:24:26.600 [2024-11-28 11:54:35.608625] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:26.600 [2024-11-28 11:54:35.608636] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:24:26.600 [2024-11-28 11:54:35.608645] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:24:26.600 [2024-11-28 11:54:35.608654] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:37304 len:8 PRP1 0x0 PRP2 0x0 00:24:26.600 [2024-11-28 11:54:35.608665] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:26.600 [2024-11-28 11:54:35.608678] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:24:26.600 [2024-11-28 11:54:35.608692] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:24:26.600 [2024-11-28 11:54:35.608702] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:37312 len:8 PRP1 0x0 PRP2 0x0 00:24:26.600 [2024-11-28 11:54:35.608715] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:26.600 [2024-11-28 11:54:35.608727] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:24:26.600 [2024-11-28 11:54:35.608736] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:24:26.600 [2024-11-28 11:54:35.608745] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:37320 len:8 PRP1 0x0 PRP2 0x0 00:24:26.600 [2024-11-28 11:54:35.608756] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:26.600 [2024-11-28 11:54:35.608768] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:24:26.600 [2024-11-28 11:54:35.608776] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:24:26.600 [2024-11-28 11:54:35.608786] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:37328 len:8 PRP1 0x0 PRP2 0x0 00:24:26.600 [2024-11-28 11:54:35.608797] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:26.600 [2024-11-28 11:54:35.608808] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:24:26.600 [2024-11-28 11:54:35.608817] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:24:26.600 [2024-11-28 11:54:35.608826] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:37336 len:8 PRP1 0x0 PRP2 0x0 00:24:26.600 [2024-11-28 11:54:35.608839] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:26.600 [2024-11-28 11:54:35.608851] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:24:26.600 [2024-11-28 11:54:35.608859] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:24:26.600 [2024-11-28 11:54:35.608868] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:37344 len:8 PRP1 0x0 PRP2 0x0 00:24:26.600 [2024-11-28 11:54:35.608881] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:26.600 [2024-11-28 11:54:35.608893] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:24:26.600 [2024-11-28 11:54:35.608901] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:24:26.600 [2024-11-28 11:54:35.608911] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:37352 len:8 PRP1 0x0 PRP2 0x0 00:24:26.600 [2024-11-28 11:54:35.608922] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:26.600 [2024-11-28 11:54:35.608934] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:24:26.600 [2024-11-28 11:54:35.608943] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:24:26.600 [2024-11-28 11:54:35.608952] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:37360 len:8 PRP1 0x0 PRP2 0x0 00:24:26.600 [2024-11-28 11:54:35.608963] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:26.600 [2024-11-28 11:54:35.608975] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:24:26.600 [2024-11-28 11:54:35.608983] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:24:26.600 [2024-11-28 11:54:35.608993] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:37368 len:8 PRP1 0x0 PRP2 0x0 00:24:26.600 [2024-11-28 11:54:35.609004] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:26.600 [2024-11-28 11:54:35.609029] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:24:26.600 [2024-11-28 11:54:35.609038] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:24:26.600 [2024-11-28 11:54:35.609048] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:37376 len:8 PRP1 0x0 PRP2 0x0 00:24:26.600 [2024-11-28 11:54:35.609060] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:26.600 [2024-11-28 11:54:35.609072] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:24:26.600 [2024-11-28 11:54:35.609081] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:24:26.600 [2024-11-28 11:54:35.609089] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:37384 len:8 PRP1 0x0 PRP2 0x0 00:24:26.600 [2024-11-28 11:54:35.609101] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:26.600 [2024-11-28 11:54:35.609113] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:24:26.600 [2024-11-28 11:54:35.609121] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:24:26.600 [2024-11-28 11:54:35.609130] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:37392 len:8 PRP1 0x0 PRP2 0x0 00:24:26.600 [2024-11-28 11:54:35.609142] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 
sqhd:0000 p:0 m:0 dnr:0 00:24:26.600 [2024-11-28 11:54:35.609153] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:24:26.600 [2024-11-28 11:54:35.609162] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:24:26.600 [2024-11-28 11:54:35.609171] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:37400 len:8 PRP1 0x0 PRP2 0x0 00:24:26.600 [2024-11-28 11:54:35.609183] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:26.600 [2024-11-28 11:54:35.609194] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:24:26.600 [2024-11-28 11:54:35.609203] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:24:26.600 [2024-11-28 11:54:35.609212] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:37408 len:8 PRP1 0x0 PRP2 0x0 00:24:26.600 [2024-11-28 11:54:35.609224] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:26.600 [2024-11-28 11:54:35.610315] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:24:26.600 [2024-11-28 11:54:35.610396] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:ffffffff cdw10:0014000c cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:26.600 [2024-11-28 11:54:35.610417] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:26.600 [2024-11-28 11:54:35.610445] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x12ebd50 (9): Bad file descriptor 00:24:26.600 [2024-11-28 11:54:35.610904] uring.c: 664:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:24:26.600 [2024-11-28 11:54:35.610937] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12ebd50 with addr=10.0.0.3, port=4421 00:24:26.600 [2024-11-28 11:54:35.610953] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12ebd50 is same with the state(6) to be set 00:24:26.600 [2024-11-28 11:54:35.611022] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x12ebd50 (9): Bad file descriptor 00:24:26.600 [2024-11-28 11:54:35.611057] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:24:26.600 [2024-11-28 11:54:35.611072] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:24:26.600 [2024-11-28 11:54:35.611101] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:24:26.600 [2024-11-28 11:54:35.611115] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
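The sequence just above is a failed failover attempt at 11:54:35: the host disconnects from nqn.2016-06.io.spdk:cnode1, tries to reconnect to 10.0.0.3 port 4421, and connect() fails with errno 111 (ECONNREFUSED, typically meaning nothing was accepting on that listener at that moment), so the controller reset is reported as failed and retried; a later retry, logged further below at 11:54:45, succeeds. A small, illustrative way to watch that listener from the initiator side while the driver retries (plain bash, not part of the test scripts; the address and port are taken from the records above):

# Poll the target listener the host is failing over to; loop until it accepts.
until timeout 1 bash -c 'exec 3<>/dev/tcp/10.0.0.3/4421 && exec 3>&-' 2>/dev/null; do
    echo "$(date +%T) 10.0.0.3:4421 still refusing connections"
    sleep 1
done
echo "listener is accepting again; the next controller reset should reconnect"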
00:24:26.600 [2024-11-28 11:54:35.611129] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:24:26.600 7131.28 IOPS, 27.86 MiB/s [2024-11-28T11:54:56.726Z] 7171.41 IOPS, 28.01 MiB/s [2024-11-28T11:54:56.726Z] 7213.63 IOPS, 28.18 MiB/s [2024-11-28T11:54:56.726Z] 7255.95 IOPS, 28.34 MiB/s [2024-11-28T11:54:56.726Z] 7295.35 IOPS, 28.50 MiB/s [2024-11-28T11:54:56.726Z] 7333.22 IOPS, 28.65 MiB/s [2024-11-28T11:54:56.726Z] 7369.29 IOPS, 28.79 MiB/s [2024-11-28T11:54:56.726Z] 7400.33 IOPS, 28.91 MiB/s [2024-11-28T11:54:56.726Z] 7431.77 IOPS, 29.03 MiB/s [2024-11-28T11:54:56.726Z] 7463.24 IOPS, 29.15 MiB/s [2024-11-28T11:54:56.726Z] [2024-11-28 11:54:45.663516] bdev_nvme.c:2282:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 3] Resetting controller successful. 00:24:26.600 7497.15 IOPS, 29.29 MiB/s [2024-11-28T11:54:56.726Z] 7535.43 IOPS, 29.44 MiB/s [2024-11-28T11:54:56.726Z] 7572.44 IOPS, 29.58 MiB/s [2024-11-28T11:54:56.726Z] 7605.98 IOPS, 29.71 MiB/s [2024-11-28T11:54:56.726Z] 7633.22 IOPS, 29.82 MiB/s [2024-11-28T11:54:56.726Z] 7662.69 IOPS, 29.93 MiB/s [2024-11-28T11:54:56.726Z] 7691.02 IOPS, 30.04 MiB/s [2024-11-28T11:54:56.726Z] 7720.85 IOPS, 30.16 MiB/s [2024-11-28T11:54:56.727Z] 7747.94 IOPS, 30.27 MiB/s [2024-11-28T11:54:56.727Z] 7774.64 IOPS, 30.37 MiB/s [2024-11-28T11:54:56.727Z] Received shutdown signal, test time was about 55.491746 seconds 00:24:26.601 00:24:26.601 Latency(us) 00:24:26.601 [2024-11-28T11:54:56.727Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:24:26.601 Job: Nvme0n1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096) 00:24:26.601 Verification LBA range: start 0x0 length 0x4000 00:24:26.601 Nvme0n1 : 55.49 7784.85 30.41 0.00 0.00 16409.02 1087.30 7046430.72 00:24:26.601 [2024-11-28T11:54:56.727Z] =================================================================================================================== 00:24:26.601 [2024-11-28T11:54:56.727Z] Total : 7784.85 30.41 0.00 0.00 16409.02 1087.30 7046430.72 00:24:26.601 11:54:56 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@122 -- # trap - SIGINT SIGTERM EXIT 00:24:26.601 11:54:56 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@124 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/host/try.txt 00:24:26.601 11:54:56 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@125 -- # nvmftestfini 00:24:26.601 11:54:56 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@516 -- # nvmfcleanup 00:24:26.601 11:54:56 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@121 -- # sync 00:24:26.601 11:54:56 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:24:26.601 11:54:56 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@124 -- # set +e 00:24:26.601 11:54:56 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@125 -- # for i in {1..20} 00:24:26.601 11:54:56 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:24:26.601 rmmod nvme_tcp 00:24:26.601 rmmod nvme_fabrics 00:24:26.601 rmmod nvme_keyring 00:24:26.601 11:54:56 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:24:26.601 11:54:56 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@128 -- # set -e 00:24:26.601 11:54:56 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@129 -- # return 0 00:24:26.601 11:54:56 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@517 -- # 
'[' -n 97734 ']' 00:24:26.601 11:54:56 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@518 -- # killprocess 97734 00:24:26.601 11:54:56 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@954 -- # '[' -z 97734 ']' 00:24:26.601 11:54:56 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@958 -- # kill -0 97734 00:24:26.601 11:54:56 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@959 -- # uname 00:24:26.601 11:54:56 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:24:26.601 11:54:56 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 97734 00:24:26.601 11:54:56 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:24:26.601 11:54:56 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:24:26.601 killing process with pid 97734 00:24:26.601 11:54:56 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@972 -- # echo 'killing process with pid 97734' 00:24:26.601 11:54:56 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@973 -- # kill 97734 00:24:26.601 11:54:56 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@978 -- # wait 97734 00:24:26.860 11:54:56 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:24:26.860 11:54:56 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:24:26.860 11:54:56 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:24:26.860 11:54:56 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@297 -- # iptr 00:24:26.860 11:54:56 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:24:26.860 11:54:56 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@791 -- # iptables-save 00:24:26.860 11:54:56 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@791 -- # iptables-restore 00:24:26.861 11:54:56 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@298 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:24:26.861 11:54:56 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@299 -- # nvmf_veth_fini 00:24:26.861 11:54:56 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@233 -- # ip link set nvmf_init_br nomaster 00:24:26.861 11:54:56 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@234 -- # ip link set nvmf_init_br2 nomaster 00:24:26.861 11:54:56 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@235 -- # ip link set nvmf_tgt_br nomaster 00:24:26.861 11:54:56 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@236 -- # ip link set nvmf_tgt_br2 nomaster 00:24:26.861 11:54:56 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@237 -- # ip link set nvmf_init_br down 00:24:26.861 11:54:56 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@238 -- # ip link set nvmf_init_br2 down 00:24:26.861 11:54:56 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@239 -- # ip link set nvmf_tgt_br down 00:24:26.861 11:54:56 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@240 -- # ip link set nvmf_tgt_br2 down 00:24:26.861 11:54:56 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@241 -- # ip link delete nvmf_br type bridge 00:24:26.861 11:54:56 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@242 -- # ip link delete nvmf_init_if 00:24:26.861 11:54:56 
nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@243 -- # ip link delete nvmf_init_if2 00:24:26.861 11:54:56 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@244 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:24:26.861 11:54:56 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@245 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:24:26.861 11:54:56 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@246 -- # remove_spdk_ns 00:24:26.861 11:54:56 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:24:26.861 11:54:56 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:24:26.861 11:54:56 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:24:27.120 11:54:57 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@300 -- # return 0 00:24:27.120 00:24:27.120 real 1m1.238s 00:24:27.120 user 2m49.560s 00:24:27.120 sys 0m18.085s 00:24:27.120 11:54:57 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@1130 -- # xtrace_disable 00:24:27.120 11:54:57 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@10 -- # set +x 00:24:27.120 ************************************ 00:24:27.120 END TEST nvmf_host_multipath 00:24:27.120 ************************************ 00:24:27.120 11:54:57 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@43 -- # run_test nvmf_timeout /home/vagrant/spdk_repo/spdk/test/nvmf/host/timeout.sh --transport=tcp 00:24:27.120 11:54:57 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:24:27.120 11:54:57 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1111 -- # xtrace_disable 00:24:27.120 11:54:57 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:24:27.120 ************************************ 00:24:27.120 START TEST nvmf_timeout 00:24:27.120 ************************************ 00:24:27.120 11:54:57 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/host/timeout.sh --transport=tcp 00:24:27.120 * Looking for test storage... 
00:24:27.120 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/host 00:24:27.120 11:54:57 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:24:27.120 11:54:57 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@1693 -- # lcov --version 00:24:27.120 11:54:57 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:24:27.382 11:54:57 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:24:27.382 11:54:57 nvmf_tcp.nvmf_host.nvmf_timeout -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:24:27.382 11:54:57 nvmf_tcp.nvmf_host.nvmf_timeout -- scripts/common.sh@333 -- # local ver1 ver1_l 00:24:27.382 11:54:57 nvmf_tcp.nvmf_host.nvmf_timeout -- scripts/common.sh@334 -- # local ver2 ver2_l 00:24:27.382 11:54:57 nvmf_tcp.nvmf_host.nvmf_timeout -- scripts/common.sh@336 -- # IFS=.-: 00:24:27.382 11:54:57 nvmf_tcp.nvmf_host.nvmf_timeout -- scripts/common.sh@336 -- # read -ra ver1 00:24:27.382 11:54:57 nvmf_tcp.nvmf_host.nvmf_timeout -- scripts/common.sh@337 -- # IFS=.-: 00:24:27.382 11:54:57 nvmf_tcp.nvmf_host.nvmf_timeout -- scripts/common.sh@337 -- # read -ra ver2 00:24:27.382 11:54:57 nvmf_tcp.nvmf_host.nvmf_timeout -- scripts/common.sh@338 -- # local 'op=<' 00:24:27.382 11:54:57 nvmf_tcp.nvmf_host.nvmf_timeout -- scripts/common.sh@340 -- # ver1_l=2 00:24:27.382 11:54:57 nvmf_tcp.nvmf_host.nvmf_timeout -- scripts/common.sh@341 -- # ver2_l=1 00:24:27.382 11:54:57 nvmf_tcp.nvmf_host.nvmf_timeout -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:24:27.382 11:54:57 nvmf_tcp.nvmf_host.nvmf_timeout -- scripts/common.sh@344 -- # case "$op" in 00:24:27.382 11:54:57 nvmf_tcp.nvmf_host.nvmf_timeout -- scripts/common.sh@345 -- # : 1 00:24:27.382 11:54:57 nvmf_tcp.nvmf_host.nvmf_timeout -- scripts/common.sh@364 -- # (( v = 0 )) 00:24:27.382 11:54:57 nvmf_tcp.nvmf_host.nvmf_timeout -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:24:27.382 11:54:57 nvmf_tcp.nvmf_host.nvmf_timeout -- scripts/common.sh@365 -- # decimal 1 00:24:27.382 11:54:57 nvmf_tcp.nvmf_host.nvmf_timeout -- scripts/common.sh@353 -- # local d=1 00:24:27.382 11:54:57 nvmf_tcp.nvmf_host.nvmf_timeout -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:24:27.382 11:54:57 nvmf_tcp.nvmf_host.nvmf_timeout -- scripts/common.sh@355 -- # echo 1 00:24:27.382 11:54:57 nvmf_tcp.nvmf_host.nvmf_timeout -- scripts/common.sh@365 -- # ver1[v]=1 00:24:27.382 11:54:57 nvmf_tcp.nvmf_host.nvmf_timeout -- scripts/common.sh@366 -- # decimal 2 00:24:27.382 11:54:57 nvmf_tcp.nvmf_host.nvmf_timeout -- scripts/common.sh@353 -- # local d=2 00:24:27.382 11:54:57 nvmf_tcp.nvmf_host.nvmf_timeout -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:24:27.382 11:54:57 nvmf_tcp.nvmf_host.nvmf_timeout -- scripts/common.sh@355 -- # echo 2 00:24:27.382 11:54:57 nvmf_tcp.nvmf_host.nvmf_timeout -- scripts/common.sh@366 -- # ver2[v]=2 00:24:27.382 11:54:57 nvmf_tcp.nvmf_host.nvmf_timeout -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:24:27.382 11:54:57 nvmf_tcp.nvmf_host.nvmf_timeout -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:24:27.382 11:54:57 nvmf_tcp.nvmf_host.nvmf_timeout -- scripts/common.sh@368 -- # return 0 00:24:27.382 11:54:57 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:24:27.382 11:54:57 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:24:27.382 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:24:27.382 --rc genhtml_branch_coverage=1 00:24:27.382 --rc genhtml_function_coverage=1 00:24:27.382 --rc genhtml_legend=1 00:24:27.382 --rc geninfo_all_blocks=1 00:24:27.382 --rc geninfo_unexecuted_blocks=1 00:24:27.382 00:24:27.382 ' 00:24:27.382 11:54:57 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:24:27.382 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:24:27.382 --rc genhtml_branch_coverage=1 00:24:27.382 --rc genhtml_function_coverage=1 00:24:27.382 --rc genhtml_legend=1 00:24:27.382 --rc geninfo_all_blocks=1 00:24:27.382 --rc geninfo_unexecuted_blocks=1 00:24:27.382 00:24:27.382 ' 00:24:27.382 11:54:57 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:24:27.382 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:24:27.382 --rc genhtml_branch_coverage=1 00:24:27.382 --rc genhtml_function_coverage=1 00:24:27.382 --rc genhtml_legend=1 00:24:27.382 --rc geninfo_all_blocks=1 00:24:27.382 --rc geninfo_unexecuted_blocks=1 00:24:27.382 00:24:27.382 ' 00:24:27.382 11:54:57 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:24:27.382 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:24:27.382 --rc genhtml_branch_coverage=1 00:24:27.382 --rc genhtml_function_coverage=1 00:24:27.382 --rc genhtml_legend=1 00:24:27.382 --rc geninfo_all_blocks=1 00:24:27.382 --rc geninfo_unexecuted_blocks=1 00:24:27.382 00:24:27.382 ' 00:24:27.382 11:54:57 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:24:27.382 11:54:57 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@7 -- # uname -s 00:24:27.382 11:54:57 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:24:27.382 11:54:57 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:24:27.382 
11:54:57 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:24:27.382 11:54:57 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:24:27.382 11:54:57 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:24:27.382 11:54:57 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:24:27.382 11:54:57 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:24:27.382 11:54:57 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:24:27.382 11:54:57 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:24:27.382 11:54:57 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:24:27.382 11:54:57 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:f820f793-c892-4aa4-a8a4-5ed3fda41d6c 00:24:27.382 11:54:57 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@18 -- # NVME_HOSTID=f820f793-c892-4aa4-a8a4-5ed3fda41d6c 00:24:27.382 11:54:57 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:24:27.382 11:54:57 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:24:27.382 11:54:57 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:24:27.382 11:54:57 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:24:27.383 11:54:57 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:24:27.383 11:54:57 nvmf_tcp.nvmf_host.nvmf_timeout -- scripts/common.sh@15 -- # shopt -s extglob 00:24:27.383 11:54:57 nvmf_tcp.nvmf_host.nvmf_timeout -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:24:27.383 11:54:57 nvmf_tcp.nvmf_host.nvmf_timeout -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:24:27.383 11:54:57 nvmf_tcp.nvmf_host.nvmf_timeout -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:24:27.383 11:54:57 nvmf_tcp.nvmf_host.nvmf_timeout -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:27.383 11:54:57 nvmf_tcp.nvmf_host.nvmf_timeout -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:27.383 11:54:57 nvmf_tcp.nvmf_host.nvmf_timeout -- 
paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:27.383 11:54:57 nvmf_tcp.nvmf_host.nvmf_timeout -- paths/export.sh@5 -- # export PATH 00:24:27.383 11:54:57 nvmf_tcp.nvmf_host.nvmf_timeout -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:27.383 11:54:57 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@51 -- # : 0 00:24:27.383 11:54:57 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:24:27.383 11:54:57 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:24:27.383 11:54:57 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:24:27.383 11:54:57 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:24:27.383 11:54:57 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:24:27.383 11:54:57 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:24:27.383 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:24:27.383 11:54:57 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:24:27.383 11:54:57 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:24:27.383 11:54:57 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@55 -- # have_pci_nics=0 00:24:27.383 11:54:57 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@11 -- # MALLOC_BDEV_SIZE=64 00:24:27.383 11:54:57 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:24:27.383 11:54:57 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@14 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:24:27.383 11:54:57 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@15 -- # bpf_sh=/home/vagrant/spdk_repo/spdk/scripts/bpftrace.sh 00:24:27.383 11:54:57 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@17 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:24:27.383 11:54:57 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@19 -- # nvmftestinit 00:24:27.383 11:54:57 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:24:27.383 11:54:57 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:24:27.383 11:54:57 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@476 -- # prepare_net_devs 00:24:27.383 11:54:57 
nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@438 -- # local -g is_hw=no 00:24:27.383 11:54:57 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@440 -- # remove_spdk_ns 00:24:27.383 11:54:57 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:24:27.383 11:54:57 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:24:27.383 11:54:57 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:24:27.383 11:54:57 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@442 -- # [[ virt != virt ]] 00:24:27.383 11:54:57 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@444 -- # [[ no == yes ]] 00:24:27.383 11:54:57 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@451 -- # [[ virt == phy ]] 00:24:27.383 11:54:57 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@454 -- # [[ virt == phy-fallback ]] 00:24:27.383 11:54:57 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@459 -- # [[ tcp == tcp ]] 00:24:27.383 11:54:57 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@460 -- # nvmf_veth_init 00:24:27.383 11:54:57 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@145 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:24:27.383 11:54:57 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@146 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.2 00:24:27.383 11:54:57 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@147 -- # NVMF_FIRST_TARGET_IP=10.0.0.3 00:24:27.383 11:54:57 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@148 -- # NVMF_SECOND_TARGET_IP=10.0.0.4 00:24:27.383 11:54:57 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@149 -- # NVMF_INITIATOR_IP=10.0.0.1 00:24:27.383 11:54:57 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@150 -- # NVMF_BRIDGE=nvmf_br 00:24:27.383 11:54:57 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@151 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:24:27.383 11:54:57 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@152 -- # NVMF_INITIATOR_INTERFACE2=nvmf_init_if2 00:24:27.383 11:54:57 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@153 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:24:27.383 11:54:57 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@154 -- # NVMF_INITIATOR_BRIDGE2=nvmf_init_br2 00:24:27.383 11:54:57 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@155 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:24:27.383 11:54:57 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@156 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:24:27.383 11:54:57 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@157 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:24:27.383 11:54:57 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@158 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:24:27.383 11:54:57 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@159 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:24:27.383 11:54:57 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@160 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:24:27.383 11:54:57 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@162 -- # ip link set nvmf_init_br nomaster 00:24:27.383 Cannot find device "nvmf_init_br" 00:24:27.383 11:54:57 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@162 -- # true 00:24:27.383 11:54:57 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@163 -- # ip link set nvmf_init_br2 nomaster 00:24:27.383 Cannot find device "nvmf_init_br2" 00:24:27.383 11:54:57 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@163 -- # true 00:24:27.383 11:54:57 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@164 
-- # ip link set nvmf_tgt_br nomaster 00:24:27.383 Cannot find device "nvmf_tgt_br" 00:24:27.383 11:54:57 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@164 -- # true 00:24:27.383 11:54:57 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@165 -- # ip link set nvmf_tgt_br2 nomaster 00:24:27.383 Cannot find device "nvmf_tgt_br2" 00:24:27.383 11:54:57 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@165 -- # true 00:24:27.383 11:54:57 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@166 -- # ip link set nvmf_init_br down 00:24:27.383 Cannot find device "nvmf_init_br" 00:24:27.383 11:54:57 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@166 -- # true 00:24:27.383 11:54:57 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@167 -- # ip link set nvmf_init_br2 down 00:24:27.383 Cannot find device "nvmf_init_br2" 00:24:27.383 11:54:57 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@167 -- # true 00:24:27.383 11:54:57 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@168 -- # ip link set nvmf_tgt_br down 00:24:27.383 Cannot find device "nvmf_tgt_br" 00:24:27.383 11:54:57 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@168 -- # true 00:24:27.383 11:54:57 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@169 -- # ip link set nvmf_tgt_br2 down 00:24:27.383 Cannot find device "nvmf_tgt_br2" 00:24:27.383 11:54:57 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@169 -- # true 00:24:27.383 11:54:57 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@170 -- # ip link delete nvmf_br type bridge 00:24:27.383 Cannot find device "nvmf_br" 00:24:27.383 11:54:57 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@170 -- # true 00:24:27.383 11:54:57 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@171 -- # ip link delete nvmf_init_if 00:24:27.383 Cannot find device "nvmf_init_if" 00:24:27.383 11:54:57 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@171 -- # true 00:24:27.383 11:54:57 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@172 -- # ip link delete nvmf_init_if2 00:24:27.383 Cannot find device "nvmf_init_if2" 00:24:27.383 11:54:57 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@172 -- # true 00:24:27.383 11:54:57 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@173 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:24:27.383 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:24:27.383 11:54:57 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@173 -- # true 00:24:27.383 11:54:57 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@174 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:24:27.383 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:24:27.383 11:54:57 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@174 -- # true 00:24:27.383 11:54:57 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@177 -- # ip netns add nvmf_tgt_ns_spdk 00:24:27.383 11:54:57 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@180 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:24:27.383 11:54:57 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@181 -- # ip link add nvmf_init_if2 type veth peer name nvmf_init_br2 00:24:27.383 11:54:57 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@182 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:24:27.383 11:54:57 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@183 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:24:27.383 11:54:57 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 
00:24:27.383 11:54:57 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@187 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:24:27.670 11:54:57 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@190 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:24:27.670 11:54:57 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@191 -- # ip addr add 10.0.0.2/24 dev nvmf_init_if2 00:24:27.670 11:54:57 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@192 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if 00:24:27.670 11:54:57 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@193 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2 00:24:27.670 11:54:57 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@196 -- # ip link set nvmf_init_if up 00:24:27.670 11:54:57 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@197 -- # ip link set nvmf_init_if2 up 00:24:27.670 11:54:57 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@198 -- # ip link set nvmf_init_br up 00:24:27.670 11:54:57 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@199 -- # ip link set nvmf_init_br2 up 00:24:27.670 11:54:57 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@200 -- # ip link set nvmf_tgt_br up 00:24:27.670 11:54:57 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@201 -- # ip link set nvmf_tgt_br2 up 00:24:27.670 11:54:57 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@202 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:24:27.670 11:54:57 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@203 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:24:27.670 11:54:57 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@204 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:24:27.670 11:54:57 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@207 -- # ip link add nvmf_br type bridge 00:24:27.670 11:54:57 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@208 -- # ip link set nvmf_br up 00:24:27.670 11:54:57 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@211 -- # ip link set nvmf_init_br master nvmf_br 00:24:27.670 11:54:57 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@212 -- # ip link set nvmf_init_br2 master nvmf_br 00:24:27.670 11:54:57 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@213 -- # ip link set nvmf_tgt_br master nvmf_br 00:24:27.670 11:54:57 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@214 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:24:27.670 11:54:57 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@217 -- # ipts -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:24:27.670 11:54:57 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT' 00:24:27.670 11:54:57 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@218 -- # ipts -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT 00:24:27.670 11:54:57 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT' 00:24:27.670 11:54:57 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@219 -- # ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:24:27.670 11:54:57 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@790 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT' 
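For anyone reading the nvmf_veth_init trace above, the topology it builds reduces to the following hand-copied ip/iptables sequence (every interface name, the 10.0.0.0/24 addresses and TCP port 4420 are the values this particular run uses, lifted from the xtrace lines, not fixed constants):

    # target namespace plus four veth pairs; the *_if ends are endpoints, the *_br ends get bridged
    ip netns add nvmf_tgt_ns_spdk
    ip link add nvmf_init_if  type veth peer name nvmf_init_br
    ip link add nvmf_init_if2 type veth peer name nvmf_init_br2
    ip link add nvmf_tgt_if   type veth peer name nvmf_tgt_br
    ip link add nvmf_tgt_if2  type veth peer name nvmf_tgt_br2

    # target-side endpoints move into the namespace; the initiator endpoints stay in the root namespace
    ip link set nvmf_tgt_if  netns nvmf_tgt_ns_spdk
    ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk
    ip addr add 10.0.0.1/24 dev nvmf_init_if
    ip addr add 10.0.0.2/24 dev nvmf_init_if2
    ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if
    ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2

    # bring everything up and tie the root-namespace peers together with a bridge
    for dev in nvmf_init_if nvmf_init_if2 nvmf_init_br nvmf_init_br2 nvmf_tgt_br nvmf_tgt_br2; do ip link set "$dev" up; done
    ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
    ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up
    ip netns exec nvmf_tgt_ns_spdk ip link set lo up
    ip link add nvmf_br type bridge && ip link set nvmf_br up
    for dev in nvmf_init_br nvmf_init_br2 nvmf_tgt_br nvmf_tgt_br2; do ip link set "$dev" master nvmf_br; done

    # open the NVMe/TCP port from the initiator interfaces (the script also tags each rule with an SPDK_NVMF comment)
    iptables -I INPUT 1 -i nvmf_init_if  -p tcp --dport 4420 -j ACCEPT
    iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT
    iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT

The four pings that follow (10.0.0.3/10.0.0.4 from the root namespace, 10.0.0.1/10.0.0.2 from inside nvmf_tgt_ns_spdk) are only the sanity check that the bridge forwards in both directions before the target application is started.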
00:24:27.670 11:54:57 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@222 -- # ping -c 1 10.0.0.3 00:24:27.670 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:24:27.670 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.064 ms 00:24:27.670 00:24:27.670 --- 10.0.0.3 ping statistics --- 00:24:27.670 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:24:27.670 rtt min/avg/max/mdev = 0.064/0.064/0.064/0.000 ms 00:24:27.670 11:54:57 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@223 -- # ping -c 1 10.0.0.4 00:24:27.670 PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data. 00:24:27.670 64 bytes from 10.0.0.4: icmp_seq=1 ttl=64 time=0.063 ms 00:24:27.670 00:24:27.670 --- 10.0.0.4 ping statistics --- 00:24:27.670 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:24:27.670 rtt min/avg/max/mdev = 0.063/0.063/0.063/0.000 ms 00:24:27.670 11:54:57 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@224 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:24:27.670 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:24:27.670 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.037 ms 00:24:27.670 00:24:27.670 --- 10.0.0.1 ping statistics --- 00:24:27.670 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:24:27.670 rtt min/avg/max/mdev = 0.037/0.037/0.037/0.000 ms 00:24:27.670 11:54:57 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@225 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.2 00:24:27.670 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:24:27.670 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.056 ms 00:24:27.670 00:24:27.670 --- 10.0.0.2 ping statistics --- 00:24:27.670 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:24:27.670 rtt min/avg/max/mdev = 0.056/0.056/0.056/0.000 ms 00:24:27.670 11:54:57 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@227 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:24:27.670 11:54:57 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@461 -- # return 0 00:24:27.670 11:54:57 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:24:27.670 11:54:57 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:24:27.670 11:54:57 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:24:27.670 11:54:57 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:24:27.670 11:54:57 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:24:27.670 11:54:57 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:24:27.670 11:54:57 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:24:27.670 11:54:57 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@21 -- # nvmfappstart -m 0x3 00:24:27.670 11:54:57 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:24:27.670 11:54:57 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@726 -- # xtrace_disable 00:24:27.670 11:54:57 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@10 -- # set +x 00:24:27.670 11:54:57 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@509 -- # nvmfpid=98948 00:24:27.670 11:54:57 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@508 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x3 00:24:27.670 11:54:57 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@510 -- # waitforlisten 98948 00:24:27.670 11:54:57 
nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@835 -- # '[' -z 98948 ']' 00:24:27.670 11:54:57 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:24:27.670 11:54:57 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@840 -- # local max_retries=100 00:24:27.670 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:24:27.670 11:54:57 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:24:27.670 11:54:57 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@844 -- # xtrace_disable 00:24:27.670 11:54:57 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@10 -- # set +x 00:24:27.946 [2024-11-28 11:54:57.782412] Starting SPDK v25.01-pre git sha1 35cd3e84d / DPDK 24.11.0-rc4 initialization... 00:24:27.946 [2024-11-28 11:54:57.782538] [ DPDK EAL parameters: nvmf -c 0x3 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:24:27.946 [2024-11-28 11:54:57.911586] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.11.0-rc4 is used. There is no support for it in SPDK. Enabled only for validation. 00:24:27.946 [2024-11-28 11:54:57.928889] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:24:27.946 [2024-11-28 11:54:57.980243] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:24:27.946 [2024-11-28 11:54:57.980582] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:24:27.946 [2024-11-28 11:54:57.980713] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:24:27.946 [2024-11-28 11:54:57.980768] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:24:27.946 [2024-11-28 11:54:57.980871] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:24:27.946 [2024-11-28 11:54:57.982204] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:24:27.946 [2024-11-28 11:54:57.982199] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:24:27.946 [2024-11-28 11:54:58.052952] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:24:28.893 11:54:58 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:24:28.893 11:54:58 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@868 -- # return 0 00:24:28.893 11:54:58 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:24:28.893 11:54:58 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@732 -- # xtrace_disable 00:24:28.893 11:54:58 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@10 -- # set +x 00:24:28.893 11:54:58 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:24:28.893 11:54:58 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@23 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; killprocess $bdevperf_pid || :; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:24:28.893 11:54:58 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@25 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:24:29.152 [2024-11-28 11:54:59.046786] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:24:29.152 11:54:59 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@26 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0 00:24:29.411 Malloc0 00:24:29.411 11:54:59 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@27 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:24:29.670 11:54:59 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:24:29.928 11:54:59 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@29 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 00:24:30.187 [2024-11-28 11:55:00.058620] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:24:30.187 11:55:00 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@32 -- # bdevperf_pid=99003 00:24:30.187 11:55:00 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@31 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 -f 00:24:30.187 11:55:00 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@34 -- # waitforlisten 99003 /var/tmp/bdevperf.sock 00:24:30.187 11:55:00 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@835 -- # '[' -z 99003 ']' 00:24:30.187 11:55:00 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:24:30.187 11:55:00 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@840 -- # local max_retries=100 00:24:30.187 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:24:30.187 11:55:00 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 
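Condensed from the rpc.py calls in the trace, the target half of the timeout test amounts to the sequence below (socket paths, sizes and the subsystem NQN are the ones this run uses; the trailing `&` is an assumption about how the script backgrounds bdevperf, which the log only shows indirectly via waitforlisten on /var/tmp/bdevperf.sock):

    # one TCP transport, a 64 MB malloc bdev with 512-byte blocks, and a subsystem exposing it on 10.0.0.3:4420
    scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192
    scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0
    scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
    scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
    scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420

    # bdevperf plays the host role: 128-deep 4 KiB verify workload for 10 s, started idle (-z) on its own RPC socket
    build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 -f &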
00:24:30.187 11:55:00 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@844 -- # xtrace_disable 00:24:30.187 11:55:00 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@10 -- # set +x 00:24:30.187 [2024-11-28 11:55:00.138364] Starting SPDK v25.01-pre git sha1 35cd3e84d / DPDK 24.11.0-rc4 initialization... 00:24:30.187 [2024-11-28 11:55:00.138453] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid99003 ] 00:24:30.187 [2024-11-28 11:55:00.265502] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.11.0-rc4 is used. There is no support for it in SPDK. Enabled only for validation. 00:24:30.187 [2024-11-28 11:55:00.292169] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:24:30.446 [2024-11-28 11:55:00.331879] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:24:30.446 [2024-11-28 11:55:00.399897] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:24:31.014 11:55:01 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:24:31.014 11:55:01 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@868 -- # return 0 00:24:31.014 11:55:01 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_set_options -r -1 00:24:31.272 11:55:01 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 --ctrlr-loss-timeout-sec 5 --reconnect-delay-sec 2 00:24:31.530 NVMe0n1 00:24:31.530 11:55:01 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@51 -- # rpc_pid=99021 00:24:31.530 11:55:01 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@50 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:24:31.530 11:55:01 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@53 -- # sleep 1 00:24:31.789 Running I/O for 10 seconds... 
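Against that idle bdevperf process, host/timeout.sh then wires up the NVMe bdev layer; the commands below are reproduced from the trace, with the two knobs the timeout test actually exercises spelled out (the `&` on perform_tests is again an assumption, since the log only shows the call plus a captured rpc_pid):

    # bdev_nvme options exactly as captured above, then attach cnode1 with a 2 s reconnect delay and 5 s ctrlr-loss timeout
    scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_set_options -r -1
    scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp \
        -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 \
        --ctrlr-loss-timeout-sec 5 --reconnect-delay-sec 2

    # start the 10-second verify run, give it a second, then yank the listener out from under it
    examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests &
    sleep 1
    scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420

The wall of "ABORTED - SQ DELETION" completions below is the expected fallout: removing the listener makes the target tear down its queue pairs, the outstanding verify I/O comes back aborted, and the host-side controller drops into its reconnect window (retries every 2 s, giving up only after 5 s of controller loss) instead of failing NVMe0n1 outright, which is what the remainder of the test exercises.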
00:24:32.727 11:55:02 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@55 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 00:24:32.727 7657.00 IOPS, 29.91 MiB/s [2024-11-28T11:55:02.853Z] [2024-11-28 11:55:02.811191] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:24:32.727 [2024-11-28 11:55:02.811232] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:32.727 [2024-11-28 11:55:02.811244] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:24:32.727 [2024-11-28 11:55:02.811252] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:32.727 [2024-11-28 11:55:02.811260] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:24:32.727 [2024-11-28 11:55:02.811268] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:32.727 [2024-11-28 11:55:02.811277] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:24:32.727 [2024-11-28 11:55:02.811284] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:32.727 [2024-11-28 11:55:02.811292] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd88970 is same with the state(6) to be set 00:24:32.727 [2024-11-28 11:55:02.811525] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:70576 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:32.727 [2024-11-28 11:55:02.811542] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:32.727 [2024-11-28 11:55:02.811559] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:70808 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:32.727 [2024-11-28 11:55:02.811568] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:32.727 [2024-11-28 11:55:02.811577] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:70816 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:32.727 [2024-11-28 11:55:02.811585] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:32.727 [2024-11-28 11:55:02.811595] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:119 nsid:1 lba:70824 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:32.727 [2024-11-28 11:55:02.811604] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:32.727 [2024-11-28 11:55:02.811613] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:70832 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:32.727 [2024-11-28 11:55:02.811621] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 
sqhd:0000 p:0 m:0 dnr:0 00:24:32.727 [2024-11-28 11:55:02.811631] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:98 nsid:1 lba:70840 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:32.727 [2024-11-28 11:55:02.811639] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:32.727 [2024-11-28 11:55:02.811649] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:92 nsid:1 lba:70848 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:32.727 [2024-11-28 11:55:02.811657] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:32.727 [2024-11-28 11:55:02.811667] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:106 nsid:1 lba:70856 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:32.727 [2024-11-28 11:55:02.811675] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:32.727 [2024-11-28 11:55:02.811684] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:70864 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:32.727 [2024-11-28 11:55:02.811711] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:32.727 [2024-11-28 11:55:02.811721] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:70872 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:32.727 [2024-11-28 11:55:02.811729] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:32.727 [2024-11-28 11:55:02.811739] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:70880 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:32.727 [2024-11-28 11:55:02.811747] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:32.727 [2024-11-28 11:55:02.811756] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:70888 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:32.727 [2024-11-28 11:55:02.811763] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:32.727 [2024-11-28 11:55:02.811774] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:107 nsid:1 lba:70896 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:32.727 [2024-11-28 11:55:02.811782] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:32.727 [2024-11-28 11:55:02.811791] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:87 nsid:1 lba:70904 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:32.728 [2024-11-28 11:55:02.811798] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:32.728 [2024-11-28 11:55:02.811807] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:70912 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:32.728 [2024-11-28 11:55:02.811815] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:32.728 [2024-11-28 
11:55:02.811824] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:70920 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:32.728 [2024-11-28 11:55:02.811831] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:32.728 [2024-11-28 11:55:02.811840] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:122 nsid:1 lba:70928 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:32.728 [2024-11-28 11:55:02.811847] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:32.728 [2024-11-28 11:55:02.811856] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:96 nsid:1 lba:70936 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:32.728 [2024-11-28 11:55:02.811864] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:32.728 [2024-11-28 11:55:02.811872] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:70944 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:32.728 [2024-11-28 11:55:02.811880] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:32.728 [2024-11-28 11:55:02.811888] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:99 nsid:1 lba:70952 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:32.728 [2024-11-28 11:55:02.811895] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:32.728 [2024-11-28 11:55:02.811904] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:89 nsid:1 lba:70960 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:32.728 [2024-11-28 11:55:02.811911] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:32.728 [2024-11-28 11:55:02.811921] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:70968 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:32.728 [2024-11-28 11:55:02.811929] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:32.728 [2024-11-28 11:55:02.811938] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:70976 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:32.728 [2024-11-28 11:55:02.811945] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:32.728 [2024-11-28 11:55:02.811954] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:78 nsid:1 lba:70984 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:32.728 [2024-11-28 11:55:02.811962] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:32.728 [2024-11-28 11:55:02.811971] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:70992 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:32.728 [2024-11-28 11:55:02.811978] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:32.728 [2024-11-28 11:55:02.811987] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:71000 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:32.728 [2024-11-28 11:55:02.811994] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:32.728 [2024-11-28 11:55:02.812003] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:71008 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:32.728 [2024-11-28 11:55:02.812010] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:32.728 [2024-11-28 11:55:02.812019] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:118 nsid:1 lba:71016 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:32.728 [2024-11-28 11:55:02.812026] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:32.728 [2024-11-28 11:55:02.812036] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:71024 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:32.728 [2024-11-28 11:55:02.812046] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:32.728 [2024-11-28 11:55:02.812055] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:86 nsid:1 lba:71032 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:32.728 [2024-11-28 11:55:02.812063] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:32.728 [2024-11-28 11:55:02.812073] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:71040 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:32.728 [2024-11-28 11:55:02.812080] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:32.728 [2024-11-28 11:55:02.812089] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:71048 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:32.728 [2024-11-28 11:55:02.812097] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:32.728 [2024-11-28 11:55:02.812106] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:103 nsid:1 lba:71056 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:32.728 [2024-11-28 11:55:02.812113] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:32.728 [2024-11-28 11:55:02.812121] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:114 nsid:1 lba:71064 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:32.728 [2024-11-28 11:55:02.812129] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:32.728 [2024-11-28 11:55:02.812138] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:109 nsid:1 lba:71072 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:32.728 [2024-11-28 11:55:02.812145] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:32.728 [2024-11-28 11:55:02.812154] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE 
sqid:1 cid:79 nsid:1 lba:71080 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:32.728 [2024-11-28 11:55:02.812162] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:32.728 [2024-11-28 11:55:02.812170] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:71088 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:32.728 [2024-11-28 11:55:02.812177] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:32.728 [2024-11-28 11:55:02.812186] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:71096 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:32.728 [2024-11-28 11:55:02.812194] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:32.728 [2024-11-28 11:55:02.812203] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:84 nsid:1 lba:71104 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:32.728 [2024-11-28 11:55:02.812210] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:32.728 [2024-11-28 11:55:02.812218] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:71112 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:32.728 [2024-11-28 11:55:02.812226] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:32.728 [2024-11-28 11:55:02.812234] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:71120 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:32.728 [2024-11-28 11:55:02.812241] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:32.728 [2024-11-28 11:55:02.812250] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:70 nsid:1 lba:71128 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:32.728 [2024-11-28 11:55:02.812257] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:32.728 [2024-11-28 11:55:02.812266] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:71136 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:32.728 [2024-11-28 11:55:02.812273] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:32.728 [2024-11-28 11:55:02.812282] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:71144 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:32.728 [2024-11-28 11:55:02.812289] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:32.728 [2024-11-28 11:55:02.812320] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:71152 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:32.728 [2024-11-28 11:55:02.812330] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:32.728 [2024-11-28 11:55:02.812339] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:81 nsid:1 lba:71160 len:8 SGL DATA BLOCK 
OFFSET 0x0 len:0x1000 00:24:32.728 [2024-11-28 11:55:02.812347] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:32.728 [2024-11-28 11:55:02.812356] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:71168 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:32.728 [2024-11-28 11:55:02.812363] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:32.728 [2024-11-28 11:55:02.812373] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:101 nsid:1 lba:71176 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:32.728 [2024-11-28 11:55:02.812380] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:32.728 [2024-11-28 11:55:02.812389] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:71184 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:32.728 [2024-11-28 11:55:02.812397] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:32.728 [2024-11-28 11:55:02.812406] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:72 nsid:1 lba:71192 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:32.728 [2024-11-28 11:55:02.812414] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:32.728 [2024-11-28 11:55:02.812423] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:110 nsid:1 lba:71200 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:32.728 [2024-11-28 11:55:02.812430] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:32.728 [2024-11-28 11:55:02.812456] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:71208 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:32.728 [2024-11-28 11:55:02.812463] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:32.728 [2024-11-28 11:55:02.812472] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:90 nsid:1 lba:71216 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:32.728 [2024-11-28 11:55:02.812480] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:32.729 [2024-11-28 11:55:02.812488] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:71224 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:32.729 [2024-11-28 11:55:02.812496] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:32.729 [2024-11-28 11:55:02.812505] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:112 nsid:1 lba:71232 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:32.729 [2024-11-28 11:55:02.812513] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:32.729 [2024-11-28 11:55:02.812522] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:71240 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:32.729 [2024-11-28 
11:55:02.812530] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:32.729 [2024-11-28 11:55:02.812540] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:71248 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:32.729 [2024-11-28 11:55:02.812547] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:32.729 [2024-11-28 11:55:02.812556] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:71256 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:32.729 [2024-11-28 11:55:02.812564] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:32.729 [2024-11-28 11:55:02.812573] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:71264 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:32.729 [2024-11-28 11:55:02.812581] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:32.729 [2024-11-28 11:55:02.812590] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:88 nsid:1 lba:71272 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:32.729 [2024-11-28 11:55:02.812598] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:32.729 [2024-11-28 11:55:02.812609] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:68 nsid:1 lba:71280 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:32.729 [2024-11-28 11:55:02.812618] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:32.729 [2024-11-28 11:55:02.812628] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:93 nsid:1 lba:71288 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:32.729 [2024-11-28 11:55:02.812636] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:32.729 [2024-11-28 11:55:02.812645] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:66 nsid:1 lba:71296 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:32.729 [2024-11-28 11:55:02.812653] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:32.729 [2024-11-28 11:55:02.812662] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:71304 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:32.729 [2024-11-28 11:55:02.812670] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:32.729 [2024-11-28 11:55:02.812679] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:71312 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:32.729 [2024-11-28 11:55:02.812687] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:32.729 [2024-11-28 11:55:02.812697] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:104 nsid:1 lba:71320 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:32.729 [2024-11-28 11:55:02.812704] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:32.729 [2024-11-28 11:55:02.812713] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:71328 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:32.729 [2024-11-28 11:55:02.812721] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:32.729 [2024-11-28 11:55:02.812731] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:71336 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:32.729 [2024-11-28 11:55:02.812738] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:32.729 [2024-11-28 11:55:02.812748] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:71344 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:32.729 [2024-11-28 11:55:02.812756] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:32.729 [2024-11-28 11:55:02.812765] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:113 nsid:1 lba:71352 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:32.729 [2024-11-28 11:55:02.812773] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:32.729 [2024-11-28 11:55:02.812782] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:71360 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:32.729 [2024-11-28 11:55:02.812789] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:32.729 [2024-11-28 11:55:02.812810] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:115 nsid:1 lba:71368 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:32.729 [2024-11-28 11:55:02.812818] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:32.729 [2024-11-28 11:55:02.812827] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:83 nsid:1 lba:71376 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:32.729 [2024-11-28 11:55:02.812834] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:32.729 [2024-11-28 11:55:02.812843] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:71384 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:32.729 [2024-11-28 11:55:02.812851] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:32.729 [2024-11-28 11:55:02.812860] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:121 nsid:1 lba:71392 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:32.729 [2024-11-28 11:55:02.812868] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:32.729 [2024-11-28 11:55:02.812877] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:71400 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:32.729 [2024-11-28 11:55:02.812884] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - 
SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:32.729 [2024-11-28 11:55:02.812894] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:71408 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:32.729 [2024-11-28 11:55:02.812902] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:32.729 [2024-11-28 11:55:02.812912] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:75 nsid:1 lba:71416 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:32.729 [2024-11-28 11:55:02.812920] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:32.729 [2024-11-28 11:55:02.812929] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:71424 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:32.729 [2024-11-28 11:55:02.812938] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:32.729 [2024-11-28 11:55:02.812947] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:80 nsid:1 lba:71432 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:32.729 [2024-11-28 11:55:02.812955] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:32.729 [2024-11-28 11:55:02.812964] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:71440 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:32.729 [2024-11-28 11:55:02.812972] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:32.729 [2024-11-28 11:55:02.812981] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:76 nsid:1 lba:71448 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:32.729 [2024-11-28 11:55:02.812989] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:32.729 [2024-11-28 11:55:02.812998] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:71456 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:32.729 [2024-11-28 11:55:02.813012] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:32.729 [2024-11-28 11:55:02.813022] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:71464 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:32.729 [2024-11-28 11:55:02.813030] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:32.729 [2024-11-28 11:55:02.813040] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:116 nsid:1 lba:71472 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:32.729 [2024-11-28 11:55:02.813048] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:32.729 [2024-11-28 11:55:02.813057] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:97 nsid:1 lba:71480 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:32.729 [2024-11-28 11:55:02.813064] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 
m:0 dnr:0 00:24:32.729 [2024-11-28 11:55:02.813074] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:117 nsid:1 lba:71488 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:32.729 [2024-11-28 11:55:02.813082] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:32.729 [2024-11-28 11:55:02.813091] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:111 nsid:1 lba:71496 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:32.729 [2024-11-28 11:55:02.813099] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:32.729 [2024-11-28 11:55:02.813108] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:71504 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:32.729 [2024-11-28 11:55:02.813116] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:32.729 [2024-11-28 11:55:02.813125] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:73 nsid:1 lba:71512 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:32.729 [2024-11-28 11:55:02.813133] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:32.729 [2024-11-28 11:55:02.813142] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:71520 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:32.729 [2024-11-28 11:55:02.813149] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:32.729 [2024-11-28 11:55:02.813158] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:71528 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:32.729 [2024-11-28 11:55:02.813166] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:32.729 [2024-11-28 11:55:02.813177] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:120 nsid:1 lba:71536 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:32.730 [2024-11-28 11:55:02.813185] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:32.730 [2024-11-28 11:55:02.813194] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:77 nsid:1 lba:71544 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:32.730 [2024-11-28 11:55:02.813202] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:32.730 [2024-11-28 11:55:02.813211] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:71 nsid:1 lba:71552 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:32.730 [2024-11-28 11:55:02.813218] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:32.730 [2024-11-28 11:55:02.813227] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:71560 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:32.730 [2024-11-28 11:55:02.813234] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:32.730 [2024-11-28 
11:55:02.813244] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:71568 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:32.730 [2024-11-28 11:55:02.813252] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:32.730 [2024-11-28 11:55:02.813261] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:69 nsid:1 lba:71576 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:32.730 [2024-11-28 11:55:02.813269] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:32.730 [2024-11-28 11:55:02.813279] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:70584 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:32.730 [2024-11-28 11:55:02.813292] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:32.730 [2024-11-28 11:55:02.813301] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:123 nsid:1 lba:70592 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:32.730 [2024-11-28 11:55:02.813319] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:32.730 [2024-11-28 11:55:02.813330] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:94 nsid:1 lba:70600 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:32.730 [2024-11-28 11:55:02.813339] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:32.730 [2024-11-28 11:55:02.813348] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:70608 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:32.730 [2024-11-28 11:55:02.813357] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:32.730 [2024-11-28 11:55:02.813366] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:70616 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:32.730 [2024-11-28 11:55:02.813374] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:32.730 [2024-11-28 11:55:02.813384] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:64 nsid:1 lba:70624 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:32.730 [2024-11-28 11:55:02.813392] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:32.730 [2024-11-28 11:55:02.813402] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:67 nsid:1 lba:70632 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:32.730 [2024-11-28 11:55:02.813409] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:32.730 [2024-11-28 11:55:02.813419] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:71584 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:32.730 [2024-11-28 11:55:02.813426] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:32.730 [2024-11-28 11:55:02.813436] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:70640 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:32.730 [2024-11-28 11:55:02.813444] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:32.730 [2024-11-28 11:55:02.813453] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:70648 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:32.730 [2024-11-28 11:55:02.813461] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:32.730 [2024-11-28 11:55:02.813471] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:102 nsid:1 lba:70656 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:32.730 [2024-11-28 11:55:02.813481] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:32.730 [2024-11-28 11:55:02.813490] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:85 nsid:1 lba:70664 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:32.730 [2024-11-28 11:55:02.813498] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:32.730 [2024-11-28 11:55:02.813507] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:95 nsid:1 lba:70672 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:32.730 [2024-11-28 11:55:02.813516] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:32.730 [2024-11-28 11:55:02.813525] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:65 nsid:1 lba:70680 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:32.730 [2024-11-28 11:55:02.813533] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:32.730 [2024-11-28 11:55:02.813542] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:105 nsid:1 lba:70688 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:32.730 [2024-11-28 11:55:02.813550] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:32.730 [2024-11-28 11:55:02.813559] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:70696 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:32.730 [2024-11-28 11:55:02.813567] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:32.730 [2024-11-28 11:55:02.813576] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:70704 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:32.730 [2024-11-28 11:55:02.813589] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:32.730 [2024-11-28 11:55:02.813599] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:124 nsid:1 lba:70712 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:32.730 [2024-11-28 11:55:02.813607] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:32.730 [2024-11-28 11:55:02.813616] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: 
READ sqid:1 cid:91 nsid:1 lba:70720 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:32.730 [2024-11-28 11:55:02.813624] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:32.730 [2024-11-28 11:55:02.813633] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:100 nsid:1 lba:70728 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:32.730 [2024-11-28 11:55:02.813641] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:32.730 [2024-11-28 11:55:02.813650] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:70736 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:32.730 [2024-11-28 11:55:02.813658] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:32.730 [2024-11-28 11:55:02.813667] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:70744 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:32.730 [2024-11-28 11:55:02.813675] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:32.730 [2024-11-28 11:55:02.813684] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:70752 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:32.730 [2024-11-28 11:55:02.813692] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:32.730 [2024-11-28 11:55:02.813702] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:70760 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:32.730 [2024-11-28 11:55:02.813710] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:32.730 [2024-11-28 11:55:02.813720] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:108 nsid:1 lba:70768 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:32.730 [2024-11-28 11:55:02.813728] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:32.730 [2024-11-28 11:55:02.813738] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:82 nsid:1 lba:70776 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:32.730 [2024-11-28 11:55:02.813747] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:32.730 [2024-11-28 11:55:02.813756] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:70784 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:32.730 [2024-11-28 11:55:02.813765] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:32.730 [2024-11-28 11:55:02.813774] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:74 nsid:1 lba:70792 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:32.730 [2024-11-28 11:55:02.813782] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:32.730 [2024-11-28 11:55:02.813791] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:70800 len:8 
SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:32.730 [2024-11-28 11:55:02.813799] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:32.730 [2024-11-28 11:55:02.813808] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xda98f0 is same with the state(6) to be set 00:24:32.730 [2024-11-28 11:55:02.813818] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:24:32.730 [2024-11-28 11:55:02.813825] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:24:32.730 [2024-11-28 11:55:02.813832] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:71592 len:8 PRP1 0x0 PRP2 0x0 00:24:32.730 [2024-11-28 11:55:02.813848] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:32.730 [2024-11-28 11:55:02.814109] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 1] resetting controller 00:24:32.730 [2024-11-28 11:55:02.814132] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xd88970 (9): Bad file descriptor 00:24:32.730 [2024-11-28 11:55:02.814211] uring.c: 664:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:24:32.730 [2024-11-28 11:55:02.814230] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd88970 with addr=10.0.0.3, port=4420 00:24:32.730 [2024-11-28 11:55:02.814241] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd88970 is same with the state(6) to be set 00:24:32.730 [2024-11-28 11:55:02.814257] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xd88970 (9): Bad file descriptor 00:24:32.730 [2024-11-28 11:55:02.814271] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] Ctrlr is in error state 00:24:32.731 [2024-11-28 11:55:02.814279] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] controller reinitialization failed 00:24:32.731 [2024-11-28 11:55:02.814289] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] in failed state. 00:24:32.731 [2024-11-28 11:55:02.814313] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] Resetting controller failed. 
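Editorial note: the entries above show the host side of the timeout test after the target stopped accepting connections. uring_sock_create reports connect() errno 111 (ECONNREFUSED on Linux), the TCP qpair is flushed with a bad file descriptor, controller reinitialization fails, and bdev_nvme schedules another reset. As a hedged illustration only (not part of host/timeout.sh), the listener can be dropped and restored with the same rpc.py calls that appear later in this log, which is what produces the refused connections and, eventually, the recovery:

# Illustrative sketch; rpc.py path, NQN and address are copied from the surrounding trace.
RPC=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
NQN=nqn.2016-06.io.spdk:cnode1
# Dropping the listener makes new connects fail with errno 111 (ECONNREFUSED).
$RPC nvmf_subsystem_remove_listener $NQN -t tcp -a 10.0.0.3 -s 4420
# Restoring it lets the host's periodic reconnect attempts succeed again.
$RPC nvmf_subsystem_add_listener $NQN -t tcp -a 10.0.0.3 -s 4420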
00:24:32.731 [2024-11-28 11:55:02.814325] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 1] resetting controller 00:24:32.731 11:55:02 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@56 -- # sleep 2 00:24:34.606 4411.00 IOPS, 17.23 MiB/s [2024-11-28T11:55:04.990Z] 2940.67 IOPS, 11.49 MiB/s [2024-11-28T11:55:04.990Z] [2024-11-28 11:55:04.814465] uring.c: 664:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:24:34.864 [2024-11-28 11:55:04.814519] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd88970 with addr=10.0.0.3, port=4420 00:24:34.864 [2024-11-28 11:55:04.814534] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd88970 is same with the state(6) to be set 00:24:34.864 [2024-11-28 11:55:04.814552] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xd88970 (9): Bad file descriptor 00:24:34.864 [2024-11-28 11:55:04.814566] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] Ctrlr is in error state 00:24:34.864 [2024-11-28 11:55:04.814574] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] controller reinitialization failed 00:24:34.864 [2024-11-28 11:55:04.814582] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] in failed state. 00:24:34.864 [2024-11-28 11:55:04.814591] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] Resetting controller failed. 00:24:34.865 [2024-11-28 11:55:04.814600] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 1] resetting controller 00:24:34.865 11:55:04 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@57 -- # get_controller 00:24:34.865 11:55:04 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@41 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:24:34.865 11:55:04 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@41 -- # jq -r '.[].name' 00:24:35.125 11:55:05 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@57 -- # [[ NVMe0 == \N\V\M\e\0 ]] 00:24:35.125 11:55:05 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@58 -- # get_bdev 00:24:35.125 11:55:05 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@37 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_get_bdevs 00:24:35.125 11:55:05 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@37 -- # jq -r '.[].name' 00:24:35.382 11:55:05 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@58 -- # [[ NVMe0n1 == \N\V\M\e\0\n\1 ]] 00:24:35.382 11:55:05 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@61 -- # sleep 5 00:24:36.577 2205.50 IOPS, 8.62 MiB/s [2024-11-28T11:55:06.963Z] 1764.40 IOPS, 6.89 MiB/s [2024-11-28T11:55:06.963Z] [2024-11-28 11:55:06.814809] uring.c: 664:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:24:36.837 [2024-11-28 11:55:06.814863] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd88970 with addr=10.0.0.3, port=4420 00:24:36.837 [2024-11-28 11:55:06.814877] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd88970 is same with the state(6) to be set 00:24:36.837 [2024-11-28 11:55:06.814898] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xd88970 (9): Bad file descriptor 00:24:36.837 [2024-11-28 11:55:06.814914] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: 
[nqn.2016-06.io.spdk:cnode1, 1] Ctrlr is in error state 00:24:36.837 [2024-11-28 11:55:06.814922] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] controller reinitialization failed 00:24:36.837 [2024-11-28 11:55:06.814931] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] in failed state. 00:24:36.837 [2024-11-28 11:55:06.814940] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] Resetting controller failed. 00:24:36.837 [2024-11-28 11:55:06.814950] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 1] resetting controller 00:24:38.729 1470.33 IOPS, 5.74 MiB/s [2024-11-28T11:55:08.855Z] 1260.29 IOPS, 4.92 MiB/s [2024-11-28T11:55:08.855Z] [2024-11-28 11:55:08.815077] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] in failed state. 00:24:38.729 [2024-11-28 11:55:08.815104] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] Ctrlr is in error state 00:24:38.730 [2024-11-28 11:55:08.815114] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] controller reinitialization failed 00:24:38.730 [2024-11-28 11:55:08.815122] nvme_ctrlr.c:1098:nvme_ctrlr_fail: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 1] already in failed state 00:24:38.730 [2024-11-28 11:55:08.815131] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] Resetting controller failed. 00:24:39.926 1102.75 IOPS, 4.31 MiB/s 00:24:39.926 Latency(us) 00:24:39.926 [2024-11-28T11:55:10.052Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:24:39.926 Job: NVMe0n1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096) 00:24:39.926 Verification LBA range: start 0x0 length 0x4000 00:24:39.926 NVMe0n1 : 8.14 1083.17 4.23 15.72 0.00 116357.80 2785.28 7015926.69 00:24:39.926 [2024-11-28T11:55:10.052Z] =================================================================================================================== 00:24:39.926 [2024-11-28T11:55:10.052Z] Total : 1083.17 4.23 15.72 0.00 116357.80 2785.28 7015926.69 00:24:39.926 { 00:24:39.926 "results": [ 00:24:39.926 { 00:24:39.926 "job": "NVMe0n1", 00:24:39.926 "core_mask": "0x4", 00:24:39.926 "workload": "verify", 00:24:39.926 "status": "finished", 00:24:39.926 "verify_range": { 00:24:39.926 "start": 0, 00:24:39.926 "length": 16384 00:24:39.926 }, 00:24:39.926 "queue_depth": 128, 00:24:39.926 "io_size": 4096, 00:24:39.926 "runtime": 8.14461, 00:24:39.926 "iops": 1083.1703421035506, 00:24:39.926 "mibps": 4.231134148841995, 00:24:39.926 "io_failed": 128, 00:24:39.926 "io_timeout": 0, 00:24:39.926 "avg_latency_us": 116357.79503910615, 00:24:39.926 "min_latency_us": 2785.28, 00:24:39.926 "max_latency_us": 7015926.69090909 00:24:39.926 } 00:24:39.926 ], 00:24:39.926 "core_count": 1 00:24:39.926 } 00:24:40.494 11:55:10 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@62 -- # get_controller 00:24:40.494 11:55:10 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@41 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:24:40.494 11:55:10 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@41 -- # jq -r '.[].name' 00:24:40.754 11:55:10 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@62 -- # [[ '' == '' ]] 00:24:40.754 11:55:10 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@63 -- # get_bdev 00:24:40.754 11:55:10 nvmf_tcp.nvmf_host.nvmf_timeout 
-- host/timeout.sh@37 -- # jq -r '.[].name' 00:24:40.754 11:55:10 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@37 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_get_bdevs 00:24:41.013 11:55:10 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@63 -- # [[ '' == '' ]] 00:24:41.013 11:55:10 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@65 -- # wait 99021 00:24:41.013 11:55:10 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@67 -- # killprocess 99003 00:24:41.013 11:55:10 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@954 -- # '[' -z 99003 ']' 00:24:41.013 11:55:10 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@958 -- # kill -0 99003 00:24:41.013 11:55:10 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@959 -- # uname 00:24:41.013 11:55:10 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:24:41.013 11:55:10 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 99003 00:24:41.013 11:55:10 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@960 -- # process_name=reactor_2 00:24:41.013 11:55:10 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@964 -- # '[' reactor_2 = sudo ']' 00:24:41.013 killing process with pid 99003 00:24:41.013 11:55:10 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@972 -- # echo 'killing process with pid 99003' 00:24:41.013 11:55:10 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@973 -- # kill 99003 00:24:41.013 Received shutdown signal, test time was about 9.276990 seconds 00:24:41.013 00:24:41.013 Latency(us) 00:24:41.013 [2024-11-28T11:55:11.139Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:24:41.013 [2024-11-28T11:55:11.140Z] =================================================================================================================== 00:24:41.014 [2024-11-28T11:55:11.140Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:24:41.014 11:55:10 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@978 -- # wait 99003 00:24:41.272 11:55:11 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@71 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 00:24:41.532 [2024-11-28 11:55:11.460529] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:24:41.532 11:55:11 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@74 -- # bdevperf_pid=99147 00:24:41.532 11:55:11 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@73 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 -f 00:24:41.532 11:55:11 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@76 -- # waitforlisten 99147 /var/tmp/bdevperf.sock 00:24:41.532 11:55:11 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@835 -- # '[' -z 99147 ']' 00:24:41.532 11:55:11 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:24:41.532 11:55:11 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@840 -- # local max_retries=100 00:24:41.532 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:24:41.532 11:55:11 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 
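Editorial note: at this point bdevperf is restarted with -z against /var/tmp/bdevperf.sock, so the workload does not begin until perform_tests is issued over that socket, and the test blocks until the UNIX socket answers RPCs before configuring it. A rough sketch of that wait-for-socket idea, assuming a polling loop around the generic rpc_get_methods call (this is not the actual waitforlisten helper from autotest_common.sh):

# Hypothetical polling loop; the real helper lives in autotest_common.sh.
SOCK=/var/tmp/bdevperf.sock
RPC=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
for _ in $(seq 1 100); do
    if "$RPC" -s "$SOCK" rpc_get_methods >/dev/null 2>&1; then
        break    # socket is up and answering RPCs
    fi
    sleep 0.1
done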
00:24:41.532 11:55:11 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@844 -- # xtrace_disable 00:24:41.532 11:55:11 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@10 -- # set +x 00:24:41.532 [2024-11-28 11:55:11.529369] Starting SPDK v25.01-pre git sha1 35cd3e84d / DPDK 24.11.0-rc4 initialization... 00:24:41.532 [2024-11-28 11:55:11.529444] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid99147 ] 00:24:41.532 [2024-11-28 11:55:11.648221] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.11.0-rc4 is used. There is no support for it in SPDK. Enabled only for validation. 00:24:41.791 [2024-11-28 11:55:11.675156] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:24:41.791 [2024-11-28 11:55:11.716029] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:24:41.791 [2024-11-28 11:55:11.783597] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:24:41.791 11:55:11 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:24:41.791 11:55:11 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@868 -- # return 0 00:24:41.791 11:55:11 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@78 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_set_options -r -1 00:24:42.050 11:55:12 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@79 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 --ctrlr-loss-timeout-sec 5 --fast-io-fail-timeout-sec 2 --reconnect-delay-sec 1 00:24:42.618 NVMe0n1 00:24:42.618 11:55:12 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@84 -- # rpc_pid=99163 00:24:42.618 11:55:12 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@83 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:24:42.618 11:55:12 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@86 -- # sleep 1 00:24:42.618 Running I/O for 10 seconds... 
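Editorial note: the trace above configures the freshly started bdevperf instance and kicks off the run: bdev_nvme_set_options -r -1 is applied first (value copied verbatim from the trace), the controller is attached with a 5 s controller-loss timeout, a 2 s fast-I/O-fail timeout and a 1 s reconnect delay, and bdevperf.py perform_tests starts the 10-second verify workload. Pulled together as one hedged sketch, with the commands copied from the trace rather than taken from host/timeout.sh itself:

# Sequence reconstructed from the trace above.
RPC="/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock"
$RPC bdev_nvme_set_options -r -1
$RPC bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.3 -s 4420 -f ipv4 \
    -n nqn.2016-06.io.spdk:cnode1 \
    --ctrlr-loss-timeout-sec 5 --fast-io-fail-timeout-sec 2 --reconnect-delay-sec 1
/home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests &
# The test backgrounds perform_tests (rpc_pid above) so it can remove the listener while I/O is in flight.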
00:24:43.555 11:55:13 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@87 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 00:24:43.817 7791.00 IOPS, 30.43 MiB/s [2024-11-28T11:55:13.943Z] [2024-11-28 11:55:13.741158] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:100 nsid:1 lba:72680 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:43.817 [2024-11-28 11:55:13.741197] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:43.817 [2024-11-28 11:55:13.741215] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:82 nsid:1 lba:72744 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:43.817 [2024-11-28 11:55:13.741224] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:43.817 [2024-11-28 11:55:13.741234] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:72752 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:43.817 [2024-11-28 11:55:13.741243] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:43.817 [2024-11-28 11:55:13.741252] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:72760 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:43.817 [2024-11-28 11:55:13.741260] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:43.817 [2024-11-28 11:55:13.741269] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:66 nsid:1 lba:72768 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:43.817 [2024-11-28 11:55:13.741277] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:43.817 [2024-11-28 11:55:13.741286] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:86 nsid:1 lba:72776 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:43.817 [2024-11-28 11:55:13.741315] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:43.818 [2024-11-28 11:55:13.741326] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:72784 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:43.818 [2024-11-28 11:55:13.741334] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:43.818 [2024-11-28 11:55:13.741354] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:96 nsid:1 lba:72792 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:43.818 [2024-11-28 11:55:13.741362] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:43.818 [2024-11-28 11:55:13.741371] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 nsid:1 lba:72800 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:43.818 [2024-11-28 11:55:13.741380] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:43.818 [2024-11-28 11:55:13.741389] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:105 nsid:1 lba:72808 
len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:43.818 [2024-11-28 11:55:13.741397] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:43.818 [2024-11-28 11:55:13.741407] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:77 nsid:1 lba:72816 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:43.818 [2024-11-28 11:55:13.741415] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:43.818 [2024-11-28 11:55:13.741424] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:72824 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:43.818 [2024-11-28 11:55:13.741432] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:43.818 [2024-11-28 11:55:13.741441] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:72832 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:43.818 [2024-11-28 11:55:13.741449] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:43.818 [2024-11-28 11:55:13.741468] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:72840 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:43.818 [2024-11-28 11:55:13.741476] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:43.818 [2024-11-28 11:55:13.741486] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:72848 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:43.818 [2024-11-28 11:55:13.741494] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:43.818 [2024-11-28 11:55:13.741512] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:72856 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:43.818 [2024-11-28 11:55:13.741520] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:43.818 [2024-11-28 11:55:13.741529] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:76 nsid:1 lba:72864 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:43.818 [2024-11-28 11:55:13.741536] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:43.818 [2024-11-28 11:55:13.741549] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:69 nsid:1 lba:72872 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:43.818 [2024-11-28 11:55:13.741557] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:43.818 [2024-11-28 11:55:13.741567] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:72880 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:43.818 [2024-11-28 11:55:13.741576] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:43.818 [2024-11-28 11:55:13.741586] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:72888 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:43.818 
[2024-11-28 11:55:13.741594] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:43.818 [2024-11-28 11:55:13.741603] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:72896 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:43.818 [2024-11-28 11:55:13.741611] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:43.818 [2024-11-28 11:55:13.741621] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:72904 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:43.818 [2024-11-28 11:55:13.741630] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:43.818 [2024-11-28 11:55:13.741639] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:72912 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:43.818 [2024-11-28 11:55:13.741648] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:43.818 [2024-11-28 11:55:13.741657] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:72920 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:43.818 [2024-11-28 11:55:13.741665] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:43.818 [2024-11-28 11:55:13.741674] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:72928 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:43.818 [2024-11-28 11:55:13.741682] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:43.818 [2024-11-28 11:55:13.741692] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:72936 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:43.818 [2024-11-28 11:55:13.741700] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:43.818 [2024-11-28 11:55:13.741720] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:72944 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:43.818 [2024-11-28 11:55:13.741728] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:43.818 [2024-11-28 11:55:13.741744] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:123 nsid:1 lba:72952 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:43.818 [2024-11-28 11:55:13.741751] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:43.818 [2024-11-28 11:55:13.741761] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:72960 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:43.818 [2024-11-28 11:55:13.741768] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:43.818 [2024-11-28 11:55:13.741777] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:72968 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:43.818 [2024-11-28 11:55:13.741784] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:43.818 [2024-11-28 11:55:13.741793] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:72976 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:43.818 [2024-11-28 11:55:13.741801] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:43.818 [2024-11-28 11:55:13.741810] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:70 nsid:1 lba:72984 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:43.818 [2024-11-28 11:55:13.741817] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:43.818 [2024-11-28 11:55:13.741827] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:72992 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:43.818 [2024-11-28 11:55:13.741835] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:43.818 [2024-11-28 11:55:13.741846] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:73000 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:43.818 [2024-11-28 11:55:13.741853] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:43.818 [2024-11-28 11:55:13.741863] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:73008 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:43.818 [2024-11-28 11:55:13.741871] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:43.818 [2024-11-28 11:55:13.741880] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:95 nsid:1 lba:73016 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:43.818 [2024-11-28 11:55:13.741887] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:43.818 [2024-11-28 11:55:13.741896] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:109 nsid:1 lba:73024 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:43.818 [2024-11-28 11:55:13.741904] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:43.818 [2024-11-28 11:55:13.741913] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:73032 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:43.818 [2024-11-28 11:55:13.741920] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:43.818 [2024-11-28 11:55:13.741930] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:80 nsid:1 lba:73040 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:43.818 [2024-11-28 11:55:13.741937] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:43.818 [2024-11-28 11:55:13.741946] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:73048 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:43.818 [2024-11-28 11:55:13.741954] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ 
DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:43.818 [2024-11-28 11:55:13.741963] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:73056 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:43.818 [2024-11-28 11:55:13.741971] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:43.818 [2024-11-28 11:55:13.741980] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:113 nsid:1 lba:73064 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:43.818 [2024-11-28 11:55:13.741988] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:43.818 [2024-11-28 11:55:13.741997] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:75 nsid:1 lba:73072 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:43.818 [2024-11-28 11:55:13.742004] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:43.818 [2024-11-28 11:55:13.742013] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:73080 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:43.818 [2024-11-28 11:55:13.742020] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:43.818 [2024-11-28 11:55:13.742029] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:73088 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:43.818 [2024-11-28 11:55:13.742036] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:43.818 [2024-11-28 11:55:13.742045] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:97 nsid:1 lba:73096 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:43.819 [2024-11-28 11:55:13.742053] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:43.819 [2024-11-28 11:55:13.742061] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:84 nsid:1 lba:73104 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:43.819 [2024-11-28 11:55:13.742069] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:43.819 [2024-11-28 11:55:13.742079] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:104 nsid:1 lba:73112 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:43.819 [2024-11-28 11:55:13.742086] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:43.819 [2024-11-28 11:55:13.742095] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:89 nsid:1 lba:73120 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:43.819 [2024-11-28 11:55:13.742104] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:43.819 [2024-11-28 11:55:13.742113] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:90 nsid:1 lba:73128 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:43.819 [2024-11-28 11:55:13.742121] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 
dnr:0 00:24:43.819 [2024-11-28 11:55:13.742131] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:91 nsid:1 lba:73136 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:43.819 [2024-11-28 11:55:13.742139] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:43.819 [2024-11-28 11:55:13.742148] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:122 nsid:1 lba:73144 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:43.819 [2024-11-28 11:55:13.742155] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:43.819 [2024-11-28 11:55:13.742164] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:73152 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:43.819 [2024-11-28 11:55:13.742172] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:43.819 [2024-11-28 11:55:13.742181] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:73160 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:43.819 [2024-11-28 11:55:13.742189] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:43.819 [2024-11-28 11:55:13.742197] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:118 nsid:1 lba:73168 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:43.819 [2024-11-28 11:55:13.742204] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:43.819 [2024-11-28 11:55:13.742213] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:106 nsid:1 lba:73176 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:43.819 [2024-11-28 11:55:13.742220] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:43.819 [2024-11-28 11:55:13.742229] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:73184 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:43.819 [2024-11-28 11:55:13.742243] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:43.819 [2024-11-28 11:55:13.742252] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:73192 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:43.819 [2024-11-28 11:55:13.742259] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:43.819 [2024-11-28 11:55:13.742268] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:73200 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:43.819 [2024-11-28 11:55:13.742275] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:43.819 [2024-11-28 11:55:13.742284] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:92 nsid:1 lba:73208 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:43.819 [2024-11-28 11:55:13.742291] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:43.819 [2024-11-28 11:55:13.742309] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:73216 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:43.819 [2024-11-28 11:55:13.742319] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:43.819 [2024-11-28 11:55:13.742328] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:79 nsid:1 lba:72688 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:43.819 [2024-11-28 11:55:13.742335] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:43.819 [2024-11-28 11:55:13.742345] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:72696 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:43.819 [2024-11-28 11:55:13.742353] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:43.819 [2024-11-28 11:55:13.742363] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:83 nsid:1 lba:72704 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:43.819 [2024-11-28 11:55:13.742371] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:43.819 [2024-11-28 11:55:13.742379] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:64 nsid:1 lba:72712 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:43.819 [2024-11-28 11:55:13.742388] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:43.819 [2024-11-28 11:55:13.742398] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:72720 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:43.819 [2024-11-28 11:55:13.742407] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:43.819 [2024-11-28 11:55:13.742416] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:125 nsid:1 lba:72728 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:43.819 [2024-11-28 11:55:13.742424] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:43.819 [2024-11-28 11:55:13.742433] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:72736 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:43.819 [2024-11-28 11:55:13.742441] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:43.819 [2024-11-28 11:55:13.742449] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:73224 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:43.819 [2024-11-28 11:55:13.742457] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:43.819 [2024-11-28 11:55:13.742466] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:73232 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:43.819 [2024-11-28 11:55:13.742473] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:43.819 [2024-11-28 11:55:13.742482] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: WRITE sqid:1 cid:88 nsid:1 lba:73240 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:43.819 [2024-11-28 11:55:13.742490] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:43.819 [2024-11-28 11:55:13.742499] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:73248 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:43.819 [2024-11-28 11:55:13.742514] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:43.819 [2024-11-28 11:55:13.742534] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:112 nsid:1 lba:73256 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:43.819 [2024-11-28 11:55:13.742541] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:43.819 [2024-11-28 11:55:13.742553] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:73264 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:43.819 [2024-11-28 11:55:13.742561] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:43.819 [2024-11-28 11:55:13.742574] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:101 nsid:1 lba:73272 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:43.819 [2024-11-28 11:55:13.742582] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:43.819 [2024-11-28 11:55:13.742591] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:73280 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:43.819 [2024-11-28 11:55:13.742600] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:43.819 [2024-11-28 11:55:13.742609] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:111 nsid:1 lba:73288 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:43.819 [2024-11-28 11:55:13.742616] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:43.819 [2024-11-28 11:55:13.742626] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:73296 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:43.819 [2024-11-28 11:55:13.742633] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:43.819 [2024-11-28 11:55:13.742643] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:119 nsid:1 lba:73304 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:43.819 [2024-11-28 11:55:13.742651] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:43.819 [2024-11-28 11:55:13.742661] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:107 nsid:1 lba:73312 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:43.819 [2024-11-28 11:55:13.742668] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:43.819 [2024-11-28 11:55:13.742678] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:73320 
len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:43.819 [2024-11-28 11:55:13.742685] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:43.819 [2024-11-28 11:55:13.742696] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:73328 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:43.819 [2024-11-28 11:55:13.742704] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:43.819 [2024-11-28 11:55:13.742713] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:78 nsid:1 lba:73336 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:43.819 [2024-11-28 11:55:13.742721] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:43.819 [2024-11-28 11:55:13.742730] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:73344 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:43.819 [2024-11-28 11:55:13.742737] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:43.819 [2024-11-28 11:55:13.742747] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:71 nsid:1 lba:73352 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:43.819 [2024-11-28 11:55:13.742755] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:43.819 [2024-11-28 11:55:13.742765] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:73360 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:43.820 [2024-11-28 11:55:13.742773] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:43.820 [2024-11-28 11:55:13.742782] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:93 nsid:1 lba:73368 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:43.820 [2024-11-28 11:55:13.742789] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:43.820 [2024-11-28 11:55:13.742798] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:73376 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:43.820 [2024-11-28 11:55:13.742806] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:43.820 [2024-11-28 11:55:13.742815] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:110 nsid:1 lba:73384 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:43.820 [2024-11-28 11:55:13.742823] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:43.820 [2024-11-28 11:55:13.742832] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:103 nsid:1 lba:73392 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:43.820 [2024-11-28 11:55:13.742840] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:43.820 [2024-11-28 11:55:13.742849] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:114 nsid:1 lba:73400 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 
00:24:43.820 [2024-11-28 11:55:13.742857] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:43.820 [2024-11-28 11:55:13.742866] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:73408 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:43.820 [2024-11-28 11:55:13.742873] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:43.820 [2024-11-28 11:55:13.742882] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:121 nsid:1 lba:73416 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:43.820 [2024-11-28 11:55:13.742890] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:43.820 [2024-11-28 11:55:13.742899] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:73424 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:43.820 [2024-11-28 11:55:13.742906] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:43.820 [2024-11-28 11:55:13.742915] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:73432 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:43.820 [2024-11-28 11:55:13.742922] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:43.820 [2024-11-28 11:55:13.742931] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:73440 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:43.820 [2024-11-28 11:55:13.742938] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:43.820 [2024-11-28 11:55:13.742959] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:73448 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:43.820 [2024-11-28 11:55:13.742967] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:43.820 [2024-11-28 11:55:13.742976] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:85 nsid:1 lba:73456 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:43.820 [2024-11-28 11:55:13.742984] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:43.820 [2024-11-28 11:55:13.742993] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:116 nsid:1 lba:73464 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:43.820 [2024-11-28 11:55:13.743000] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:43.820 [2024-11-28 11:55:13.743009] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:94 nsid:1 lba:73472 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:43.820 [2024-11-28 11:55:13.743017] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:43.820 [2024-11-28 11:55:13.743026] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:117 nsid:1 lba:73480 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:43.820 [2024-11-28 11:55:13.743035] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:43.820 [2024-11-28 11:55:13.743045] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:72 nsid:1 lba:73488 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:43.820 [2024-11-28 11:55:13.743052] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:43.820 [2024-11-28 11:55:13.743062] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:73496 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:43.820 [2024-11-28 11:55:13.743069] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:43.820 [2024-11-28 11:55:13.743079] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:102 nsid:1 lba:73504 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:43.820 [2024-11-28 11:55:13.743086] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:43.820 [2024-11-28 11:55:13.743095] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:74 nsid:1 lba:73512 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:43.820 [2024-11-28 11:55:13.743102] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:43.820 [2024-11-28 11:55:13.743111] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:73520 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:43.820 [2024-11-28 11:55:13.743119] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:43.820 [2024-11-28 11:55:13.743127] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:73528 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:43.820 [2024-11-28 11:55:13.743134] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:43.820 [2024-11-28 11:55:13.743143] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:73536 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:43.820 [2024-11-28 11:55:13.743150] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:43.820 [2024-11-28 11:55:13.743159] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:99 nsid:1 lba:73544 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:43.820 [2024-11-28 11:55:13.743167] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:43.820 [2024-11-28 11:55:13.743177] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:120 nsid:1 lba:73552 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:43.820 [2024-11-28 11:55:13.743184] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:43.820 [2024-11-28 11:55:13.743193] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:73560 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:43.820 [2024-11-28 11:55:13.743200] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:43.820 [2024-11-28 11:55:13.743209] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:73568 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:43.820 [2024-11-28 11:55:13.743227] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:43.820 [2024-11-28 11:55:13.743237] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:73576 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:43.820 [2024-11-28 11:55:13.743245] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:43.820 [2024-11-28 11:55:13.743255] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:87 nsid:1 lba:73584 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:43.820 [2024-11-28 11:55:13.743262] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:43.820 [2024-11-28 11:55:13.743271] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:98 nsid:1 lba:73592 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:43.820 [2024-11-28 11:55:13.743279] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:43.820 [2024-11-28 11:55:13.743288] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:67 nsid:1 lba:73600 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:43.820 [2024-11-28 11:55:13.743304] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:43.820 [2024-11-28 11:55:13.743314] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:73608 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:43.820 [2024-11-28 11:55:13.743323] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:43.820 [2024-11-28 11:55:13.743332] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:81 nsid:1 lba:73616 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:43.820 [2024-11-28 11:55:13.743342] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:43.820 [2024-11-28 11:55:13.743351] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:73624 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:43.820 [2024-11-28 11:55:13.743358] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:43.820 [2024-11-28 11:55:13.743368] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:65 nsid:1 lba:73632 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:43.820 [2024-11-28 11:55:13.743375] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:43.820 [2024-11-28 11:55:13.743385] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:73 nsid:1 lba:73640 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:43.820 [2024-11-28 11:55:13.743392] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 
cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:43.820 [2024-11-28 11:55:13.743401] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:115 nsid:1 lba:73648 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:43.820 [2024-11-28 11:55:13.743409] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:43.820 [2024-11-28 11:55:13.743418] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:73656 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:43.820 [2024-11-28 11:55:13.743426] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:43.820 [2024-11-28 11:55:13.743434] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:73664 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:43.820 [2024-11-28 11:55:13.743442] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:43.820 [2024-11-28 11:55:13.743450] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:73672 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:43.820 [2024-11-28 11:55:13.743458] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:43.820 [2024-11-28 11:55:13.743467] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:73680 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:43.821 [2024-11-28 11:55:13.743476] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:43.821 [2024-11-28 11:55:13.743485] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:68 nsid:1 lba:73688 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:43.821 [2024-11-28 11:55:13.743492] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:43.821 [2024-11-28 11:55:13.743501] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14978f0 is same with the state(6) to be set 00:24:43.821 [2024-11-28 11:55:13.743517] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:24:43.821 [2024-11-28 11:55:13.743524] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:24:43.821 [2024-11-28 11:55:13.743531] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:73696 len:8 PRP1 0x0 PRP2 0x0 00:24:43.821 [2024-11-28 11:55:13.743538] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:43.821 [2024-11-28 11:55:13.743775] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:24:43.821 [2024-11-28 11:55:13.743837] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1476970 (9): Bad file descriptor 00:24:43.821 [2024-11-28 11:55:13.743905] uring.c: 664:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:24:43.821 [2024-11-28 11:55:13.743924] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1476970 with addr=10.0.0.3, port=4420 00:24:43.821 [2024-11-28 11:55:13.743933] nvme_tcp.c: 
326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1476970 is same with the state(6) to be set 00:24:43.821 [2024-11-28 11:55:13.743948] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1476970 (9): Bad file descriptor 00:24:43.821 [2024-11-28 11:55:13.743962] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:24:43.821 [2024-11-28 11:55:13.743970] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:24:43.821 [2024-11-28 11:55:13.743980] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:24:43.821 [2024-11-28 11:55:13.743989] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:24:43.821 [2024-11-28 11:55:13.743998] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:24:43.821 11:55:13 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@90 -- # sleep 1 00:24:44.758 4542.50 IOPS, 17.74 MiB/s [2024-11-28T11:55:14.884Z] [2024-11-28 11:55:14.744065] uring.c: 664:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:24:44.758 [2024-11-28 11:55:14.744102] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1476970 with addr=10.0.0.3, port=4420 00:24:44.758 [2024-11-28 11:55:14.744113] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1476970 is same with the state(6) to be set 00:24:44.758 [2024-11-28 11:55:14.744130] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1476970 (9): Bad file descriptor 00:24:44.758 [2024-11-28 11:55:14.744144] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:24:44.758 [2024-11-28 11:55:14.744151] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:24:44.758 [2024-11-28 11:55:14.744159] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:24:44.758 [2024-11-28 11:55:14.744167] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:24:44.758 [2024-11-28 11:55:14.744176] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:24:44.758 11:55:14 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@91 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 00:24:45.017 [2024-11-28 11:55:14.996125] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:24:45.017 11:55:15 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@92 -- # wait 99163 00:24:45.841 3028.33 IOPS, 11.83 MiB/s [2024-11-28T11:55:15.967Z] [2024-11-28 11:55:15.760934] bdev_nvme.c:2282:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 3] Resetting controller successful. 
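The connect() failures in the entries above report errno = 111, which on Linux is ECONNREFUSED: the host side keeps retrying while the target's listener is gone, and the reset only completes once timeout.sh re-adds the listener via the nvmf_subsystem_add_listener RPC shown, after which the log prints "Resetting controller successful." The following is only an illustrative decode of that errno value, not part of the test itself:

    # decode_errno.py - hypothetical helper, only to confirm what errno 111 means on Linux
    import errno
    import os

    code = 111                      # value printed by uring_sock_create above
    print(errno.errorcode[code])    # -> ECONNREFUSED
    print(os.strerror(code))        # -> Connection refused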
00:24:47.716 2271.25 IOPS, 8.87 MiB/s [2024-11-28T11:55:18.776Z] 3415.20 IOPS, 13.34 MiB/s [2024-11-28T11:55:19.714Z] 4499.33 IOPS, 17.58 MiB/s [2024-11-28T11:55:20.649Z] 5274.86 IOPS, 20.60 MiB/s [2024-11-28T11:55:21.585Z] 5858.50 IOPS, 22.88 MiB/s [2024-11-28T11:55:22.975Z] 6307.11 IOPS, 24.64 MiB/s [2024-11-28T11:55:22.975Z] 6656.40 IOPS, 26.00 MiB/s
00:24:52.849 Latency(us)
00:24:52.849 [2024-11-28T11:55:22.975Z] Device Information          : runtime(s)       IOPS      MiB/s    Fail/s     TO/s    Average        min        max
00:24:52.849 Job: NVMe0n1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096)
00:24:52.849 Verification LBA range: start 0x0 length 0x4000
00:24:52.849 NVMe0n1                     :      10.01    6660.61      26.02      0.00      0.00   19190.15    1817.13 3019898.88
00:24:52.849 [2024-11-28T11:55:22.975Z] ===================================================================================================================
00:24:52.849 [2024-11-28T11:55:22.975Z] Total                       :              6660.61      26.02      0.00      0.00   19190.15    1817.13 3019898.88
00:24:52.849 {
00:24:52.849   "results": [
00:24:52.849     {
00:24:52.849       "job": "NVMe0n1",
00:24:52.849       "core_mask": "0x4",
00:24:52.849       "workload": "verify",
00:24:52.849       "status": "finished",
00:24:52.849       "verify_range": {
00:24:52.849         "start": 0,
00:24:52.849         "length": 16384
00:24:52.849       },
00:24:52.849       "queue_depth": 128,
00:24:52.849       "io_size": 4096,
00:24:52.849       "runtime": 10.008085,
00:24:52.849       "iops": 6660.614892859124,
00:24:52.849       "mibps": 26.018026925230952,
00:24:52.849       "io_failed": 0,
00:24:52.849       "io_timeout": 0,
00:24:52.849       "avg_latency_us": 19190.14979199738,
00:24:52.849       "min_latency_us": 1817.1345454545456,
00:24:52.849       "max_latency_us": 3019898.88
00:24:52.849     }
00:24:52.849   ],
00:24:52.849   "core_count": 1
00:24:52.849 }
00:24:52.849 11:55:22 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@97 -- # rpc_pid=99268
00:24:52.849 11:55:22 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@96 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests
00:24:52.849 11:55:22 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@98 -- # sleep 1
00:24:52.849 Running I/O for 10 seconds...
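The bdevperf summary above can be cross-checked from the JSON fields alone: throughput in MiB/s is iops * io_size, and with a saturated queue the average latency is roughly queue_depth / iops (Little's law). A small sketch, assuming nothing beyond the numbers reported above:

    # check_bdevperf_summary.py - hypothetical sanity check of the reported numbers
    iops = 6660.614892859124        # "iops" from the results JSON
    io_size = 4096                  # "io_size" in bytes
    queue_depth = 128               # "queue_depth"

    mibps = iops * io_size / (1024 * 1024)
    print(f"{mibps:.2f} MiB/s")     # ~26.02, matching the reported "mibps"

    # Little's law estimate for a queue that stays full during the run
    est_avg_latency_us = queue_depth / iops * 1e6
    print(f"{est_avg_latency_us:.0f} us")   # ~19217 us (reported: 19190.15 us)

The small gap between the estimate and the reported average is expected, since the queue is not full for the entire 10-second window (the run ramps up from 2271 IOPS to 6656 IOPS in the progression shown above).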
00:24:53.794 11:55:23 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@99 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 00:24:53.794 9410.00 IOPS, 36.76 MiB/s [2024-11-28T11:55:23.920Z] [2024-11-28 11:55:23.878824] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:87184 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:53.794 [2024-11-28 11:55:23.878886] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:53.794 [2024-11-28 11:55:23.878940] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:87192 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:53.794 [2024-11-28 11:55:23.878950] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:53.794 [2024-11-28 11:55:23.878961] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:121 nsid:1 lba:87200 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:53.794 [2024-11-28 11:55:23.878969] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:53.794 [2024-11-28 11:55:23.878979] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:94 nsid:1 lba:87208 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:53.794 [2024-11-28 11:55:23.878988] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:53.794 [2024-11-28 11:55:23.878998] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:87216 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:53.794 [2024-11-28 11:55:23.879005] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:53.794 [2024-11-28 11:55:23.879015] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:87224 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:53.794 [2024-11-28 11:55:23.879022] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:53.795 [2024-11-28 11:55:23.879032] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:87232 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:53.795 [2024-11-28 11:55:23.879040] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:53.795 [2024-11-28 11:55:23.879050] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:86544 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:53.795 [2024-11-28 11:55:23.879058] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:53.795 [2024-11-28 11:55:23.879067] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:66 nsid:1 lba:86552 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:53.795 [2024-11-28 11:55:23.879075] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:53.795 [2024-11-28 11:55:23.879085] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:86560 
len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:53.795 [2024-11-28 11:55:23.879093] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:53.795 [2024-11-28 11:55:23.879102] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:86568 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:53.795 [2024-11-28 11:55:23.879110] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:53.795 [2024-11-28 11:55:23.879121] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:71 nsid:1 lba:86576 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:53.795 [2024-11-28 11:55:23.879145] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:53.795 [2024-11-28 11:55:23.879170] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:86584 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:53.795 [2024-11-28 11:55:23.879179] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:53.795 [2024-11-28 11:55:23.879189] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:73 nsid:1 lba:86592 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:53.795 [2024-11-28 11:55:23.879197] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:53.795 [2024-11-28 11:55:23.879208] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:110 nsid:1 lba:86600 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:53.795 [2024-11-28 11:55:23.879216] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:53.795 [2024-11-28 11:55:23.879226] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:87 nsid:1 lba:87240 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:53.795 [2024-11-28 11:55:23.879234] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:53.795 [2024-11-28 11:55:23.879244] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:86608 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:53.795 [2024-11-28 11:55:23.879252] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:53.795 [2024-11-28 11:55:23.879266] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:102 nsid:1 lba:86616 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:53.795 [2024-11-28 11:55:23.879275] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:53.795 [2024-11-28 11:55:23.879285] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:86624 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:53.795 [2024-11-28 11:55:23.879317] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:53.795 [2024-11-28 11:55:23.879328] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:86632 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 
0x0 00:24:53.795 [2024-11-28 11:55:23.879337] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:53.795 [2024-11-28 11:55:23.879348] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:86640 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:53.795 [2024-11-28 11:55:23.879357] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:53.795 [2024-11-28 11:55:23.879367] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:112 nsid:1 lba:86648 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:53.795 [2024-11-28 11:55:23.879391] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:53.795 [2024-11-28 11:55:23.879402] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:104 nsid:1 lba:86656 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:53.795 [2024-11-28 11:55:23.879411] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:53.795 [2024-11-28 11:55:23.879422] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:74 nsid:1 lba:86664 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:53.795 [2024-11-28 11:55:23.879431] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:53.795 [2024-11-28 11:55:23.879441] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:87248 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:53.795 [2024-11-28 11:55:23.879450] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:53.795 [2024-11-28 11:55:23.879461] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:87256 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:53.795 [2024-11-28 11:55:23.879469] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:53.795 [2024-11-28 11:55:23.879480] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:99 nsid:1 lba:87264 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:53.795 [2024-11-28 11:55:23.879488] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:53.795 [2024-11-28 11:55:23.879499] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:87272 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:53.795 [2024-11-28 11:55:23.879508] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:53.795 [2024-11-28 11:55:23.879534] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:76 nsid:1 lba:87280 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:53.795 [2024-11-28 11:55:23.879542] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:53.795 [2024-11-28 11:55:23.879552] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:87288 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:53.795 [2024-11-28 11:55:23.879560] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:53.795 [2024-11-28 11:55:23.879571] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:113 nsid:1 lba:87296 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:53.795 [2024-11-28 11:55:23.879579] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:53.795 [2024-11-28 11:55:23.879589] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:87304 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:53.795 [2024-11-28 11:55:23.879597] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:53.795 [2024-11-28 11:55:23.879622] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:123 nsid:1 lba:86672 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:53.795 [2024-11-28 11:55:23.879631] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:53.795 [2024-11-28 11:55:23.879643] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:86680 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:53.795 [2024-11-28 11:55:23.879652] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:53.795 [2024-11-28 11:55:23.879663] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:86688 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:53.795 [2024-11-28 11:55:23.879671] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:53.795 [2024-11-28 11:55:23.879681] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:98 nsid:1 lba:86696 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:53.795 [2024-11-28 11:55:23.879690] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:53.795 [2024-11-28 11:55:23.879700] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:86704 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:53.795 [2024-11-28 11:55:23.879709] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:53.795 [2024-11-28 11:55:23.879719] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:86712 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:53.795 [2024-11-28 11:55:23.879727] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:53.795 [2024-11-28 11:55:23.879738] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:68 nsid:1 lba:86720 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:53.795 [2024-11-28 11:55:23.879747] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:53.795 [2024-11-28 11:55:23.879757] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:86728 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:53.795 [2024-11-28 11:55:23.879766] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:53.795 [2024-11-28 11:55:23.879778] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:90 nsid:1 lba:86736 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:53.795 [2024-11-28 11:55:23.879787] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:53.795 [2024-11-28 11:55:23.879797] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:86744 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:53.795 [2024-11-28 11:55:23.879806] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:53.795 [2024-11-28 11:55:23.879816] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:86752 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:53.795 [2024-11-28 11:55:23.879840] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:53.795 [2024-11-28 11:55:23.879850] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:115 nsid:1 lba:86760 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:53.795 [2024-11-28 11:55:23.879858] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:53.795 [2024-11-28 11:55:23.879868] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:114 nsid:1 lba:86768 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:53.796 [2024-11-28 11:55:23.879876] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:53.796 [2024-11-28 11:55:23.879887] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:100 nsid:1 lba:86776 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:53.796 [2024-11-28 11:55:23.879895] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:53.796 [2024-11-28 11:55:23.879905] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:88 nsid:1 lba:86784 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:53.796 [2024-11-28 11:55:23.879914] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:53.796 [2024-11-28 11:55:23.879923] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:86792 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:53.796 [2024-11-28 11:55:23.879932] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:53.796 [2024-11-28 11:55:23.879941] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:86800 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:53.796 [2024-11-28 11:55:23.879950] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:53.796 [2024-11-28 11:55:23.879960] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:86808 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:53.796 [2024-11-28 11:55:23.879969] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) 
qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:53.796 [2024-11-28 11:55:23.879979] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:86816 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:53.796 [2024-11-28 11:55:23.879987] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:53.796 [2024-11-28 11:55:23.879997] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:86824 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:53.796 [2024-11-28 11:55:23.880012] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:53.796 [2024-11-28 11:55:23.880022] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:86832 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:53.796 [2024-11-28 11:55:23.880030] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:53.796 [2024-11-28 11:55:23.880041] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:96 nsid:1 lba:86840 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:53.796 [2024-11-28 11:55:23.880049] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:53.796 [2024-11-28 11:55:23.880059] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:119 nsid:1 lba:86848 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:53.796 [2024-11-28 11:55:23.880067] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:53.796 [2024-11-28 11:55:23.880078] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:86856 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:53.796 [2024-11-28 11:55:23.880087] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:53.796 [2024-11-28 11:55:23.880098] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:87312 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:53.796 [2024-11-28 11:55:23.880107] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:53.796 [2024-11-28 11:55:23.880117] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:87320 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:53.796 [2024-11-28 11:55:23.880125] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:53.796 [2024-11-28 11:55:23.880135] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:64 nsid:1 lba:87328 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:53.796 [2024-11-28 11:55:23.880144] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:53.796 [2024-11-28 11:55:23.880154] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:103 nsid:1 lba:87336 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:53.796 [2024-11-28 11:55:23.880162] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 
00:24:53.796 [2024-11-28 11:55:23.880172] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:87344 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:53.796 [2024-11-28 11:55:23.880180] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:53.796 [2024-11-28 11:55:23.880190] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:87352 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:53.796 [2024-11-28 11:55:23.880199] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:53.796 [2024-11-28 11:55:23.880209] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:87360 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:53.796 [2024-11-28 11:55:23.880217] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:53.796 [2024-11-28 11:55:23.880227] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:117 nsid:1 lba:87368 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:53.796 [2024-11-28 11:55:23.880236] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:53.796 [2024-11-28 11:55:23.880246] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:111 nsid:1 lba:86864 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:53.796 [2024-11-28 11:55:23.880255] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:53.796 [2024-11-28 11:55:23.880266] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:86872 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:53.796 [2024-11-28 11:55:23.880274] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:53.796 [2024-11-28 11:55:23.880284] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:105 nsid:1 lba:86880 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:53.796 [2024-11-28 11:55:23.880293] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:53.796 [2024-11-28 11:55:23.880304] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:83 nsid:1 lba:86888 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:53.796 [2024-11-28 11:55:23.880312] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:53.796 [2024-11-28 11:55:23.880323] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:95 nsid:1 lba:86896 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:53.796 [2024-11-28 11:55:23.880331] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:53.796 [2024-11-28 11:55:23.880366] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:124 nsid:1 lba:86904 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:53.796 [2024-11-28 11:55:23.880392] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:53.796 [2024-11-28 11:55:23.880426] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:65 nsid:1 lba:86912 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:53.796 [2024-11-28 11:55:23.880435] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:53.796 [2024-11-28 11:55:23.880461] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:81 nsid:1 lba:86920 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:53.796 [2024-11-28 11:55:23.880470] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:53.796 [2024-11-28 11:55:23.880481] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:86928 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:53.796 [2024-11-28 11:55:23.880489] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:53.796 [2024-11-28 11:55:23.880500] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:91 nsid:1 lba:86936 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:53.796 [2024-11-28 11:55:23.880509] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:53.796 [2024-11-28 11:55:23.880519] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:82 nsid:1 lba:86944 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:53.796 [2024-11-28 11:55:23.880528] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:53.796 [2024-11-28 11:55:23.880539] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:86952 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:53.796 [2024-11-28 11:55:23.880553] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:53.796 [2024-11-28 11:55:23.880563] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:86960 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:53.796 [2024-11-28 11:55:23.880572] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:53.796 [2024-11-28 11:55:23.880582] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:69 nsid:1 lba:86968 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:53.796 [2024-11-28 11:55:23.880591] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:53.796 [2024-11-28 11:55:23.880602] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:86976 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:53.796 [2024-11-28 11:55:23.880610] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:53.796 [2024-11-28 11:55:23.880621] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:86984 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:53.796 [2024-11-28 11:55:23.880630] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:53.796 [2024-11-28 11:55:23.880640] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:70 nsid:1 lba:87376 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:53.796 [2024-11-28 11:55:23.880650] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:53.796 [2024-11-28 11:55:23.880662] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:87384 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:53.796 [2024-11-28 11:55:23.880671] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:53.796 [2024-11-28 11:55:23.880682] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:107 nsid:1 lba:87392 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:53.797 [2024-11-28 11:55:23.880691] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:53.797 [2024-11-28 11:55:23.880702] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:101 nsid:1 lba:87400 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:53.797 [2024-11-28 11:55:23.880711] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:53.797 [2024-11-28 11:55:23.880722] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:87408 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:53.797 [2024-11-28 11:55:23.880731] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:53.797 [2024-11-28 11:55:23.880741] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:120 nsid:1 lba:87416 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:53.797 [2024-11-28 11:55:23.880751] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:53.797 [2024-11-28 11:55:23.880766] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:93 nsid:1 lba:87424 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:53.797 [2024-11-28 11:55:23.880776] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:53.797 [2024-11-28 11:55:23.880786] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:109 nsid:1 lba:87432 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:53.797 [2024-11-28 11:55:23.880796] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:53.797 [2024-11-28 11:55:23.880821] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:106 nsid:1 lba:86992 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:53.797 [2024-11-28 11:55:23.880830] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:53.797 [2024-11-28 11:55:23.880855] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:87000 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:53.797 [2024-11-28 11:55:23.880864] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:53.797 [2024-11-28 11:55:23.880874] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ 
sqid:1 cid:24 nsid:1 lba:87008 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:53.797 [2024-11-28 11:55:23.880897] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:53.797 [2024-11-28 11:55:23.880923] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:79 nsid:1 lba:87016 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:53.797 [2024-11-28 11:55:23.880931] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:53.797 [2024-11-28 11:55:23.880941] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:92 nsid:1 lba:87024 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:53.797 [2024-11-28 11:55:23.880949] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:53.797 [2024-11-28 11:55:23.880959] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:87032 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:53.797 [2024-11-28 11:55:23.880967] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:53.797 [2024-11-28 11:55:23.880977] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:87040 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:53.797 [2024-11-28 11:55:23.880985] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:53.797 [2024-11-28 11:55:23.880995] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:86 nsid:1 lba:87048 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:53.797 [2024-11-28 11:55:23.881004] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:53.797 [2024-11-28 11:55:23.881013] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:116 nsid:1 lba:87056 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:53.797 [2024-11-28 11:55:23.881022] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:53.797 [2024-11-28 11:55:23.881032] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:67 nsid:1 lba:87064 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:53.797 [2024-11-28 11:55:23.881040] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:53.797 [2024-11-28 11:55:23.881050] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:87072 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:53.797 [2024-11-28 11:55:23.881059] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:53.797 [2024-11-28 11:55:23.881068] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:87080 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:53.797 [2024-11-28 11:55:23.881077] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:53.797 [2024-11-28 11:55:23.881088] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:87088 len:8 SGL 
TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:53.797 [2024-11-28 11:55:23.881096] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:53.797 [2024-11-28 11:55:23.881106] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:80 nsid:1 lba:87096 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:53.797 [2024-11-28 11:55:23.881115] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:53.797 [2024-11-28 11:55:23.881130] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:87104 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:53.797 [2024-11-28 11:55:23.881139] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:53.797 [2024-11-28 11:55:23.881149] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:87112 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:53.797 [2024-11-28 11:55:23.881158] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:53.797 [2024-11-28 11:55:23.881168] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:87120 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:53.797 [2024-11-28 11:55:23.881176] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:53.797 [2024-11-28 11:55:23.881186] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:87128 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:53.797 [2024-11-28 11:55:23.881194] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:53.797 [2024-11-28 11:55:23.881205] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:108 nsid:1 lba:87136 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:53.797 [2024-11-28 11:55:23.881213] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:53.797 [2024-11-28 11:55:23.881223] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:89 nsid:1 lba:87144 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:53.797 [2024-11-28 11:55:23.881231] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:53.797 [2024-11-28 11:55:23.881241] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:122 nsid:1 lba:87152 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:53.797 [2024-11-28 11:55:23.881250] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:53.797 [2024-11-28 11:55:23.881260] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:87160 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:53.797 [2024-11-28 11:55:23.881268] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:53.797 [2024-11-28 11:55:23.881278] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:87168 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:24:53.797 [2024-11-28 11:55:23.881286] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:53.797 [2024-11-28 11:55:23.881296] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1495a00 is same with the state(6) to be set 00:24:53.797 [2024-11-28 11:55:23.881324] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:24:53.797 [2024-11-28 11:55:23.881332] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:24:53.797 [2024-11-28 11:55:23.881341] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:87176 len:8 PRP1 0x0 PRP2 0x0 00:24:53.797 [2024-11-28 11:55:23.881350] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:53.797 [2024-11-28 11:55:23.881360] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:24:53.797 [2024-11-28 11:55:23.881368] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:24:53.797 [2024-11-28 11:55:23.881376] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:87440 len:8 PRP1 0x0 PRP2 0x0 00:24:53.797 [2024-11-28 11:55:23.881385] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:53.797 [2024-11-28 11:55:23.881393] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:24:53.797 [2024-11-28 11:55:23.881400] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:24:53.797 [2024-11-28 11:55:23.881440] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:87448 len:8 PRP1 0x0 PRP2 0x0 00:24:53.797 [2024-11-28 11:55:23.881451] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:53.797 [2024-11-28 11:55:23.881461] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:24:53.797 [2024-11-28 11:55:23.881473] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:24:53.797 [2024-11-28 11:55:23.881481] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:87456 len:8 PRP1 0x0 PRP2 0x0 00:24:53.797 [2024-11-28 11:55:23.881490] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:53.797 [2024-11-28 11:55:23.881499] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:24:53.797 [2024-11-28 11:55:23.881507] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:24:53.797 [2024-11-28 11:55:23.881514] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:87464 len:8 PRP1 0x0 PRP2 0x0 00:24:53.798 [2024-11-28 11:55:23.881523] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:53.798 [2024-11-28 11:55:23.881532] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:24:53.798 [2024-11-28 11:55:23.881539] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:24:53.798 [2024-11-28 11:55:23.881546] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:87472 len:8 PRP1 0x0 PRP2 0x0 00:24:53.798 [2024-11-28 11:55:23.881554] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:53.798 [2024-11-28 11:55:23.881563] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:24:53.798 [2024-11-28 11:55:23.881570] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:24:53.798 [2024-11-28 11:55:23.881577] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:87480 len:8 PRP1 0x0 PRP2 0x0 00:24:53.798 [2024-11-28 11:55:23.881585] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:53.798 [2024-11-28 11:55:23.881600] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:24:53.798 [2024-11-28 11:55:23.881607] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:24:53.798 [2024-11-28 11:55:23.881614] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:87488 len:8 PRP1 0x0 PRP2 0x0 00:24:53.798 [2024-11-28 11:55:23.881623] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:53.798 [2024-11-28 11:55:23.881632] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:24:53.798 [2024-11-28 11:55:23.881638] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:24:53.798 [2024-11-28 11:55:23.881646] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:87496 len:8 PRP1 0x0 PRP2 0x0 00:24:53.798 [2024-11-28 11:55:23.881654] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:53.798 [2024-11-28 11:55:23.881663] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:24:53.798 [2024-11-28 11:55:23.881669] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:24:53.798 [2024-11-28 11:55:23.881676] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:87504 len:8 PRP1 0x0 PRP2 0x0 00:24:53.798 [2024-11-28 11:55:23.881685] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:53.798 [2024-11-28 11:55:23.881693] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:24:53.798 [2024-11-28 11:55:23.881715] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:24:53.798 [2024-11-28 11:55:23.881727] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:87512 len:8 PRP1 0x0 PRP2 0x0 00:24:53.798 [2024-11-28 11:55:23.881735] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:53.798 [2024-11-28 11:55:23.881744] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:24:53.798 [2024-11-28 11:55:23.881751] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:24:53.798 [2024-11-28 11:55:23.881759] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: 
WRITE sqid:1 cid:0 nsid:1 lba:87520 len:8 PRP1 0x0 PRP2 0x0 00:24:53.798 [2024-11-28 11:55:23.881767] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:53.798 [2024-11-28 11:55:23.881776] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:24:53.798 [2024-11-28 11:55:23.881783] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:24:53.798 [2024-11-28 11:55:23.881791] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:87528 len:8 PRP1 0x0 PRP2 0x0 00:24:53.798 [2024-11-28 11:55:23.881814] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:53.798 [2024-11-28 11:55:23.881822] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:24:53.798 [2024-11-28 11:55:23.881829] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:24:53.798 [2024-11-28 11:55:23.881836] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:87536 len:8 PRP1 0x0 PRP2 0x0 00:24:53.798 [2024-11-28 11:55:23.881844] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:53.798 [2024-11-28 11:55:23.881852] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:24:53.798 [2024-11-28 11:55:23.881858] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:24:53.798 [2024-11-28 11:55:23.881865] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:87544 len:8 PRP1 0x0 PRP2 0x0 00:24:53.798 [2024-11-28 11:55:23.881873] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:53.798 [2024-11-28 11:55:23.881881] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:24:53.798 [2024-11-28 11:55:23.881888] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:24:53.798 [2024-11-28 11:55:23.881897] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:87552 len:8 PRP1 0x0 PRP2 0x0 00:24:53.798 [2024-11-28 11:55:23.881905] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:53.798 [2024-11-28 11:55:23.881913] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:24:53.798 [2024-11-28 11:55:23.881920] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:24:53.798 [2024-11-28 11:55:23.881927] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:87560 len:8 PRP1 0x0 PRP2 0x0 00:24:53.798 [2024-11-28 11:55:23.881935] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:53.798 [2024-11-28 11:55:23.882096] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:24:53.798 [2024-11-28 11:55:23.882112] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:53.798 [2024-11-28 11:55:23.882123] nvme_qpair.c: 
223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:24:53.798 [2024-11-28 11:55:23.882131] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:53.798 [2024-11-28 11:55:23.882140] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:24:53.798 [2024-11-28 11:55:23.882148] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:53.798 [2024-11-28 11:55:23.882162] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:24:53.798 [2024-11-28 11:55:23.882171] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:53.798 [2024-11-28 11:55:23.882179] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1476970 is same with the state(6) to be set 00:24:53.798 [2024-11-28 11:55:23.882407] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 3] resetting controller 00:24:53.798 [2024-11-28 11:55:23.882446] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1476970 (9): Bad file descriptor 00:24:53.798 [2024-11-28 11:55:23.882615] uring.c: 664:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:24:53.798 [2024-11-28 11:55:23.882641] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1476970 with addr=10.0.0.3, port=4420 00:24:53.798 [2024-11-28 11:55:23.882653] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1476970 is same with the state(6) to be set 00:24:53.798 [2024-11-28 11:55:23.882675] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1476970 (9): Bad file descriptor 00:24:53.798 [2024-11-28 11:55:23.882692] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 3] Ctrlr is in error state 00:24:53.798 [2024-11-28 11:55:23.882702] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 3] controller reinitialization failed 00:24:53.798 [2024-11-28 11:55:23.882713] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 3] in failed state. 00:24:53.798 [2024-11-28 11:55:23.882724] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 3] Resetting controller failed. 
00:24:53.798 [2024-11-28 11:55:23.882735] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 3] resetting controller 00:24:53.798 11:55:23 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@101 -- # sleep 3 00:24:54.992 5409.00 IOPS, 21.13 MiB/s [2024-11-28T11:55:25.118Z] [2024-11-28 11:55:24.882825] uring.c: 664:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.992 [2024-11-28 11:55:24.882915] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1476970 with addr=10.0.0.3, port=4420 00:24:54.992 [2024-11-28 11:55:24.882943] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1476970 is same with the state(6) to be set 00:24:54.992 [2024-11-28 11:55:24.882961] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1476970 (9): Bad file descriptor 00:24:54.992 [2024-11-28 11:55:24.882977] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 3] Ctrlr is in error state 00:24:54.992 [2024-11-28 11:55:24.882985] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 3] controller reinitialization failed 00:24:54.992 [2024-11-28 11:55:24.882994] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 3] in failed state. 00:24:54.992 [2024-11-28 11:55:24.883003] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 3] Resetting controller failed. 00:24:54.992 [2024-11-28 11:55:24.883012] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 3] resetting controller 00:24:55.928 3606.00 IOPS, 14.09 MiB/s [2024-11-28T11:55:26.054Z] [2024-11-28 11:55:25.883088] uring.c: 664:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:24:55.928 [2024-11-28 11:55:25.883158] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1476970 with addr=10.0.0.3, port=4420 00:24:55.928 [2024-11-28 11:55:25.883171] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1476970 is same with the state(6) to be set 00:24:55.928 [2024-11-28 11:55:25.883189] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1476970 (9): Bad file descriptor 00:24:55.928 [2024-11-28 11:55:25.883205] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 3] Ctrlr is in error state 00:24:55.928 [2024-11-28 11:55:25.883214] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 3] controller reinitialization failed 00:24:55.928 [2024-11-28 11:55:25.883222] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 3] in failed state. 00:24:55.928 [2024-11-28 11:55:25.883231] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 3] Resetting controller failed. 
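The uring_sock_create errors above all carry errno = 111, which on Linux is ECONNREFUSED: nothing is accepting connections on 10.0.0.3:4420 while the target listener is down, so each reconnect attempt is refused until the script re-adds the listener in the next block. A one-liner to decode the value (illustration only, not part of the test scripts):

  # errno 111 on Linux is ECONNREFUSED ("Connection refused")
  python3 -c 'import errno, os; print(errno.errorcode[111], "-", os.strerror(111))'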
00:24:55.928 [2024-11-28 11:55:25.883240] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 3] resetting controller 00:24:56.865 2704.50 IOPS, 10.56 MiB/s [2024-11-28T11:55:26.991Z] [2024-11-28 11:55:26.886095] uring.c: 664:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:24:56.865 [2024-11-28 11:55:26.886165] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1476970 with addr=10.0.0.3, port=4420 00:24:56.865 [2024-11-28 11:55:26.886177] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1476970 is same with the state(6) to be set 00:24:56.865 [2024-11-28 11:55:26.886432] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1476970 (9): Bad file descriptor 00:24:56.865 [2024-11-28 11:55:26.886705] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 3] Ctrlr is in error state 00:24:56.865 [2024-11-28 11:55:26.886718] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 3] controller reinitialization failed 00:24:56.865 [2024-11-28 11:55:26.886728] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 3] in failed state. 00:24:56.865 [2024-11-28 11:55:26.886738] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 3] Resetting controller failed. 00:24:56.865 [2024-11-28 11:55:26.886748] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 3] resetting controller 00:24:56.865 11:55:26 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@102 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 00:24:57.124 [2024-11-28 11:55:27.169005] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:24:57.124 11:55:27 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@103 -- # wait 99268 00:24:57.951 2163.60 IOPS, 8.45 MiB/s [2024-11-28T11:55:28.077Z] [2024-11-28 11:55:27.918430] bdev_nvme.c:2282:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 4] Resetting controller successful. 
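The loop of failed reconnects ends once host/timeout.sh@102 re-adds the TCP listener and bdev_nvme reports "Resetting controller successful". Reduced to its essentials, the listener bounce driven by the test looks like the sketch below; the rpc.py invocations are the ones recorded in this log (paths shortened to the repo root), and the remove step itself happened earlier than this excerpt:

  # drop the listener: in-flight I/O is aborted (SQ deletion) and reconnects start failing
  scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420
  sleep 3    # host/timeout.sh@101: keep the host retrying against the dead address
  # restore the listener: the next reconnect attempt succeeds and the controller reset completes
  scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420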
00:24:59.825 3122.67 IOPS, 12.20 MiB/s [2024-11-28T11:55:30.887Z] 4077.71 IOPS, 15.93 MiB/s [2024-11-28T11:55:31.825Z] 4794.50 IOPS, 18.73 MiB/s [2024-11-28T11:55:32.763Z] 5349.33 IOPS, 20.90 MiB/s [2024-11-28T11:55:32.763Z] 5798.40 IOPS, 22.65 MiB/s 00:25:02.637 Latency(us) 00:25:02.637 [2024-11-28T11:55:32.763Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:25:02.637 Job: NVMe0n1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096) 00:25:02.637 Verification LBA range: start 0x0 length 0x4000 00:25:02.637 NVMe0n1 : 10.01 5805.25 22.68 4343.55 0.00 12588.12 714.94 3019898.88 00:25:02.637 [2024-11-28T11:55:32.763Z] =================================================================================================================== 00:25:02.637 [2024-11-28T11:55:32.763Z] Total : 5805.25 22.68 4343.55 0.00 12588.12 0.00 3019898.88 00:25:02.637 { 00:25:02.637 "results": [ 00:25:02.637 { 00:25:02.637 "job": "NVMe0n1", 00:25:02.637 "core_mask": "0x4", 00:25:02.637 "workload": "verify", 00:25:02.637 "status": "finished", 00:25:02.637 "verify_range": { 00:25:02.637 "start": 0, 00:25:02.637 "length": 16384 00:25:02.637 }, 00:25:02.637 "queue_depth": 128, 00:25:02.637 "io_size": 4096, 00:25:02.637 "runtime": 10.011626, 00:25:02.637 "iops": 5805.250815402013, 00:25:02.637 "mibps": 22.676760997664115, 00:25:02.637 "io_failed": 43486, 00:25:02.637 "io_timeout": 0, 00:25:02.637 "avg_latency_us": 12588.116878137118, 00:25:02.637 "min_latency_us": 714.9381818181819, 00:25:02.637 "max_latency_us": 3019898.88 00:25:02.637 } 00:25:02.637 ], 00:25:02.637 "core_count": 1 00:25:02.637 } 00:25:02.897 11:55:32 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@105 -- # killprocess 99147 00:25:02.897 11:55:32 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@954 -- # '[' -z 99147 ']' 00:25:02.897 11:55:32 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@958 -- # kill -0 99147 00:25:02.897 11:55:32 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@959 -- # uname 00:25:02.897 11:55:32 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:25:02.897 11:55:32 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 99147 00:25:02.897 killing process with pid 99147 00:25:02.897 Received shutdown signal, test time was about 10.000000 seconds 00:25:02.897 00:25:02.897 Latency(us) 00:25:02.897 [2024-11-28T11:55:33.023Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:25:02.897 [2024-11-28T11:55:33.023Z] =================================================================================================================== 00:25:02.897 [2024-11-28T11:55:33.023Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:25:02.897 11:55:32 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@960 -- # process_name=reactor_2 00:25:02.897 11:55:32 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@964 -- # '[' reactor_2 = sudo ']' 00:25:02.897 11:55:32 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@972 -- # echo 'killing process with pid 99147' 00:25:02.897 11:55:32 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@973 -- # kill 99147 00:25:02.897 11:55:32 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@978 -- # wait 99147 00:25:02.897 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 
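Two quick consistency checks on the numbers reported above (values taken straight from the JSON block and the per-second samples): the MiB/s column is just IOPS x 4 KiB, and the samples printed while the connection was down (5409.00, 3606.00, 2704.50, 2163.60) are consistent with a running average of the same ~10818 completed I/Os over 2, 3, 4 and 5 seconds, i.e. no new completions while the listener was gone.

  # MiB/s from IOPS at the 4096-byte I/O size used by this job
  awk 'BEGIN { printf "%.2f MiB/s\n", 5805.25 * 4096 / 1048576 }'   # -> 22.68 MiB/s
  # each stalled-period sample corresponds to the same completed-I/O count
  awk 'BEGIN { print 5409.00*2, 3606.00*3, 2704.50*4, 2163.60*5 }'  # -> 10818 10818 10818 10818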
00:25:02.897 11:55:32 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@110 -- # bdevperf_pid=99377 00:25:02.897 11:55:32 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@109 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w randread -t 10 -f 00:25:02.898 11:55:32 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@112 -- # waitforlisten 99377 /var/tmp/bdevperf.sock 00:25:02.898 11:55:32 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@835 -- # '[' -z 99377 ']' 00:25:02.898 11:55:32 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:25:02.898 11:55:32 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@840 -- # local max_retries=100 00:25:02.898 11:55:32 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:25:02.898 11:55:32 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@844 -- # xtrace_disable 00:25:02.898 11:55:32 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@10 -- # set +x 00:25:03.192 [2024-11-28 11:55:33.041572] Starting SPDK v25.01-pre git sha1 35cd3e84d / DPDK 24.11.0-rc4 initialization... 00:25:03.192 [2024-11-28 11:55:33.042558] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid99377 ] 00:25:03.192 [2024-11-28 11:55:33.174625] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.11.0-rc4 is used. There is no support for it in SPDK. Enabled only for validation. 
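The second run launches bdevperf idle (-z), so the workload only starts once perform_tests is sent over its RPC socket. Stripped of the xtrace noise, the launch pattern recorded here is roughly the sketch below (waitforlisten is the autotest helper the script itself calls; paths shortened to the repo root). The controller attach that follows in the log adds --ctrlr-loss-timeout-sec 5 and --reconnect-delay-sec 2, which bounds how long bdev_nvme keeps retrying after the listener is removed.

  # start bdevperf as an idle RPC server (-z) and remember its pid
  build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w randread -t 10 -f &
  bdevperf_pid=$!
  # block until the UNIX-domain RPC socket is accepting connections before sending RPCs
  waitforlisten "$bdevperf_pid" /var/tmp/bdevperf.sock
  # subsequent configuration (bdev_nvme_set_options, bdev_nvme_attach_controller, perform_tests)
  # is issued against this socket with: rpc.py -s /var/tmp/bdevperf.sock ...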
00:25:03.192 [2024-11-28 11:55:33.199308] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:25:03.192 [2024-11-28 11:55:33.235181] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:25:03.192 [2024-11-28 11:55:33.291733] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:25:04.130 11:55:33 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:25:04.130 11:55:33 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@868 -- # return 0 00:25:04.130 11:55:33 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@116 -- # dtrace_pid=99393 00:25:04.130 11:55:33 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@115 -- # /home/vagrant/spdk_repo/spdk/scripts/bpftrace.sh 99377 /home/vagrant/spdk_repo/spdk/scripts/bpf/nvmf_timeout.bt 00:25:04.130 11:55:33 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@118 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_set_options -r -1 -e 9 00:25:04.389 11:55:34 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@120 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 --ctrlr-loss-timeout-sec 5 --reconnect-delay-sec 2 00:25:04.648 NVMe0n1 00:25:04.648 11:55:34 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@124 -- # rpc_pid=99433 00:25:04.648 11:55:34 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@123 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:25:04.648 11:55:34 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@125 -- # sleep 1 00:25:04.648 Running I/O for 10 seconds... 00:25:05.585 11:55:35 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 00:25:05.848 16677.00 IOPS, 65.14 MiB/s [2024-11-28T11:55:35.974Z] [2024-11-28 11:55:35.863958] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d34de0 is same with the state(6) to be set 00:25:05.848 [2024-11-28 11:55:35.864193] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d34de0 is same with the state(6) to be set 00:25:05.848 [2024-11-28 11:55:35.864224] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d34de0 is same with the state(6) to be set 00:25:05.848 [2024-11-28 11:55:35.864233] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d34de0 is same with the state(6) to be set 00:25:05.848 [2024-11-28 11:55:35.864240] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d34de0 is same with the state(6) to be set 00:25:05.848 [2024-11-28 11:55:35.864248] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d34de0 is same with the state(6) to be set 00:25:05.848 [2024-11-28 11:55:35.864256] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d34de0 is same with the state(6) to be set 00:25:05.848 [2024-11-28 11:55:35.864264] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d34de0 is same with the state(6) to be set 00:25:05.848 [2024-11-28 11:55:35.864271] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d34de0 is same with the state(6) to be set 00:25:05.848 [2024-11-28 11:55:35.864279] 
tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d34de0 is same with the state(6) to be set 00:25:05.848 [2024-11-28 11:55:35.864286] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d34de0 is same with the state(6) to be set 00:25:05.848 [2024-11-28 11:55:35.864294] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d34de0 is same with the state(6) to be set 00:25:05.848 [2024-11-28 11:55:35.864302] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d34de0 is same with the state(6) to be set 00:25:05.848 [2024-11-28 11:55:35.864326] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d34de0 is same with the state(6) to be set 00:25:05.848 [2024-11-28 11:55:35.864336] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d34de0 is same with the state(6) to be set 00:25:05.848 [2024-11-28 11:55:35.864344] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d34de0 is same with the state(6) to be set 00:25:05.848 [2024-11-28 11:55:35.864352] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d34de0 is same with the state(6) to be set 00:25:05.848 [2024-11-28 11:55:35.864360] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d34de0 is same with the state(6) to be set 00:25:05.848 [2024-11-28 11:55:35.864367] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d34de0 is same with the state(6) to be set 00:25:05.848 [2024-11-28 11:55:35.864375] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d34de0 is same with the state(6) to be set 00:25:05.848 [2024-11-28 11:55:35.864382] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d34de0 is same with the state(6) to be set 00:25:05.848 [2024-11-28 11:55:35.864390] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d34de0 is same with the state(6) to be set 00:25:05.848 [2024-11-28 11:55:35.864397] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d34de0 is same with the state(6) to be set 00:25:05.848 [2024-11-28 11:55:35.864405] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d34de0 is same with the state(6) to be set 00:25:05.848 [2024-11-28 11:55:35.864412] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d34de0 is same with the state(6) to be set 00:25:05.848 [2024-11-28 11:55:35.864419] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d34de0 is same with the state(6) to be set 00:25:05.848 [2024-11-28 11:55:35.864426] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d34de0 is same with the state(6) to be set 00:25:05.848 [2024-11-28 11:55:35.864434] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d34de0 is same with the state(6) to be set 00:25:05.848 [2024-11-28 11:55:35.864445] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d34de0 is same with the state(6) to be set 00:25:05.849 [2024-11-28 11:55:35.864453] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d34de0 is same with the state(6) to be set 00:25:05.849 [2024-11-28 11:55:35.864460] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d34de0 is same with the 
state(6) to be set 00:25:05.849 [2024-11-28 11:55:35.864468] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d34de0 is same with the state(6) to be set 00:25:05.849 [2024-11-28 11:55:35.864476] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d34de0 is same with the state(6) to be set 00:25:05.849 [2024-11-28 11:55:35.864484] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d34de0 is same with the state(6) to be set 00:25:05.849 [2024-11-28 11:55:35.864492] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d34de0 is same with the state(6) to be set 00:25:05.849 [2024-11-28 11:55:35.864500] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d34de0 is same with the state(6) to be set 00:25:05.849 [2024-11-28 11:55:35.864508] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d34de0 is same with the state(6) to be set 00:25:05.849 [2024-11-28 11:55:35.864530] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d34de0 is same with the state(6) to be set 00:25:05.849 [2024-11-28 11:55:35.864537] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d34de0 is same with the state(6) to be set 00:25:05.849 [2024-11-28 11:55:35.864545] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d34de0 is same with the state(6) to be set 00:25:05.849 [2024-11-28 11:55:35.864552] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d34de0 is same with the state(6) to be set 00:25:05.849 [2024-11-28 11:55:35.864559] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d34de0 is same with the state(6) to be set 00:25:05.849 [2024-11-28 11:55:35.864566] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d34de0 is same with the state(6) to be set 00:25:05.849 [2024-11-28 11:55:35.864574] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d34de0 is same with the state(6) to be set 00:25:05.849 [2024-11-28 11:55:35.864581] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d34de0 is same with the state(6) to be set 00:25:05.849 [2024-11-28 11:55:35.864588] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d34de0 is same with the state(6) to be set 00:25:05.849 [2024-11-28 11:55:35.864595] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d34de0 is same with the state(6) to be set 00:25:05.849 [2024-11-28 11:55:35.864603] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d34de0 is same with the state(6) to be set 00:25:05.849 [2024-11-28 11:55:35.864610] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d34de0 is same with the state(6) to be set 00:25:05.849 [2024-11-28 11:55:35.864616] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d34de0 is same with the state(6) to be set 00:25:05.849 [2024-11-28 11:55:35.864623] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d34de0 is same with the state(6) to be set 00:25:05.849 [2024-11-28 11:55:35.864630] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d34de0 is same with the state(6) to be set 00:25:05.849 [2024-11-28 11:55:35.864637] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: 
*ERROR*: The recv state of tqpair=0x1d34de0 is same with the state(6) to be set 00:25:05.849 [2024-11-28 11:55:35.864644] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d34de0 is same with the state(6) to be set 00:25:05.849 [2024-11-28 11:55:35.864651] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d34de0 is same with the state(6) to be set 00:25:05.849 [2024-11-28 11:55:35.864659] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d34de0 is same with the state(6) to be set 00:25:05.849 [2024-11-28 11:55:35.864681] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d34de0 is same with the state(6) to be set 00:25:05.849 [2024-11-28 11:55:35.864687] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d34de0 is same with the state(6) to be set 00:25:05.849 [2024-11-28 11:55:35.864694] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d34de0 is same with the state(6) to be set 00:25:05.849 [2024-11-28 11:55:35.864702] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d34de0 is same with the state(6) to be set 00:25:05.849 [2024-11-28 11:55:35.864779] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:25:05.849 [2024-11-28 11:55:35.864821] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:05.849 [2024-11-28 11:55:35.864832] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:25:05.849 [2024-11-28 11:55:35.864841] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:05.849 [2024-11-28 11:55:35.864849] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:25:05.849 [2024-11-28 11:55:35.864857] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:05.849 [2024-11-28 11:55:35.864865] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:25:05.849 [2024-11-28 11:55:35.864873] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:05.849 [2024-11-28 11:55:35.864881] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xbf3970 is same with the state(6) to be set 00:25:05.849 [2024-11-28 11:55:35.864930] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:76 nsid:1 lba:30968 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:05.849 [2024-11-28 11:55:35.864944] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:05.849 [2024-11-28 11:55:35.864959] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:75 nsid:1 lba:60992 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:05.849 [2024-11-28 11:55:35.864968] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:05.849 [2024-11-28 11:55:35.864978] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:74 nsid:1 lba:128600 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:05.849 [2024-11-28 11:55:35.864986] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:05.849 [2024-11-28 11:55:35.864995] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:73 nsid:1 lba:58304 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:05.849 [2024-11-28 11:55:35.865003] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:05.849 [2024-11-28 11:55:35.865013] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:72 nsid:1 lba:26960 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:05.849 [2024-11-28 11:55:35.865020] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:05.849 [2024-11-28 11:55:35.865029] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:80528 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:05.849 [2024-11-28 11:55:35.865037] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:05.849 [2024-11-28 11:55:35.865049] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:68 nsid:1 lba:125368 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:05.849 [2024-11-28 11:55:35.865057] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:05.849 [2024-11-28 11:55:35.865066] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:67 nsid:1 lba:87912 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:05.849 [2024-11-28 11:55:35.865074] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:05.849 [2024-11-28 11:55:35.865083] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:66 nsid:1 lba:16976 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:05.849 [2024-11-28 11:55:35.865092] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:05.849 [2024-11-28 11:55:35.865101] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:65 nsid:1 lba:80656 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:05.849 [2024-11-28 11:55:35.865108] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:05.849 [2024-11-28 11:55:35.865118] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:64 nsid:1 lba:112952 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:05.849 [2024-11-28 11:55:35.865125] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:05.850 [2024-11-28 11:55:35.865134] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:26376 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:05.850 [2024-11-28 11:55:35.865143] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:05.850 [2024-11-28 11:55:35.865155] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: 
READ sqid:1 cid:62 nsid:1 lba:102472 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:05.850 [2024-11-28 11:55:35.865163] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:05.850 [2024-11-28 11:55:35.865173] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:22992 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:05.850 [2024-11-28 11:55:35.865181] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:05.850 [2024-11-28 11:55:35.865191] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:114144 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:05.850 [2024-11-28 11:55:35.865199] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:05.850 [2024-11-28 11:55:35.865208] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:123784 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:05.850 [2024-11-28 11:55:35.865216] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:05.850 [2024-11-28 11:55:35.865225] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:105728 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:05.850 [2024-11-28 11:55:35.865232] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:05.850 [2024-11-28 11:55:35.865241] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:45680 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:05.850 [2024-11-28 11:55:35.865249] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:05.850 [2024-11-28 11:55:35.865257] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:22064 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:05.850 [2024-11-28 11:55:35.865267] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:05.850 [2024-11-28 11:55:35.865277] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:128856 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:05.850 [2024-11-28 11:55:35.865284] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:05.850 [2024-11-28 11:55:35.865307] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:81848 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:05.850 [2024-11-28 11:55:35.865317] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:05.850 [2024-11-28 11:55:35.865328] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:123424 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:05.850 [2024-11-28 11:55:35.865335] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:05.850 [2024-11-28 11:55:35.865345] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:112184 
len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:05.850 [2024-11-28 11:55:35.865354] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:05.850 [2024-11-28 11:55:35.865363] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:112216 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:05.850 [2024-11-28 11:55:35.865371] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:05.850 [2024-11-28 11:55:35.865381] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:27944 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:05.850 [2024-11-28 11:55:35.865389] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:05.850 [2024-11-28 11:55:35.865398] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:6024 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:05.850 [2024-11-28 11:55:35.865406] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:05.850 [2024-11-28 11:55:35.865416] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:22384 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:05.850 [2024-11-28 11:55:35.865423] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:05.850 [2024-11-28 11:55:35.865433] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:118240 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:05.850 [2024-11-28 11:55:35.865441] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:05.850 [2024-11-28 11:55:35.865451] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:1960 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:05.850 [2024-11-28 11:55:35.865459] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:05.850 [2024-11-28 11:55:35.865468] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:41656 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:05.850 [2024-11-28 11:55:35.865477] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:05.850 [2024-11-28 11:55:35.865486] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:26848 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:05.850 [2024-11-28 11:55:35.865494] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:05.850 [2024-11-28 11:55:35.865504] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:79072 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:05.850 [2024-11-28 11:55:35.865512] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:05.850 [2024-11-28 11:55:35.865522] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:93928 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:25:05.850 [2024-11-28 11:55:35.865530] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:05.850 [2024-11-28 11:55:35.865539] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:124256 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:05.850 [2024-11-28 11:55:35.865547] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:05.850 [2024-11-28 11:55:35.865557] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:7944 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:05.850 [2024-11-28 11:55:35.865564] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:05.850 [2024-11-28 11:55:35.865573] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:31680 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:05.850 [2024-11-28 11:55:35.865581] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:05.850 [2024-11-28 11:55:35.865590] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:12352 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:05.850 [2024-11-28 11:55:35.865598] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:05.850 [2024-11-28 11:55:35.865606] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:66216 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:05.850 [2024-11-28 11:55:35.865614] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:05.850 [2024-11-28 11:55:35.865623] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:1016 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:05.850 [2024-11-28 11:55:35.865630] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:05.850 [2024-11-28 11:55:35.865639] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:64600 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:05.850 [2024-11-28 11:55:35.865647] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:05.850 [2024-11-28 11:55:35.865656] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:87656 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:05.850 [2024-11-28 11:55:35.865664] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:05.850 [2024-11-28 11:55:35.865673] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:28464 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:05.851 [2024-11-28 11:55:35.865681] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:05.851 [2024-11-28 11:55:35.865691] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:26408 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:05.851 [2024-11-28 11:55:35.865699] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:05.851 [2024-11-28 11:55:35.865708] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:35352 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:05.851 [2024-11-28 11:55:35.865719] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:05.851 [2024-11-28 11:55:35.865731] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:22952 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:05.851 [2024-11-28 11:55:35.865739] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:05.851 [2024-11-28 11:55:35.865749] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:74000 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:05.851 [2024-11-28 11:55:35.865758] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:05.851 [2024-11-28 11:55:35.865767] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:93608 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:05.851 [2024-11-28 11:55:35.865775] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:05.851 [2024-11-28 11:55:35.865785] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:90576 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:05.851 [2024-11-28 11:55:35.865792] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:05.851 [2024-11-28 11:55:35.865801] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:36800 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:05.851 [2024-11-28 11:55:35.865809] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:05.851 [2024-11-28 11:55:35.865818] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:8200 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:05.851 [2024-11-28 11:55:35.865825] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:05.851 [2024-11-28 11:55:35.865834] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:98800 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:05.851 [2024-11-28 11:55:35.865842] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:05.851 [2024-11-28 11:55:35.865851] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:55808 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:05.851 [2024-11-28 11:55:35.865858] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:05.851 [2024-11-28 11:55:35.865867] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:72688 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:05.851 [2024-11-28 11:55:35.865875] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:05.851 [2024-11-28 11:55:35.865885] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:5408 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:05.851 [2024-11-28 11:55:35.865892] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:05.851 [2024-11-28 11:55:35.865901] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:66560 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:05.851 [2024-11-28 11:55:35.865909] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:05.851 [2024-11-28 11:55:35.865918] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:70256 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:05.851 [2024-11-28 11:55:35.865925] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:05.851 [2024-11-28 11:55:35.865936] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:89872 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:05.851 [2024-11-28 11:55:35.865944] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:05.851 [2024-11-28 11:55:35.865954] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:121616 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:05.851 [2024-11-28 11:55:35.865962] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:05.851 [2024-11-28 11:55:35.865971] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:121200 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:05.851 [2024-11-28 11:55:35.865980] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:05.851 [2024-11-28 11:55:35.865989] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:91288 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:05.851 [2024-11-28 11:55:35.865997] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:05.851 [2024-11-28 11:55:35.866007] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:51024 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:05.851 [2024-11-28 11:55:35.866015] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:05.851 [2024-11-28 11:55:35.866025] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:11752 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:05.851 [2024-11-28 11:55:35.866033] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:05.851 [2024-11-28 11:55:35.866042] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:64824 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:05.851 [2024-11-28 11:55:35.866050] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) 
qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:05.851 [2024-11-28 11:55:35.866059] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:77 nsid:1 lba:4952 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:05.851 [2024-11-28 11:55:35.866067] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:05.851 [2024-11-28 11:55:35.866077] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:78 nsid:1 lba:32880 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:05.851 [2024-11-28 11:55:35.866085] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:05.851 [2024-11-28 11:55:35.866095] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:79 nsid:1 lba:113504 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:05.851 [2024-11-28 11:55:35.866103] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:05.851 [2024-11-28 11:55:35.866112] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:80 nsid:1 lba:20432 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:05.851 [2024-11-28 11:55:35.866120] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:05.851 [2024-11-28 11:55:35.866129] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:81 nsid:1 lba:56080 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:05.851 [2024-11-28 11:55:35.866137] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:05.851 [2024-11-28 11:55:35.866147] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:82 nsid:1 lba:123856 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:05.851 [2024-11-28 11:55:35.866155] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:05.851 [2024-11-28 11:55:35.866164] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:83 nsid:1 lba:111256 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:05.851 [2024-11-28 11:55:35.866172] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:05.851 [2024-11-28 11:55:35.866182] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:84 nsid:1 lba:10072 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:05.851 [2024-11-28 11:55:35.866190] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:05.851 [2024-11-28 11:55:35.866199] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:85 nsid:1 lba:106920 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:05.851 [2024-11-28 11:55:35.866207] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:05.852 [2024-11-28 11:55:35.866218] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:86 nsid:1 lba:26648 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:05.852 [2024-11-28 11:55:35.866226] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 
dnr:0 00:25:05.852 [2024-11-28 11:55:35.866235] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:87 nsid:1 lba:21536 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:05.852 [2024-11-28 11:55:35.866243] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:05.852 [2024-11-28 11:55:35.866252] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:88 nsid:1 lba:65336 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:05.852 [2024-11-28 11:55:35.866259] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:05.852 [2024-11-28 11:55:35.866269] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:89 nsid:1 lba:124536 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:05.852 [2024-11-28 11:55:35.866276] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:05.852 [2024-11-28 11:55:35.866286] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:90 nsid:1 lba:78232 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:05.852 [2024-11-28 11:55:35.866302] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:05.852 [2024-11-28 11:55:35.866313] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:91 nsid:1 lba:74296 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:05.852 [2024-11-28 11:55:35.866331] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:05.852 [2024-11-28 11:55:35.866346] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:92 nsid:1 lba:59520 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:05.852 [2024-11-28 11:55:35.866354] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:05.852 [2024-11-28 11:55:35.866363] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:93 nsid:1 lba:119408 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:05.852 [2024-11-28 11:55:35.866371] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:05.852 [2024-11-28 11:55:35.866381] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:94 nsid:1 lba:98840 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:05.852 [2024-11-28 11:55:35.866390] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:05.852 [2024-11-28 11:55:35.866399] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:95 nsid:1 lba:36976 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:05.852 [2024-11-28 11:55:35.866407] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:05.852 [2024-11-28 11:55:35.866416] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:96 nsid:1 lba:31168 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:05.852 [2024-11-28 11:55:35.866424] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:05.852 [2024-11-28 
11:55:35.866433] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:97 nsid:1 lba:43000 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:05.852 [2024-11-28 11:55:35.866441] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:05.852 [2024-11-28 11:55:35.866450] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:98 nsid:1 lba:43216 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:05.852 [2024-11-28 11:55:35.866467] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:05.852 [2024-11-28 11:55:35.866476] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:99 nsid:1 lba:78840 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:05.852 [2024-11-28 11:55:35.866491] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:05.852 [2024-11-28 11:55:35.866500] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:100 nsid:1 lba:130792 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:05.852 [2024-11-28 11:55:35.866508] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:05.852 [2024-11-28 11:55:35.866526] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:101 nsid:1 lba:54000 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:05.852 [2024-11-28 11:55:35.866539] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:05.852 [2024-11-28 11:55:35.866548] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:102 nsid:1 lba:29328 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:05.852 [2024-11-28 11:55:35.866556] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:05.852 [2024-11-28 11:55:35.866577] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:103 nsid:1 lba:21360 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:05.852 [2024-11-28 11:55:35.866585] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:05.852 [2024-11-28 11:55:35.866594] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:104 nsid:1 lba:100744 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:05.852 [2024-11-28 11:55:35.866602] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:05.852 [2024-11-28 11:55:35.866611] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:105 nsid:1 lba:129200 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:05.852 [2024-11-28 11:55:35.866619] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:05.852 [2024-11-28 11:55:35.866628] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:106 nsid:1 lba:96656 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:05.852 [2024-11-28 11:55:35.866636] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:05.852 [2024-11-28 11:55:35.866645] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:107 nsid:1 lba:30456 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:05.852 [2024-11-28 11:55:35.866653] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:05.852 [2024-11-28 11:55:35.866662] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:108 nsid:1 lba:98464 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:05.852 [2024-11-28 11:55:35.866670] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:05.852 [2024-11-28 11:55:35.866679] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:109 nsid:1 lba:124968 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:05.852 [2024-11-28 11:55:35.866686] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:05.852 [2024-11-28 11:55:35.866701] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:110 nsid:1 lba:95640 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:05.852 [2024-11-28 11:55:35.866716] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:05.852 [2024-11-28 11:55:35.866726] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:111 nsid:1 lba:38968 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:05.852 [2024-11-28 11:55:35.866735] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:05.852 [2024-11-28 11:55:35.866746] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:112 nsid:1 lba:12464 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:05.852 [2024-11-28 11:55:35.866754] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:05.852 [2024-11-28 11:55:35.866764] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:113 nsid:1 lba:18872 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:05.852 [2024-11-28 11:55:35.866774] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:05.852 [2024-11-28 11:55:35.866784] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:114 nsid:1 lba:109560 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:05.852 [2024-11-28 11:55:35.866792] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:05.852 [2024-11-28 11:55:35.866802] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:115 nsid:1 lba:70224 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:05.852 [2024-11-28 11:55:35.866811] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:05.852 [2024-11-28 11:55:35.866821] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:116 nsid:1 lba:7112 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:05.852 [2024-11-28 11:55:35.866829] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:05.852 [2024-11-28 11:55:35.866849] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: READ sqid:1 cid:117 nsid:1 lba:69448 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:05.853 [2024-11-28 11:55:35.866856] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:05.853 [2024-11-28 11:55:35.866867] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:118 nsid:1 lba:81760 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:05.853 [2024-11-28 11:55:35.866875] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:05.853 [2024-11-28 11:55:35.866885] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:119 nsid:1 lba:94704 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:05.853 [2024-11-28 11:55:35.866893] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:05.853 [2024-11-28 11:55:35.866902] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:120 nsid:1 lba:84320 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:05.853 [2024-11-28 11:55:35.866910] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:05.853 [2024-11-28 11:55:35.866929] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:121 nsid:1 lba:123960 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:05.853 [2024-11-28 11:55:35.866936] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:05.853 [2024-11-28 11:55:35.866956] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:122 nsid:1 lba:114720 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:05.853 [2024-11-28 11:55:35.866976] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:05.853 [2024-11-28 11:55:35.866985] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:123 nsid:1 lba:124200 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:05.853 [2024-11-28 11:55:35.866992] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:05.853 [2024-11-28 11:55:35.867001] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:124 nsid:1 lba:22144 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:05.853 [2024-11-28 11:55:35.867008] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:05.853 [2024-11-28 11:55:35.867017] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:125 nsid:1 lba:56688 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:05.853 [2024-11-28 11:55:35.867025] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:05.853 [2024-11-28 11:55:35.867033] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:126 nsid:1 lba:23552 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:05.853 [2024-11-28 11:55:35.867046] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:05.853 [2024-11-28 11:55:35.867055] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 
nsid:1 lba:109496 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:05.853 [2024-11-28 11:55:35.867063] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:05.853 [2024-11-28 11:55:35.867072] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:83376 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:05.853 [2024-11-28 11:55:35.867080] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:05.853 [2024-11-28 11:55:35.867089] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:100688 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:05.853 [2024-11-28 11:55:35.867097] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:05.853 [2024-11-28 11:55:35.867106] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:28224 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:05.853 [2024-11-28 11:55:35.867113] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:05.853 [2024-11-28 11:55:35.867123] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:75424 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:05.853 [2024-11-28 11:55:35.867131] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:05.853 [2024-11-28 11:55:35.867140] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:52968 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:05.853 [2024-11-28 11:55:35.867148] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:05.853 [2024-11-28 11:55:35.867158] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:114624 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:05.853 [2024-11-28 11:55:35.867165] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:05.853 [2024-11-28 11:55:35.867174] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:69536 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:05.853 [2024-11-28 11:55:35.867181] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:05.853 [2024-11-28 11:55:35.867192] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:69 nsid:1 lba:116904 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:05.853 [2024-11-28 11:55:35.867200] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:05.853 [2024-11-28 11:55:35.867208] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:70 nsid:1 lba:53176 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:05.853 [2024-11-28 11:55:35.867216] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:05.853 [2024-11-28 11:55:35.867226] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:94744 len:8 SGL TRANSPORT DATA BLOCK 
TRANSPORT 0x0 00:25:05.853 [2024-11-28 11:55:35.867234] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:05.853 [2024-11-28 11:55:35.867242] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:71 nsid:1 lba:1632 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:05.853 [2024-11-28 11:55:35.867250] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:05.853 [2024-11-28 11:55:35.867260] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:99800 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:05.853 [2024-11-28 11:55:35.867268] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:05.853 [2024-11-28 11:55:35.867277] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:121632 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:05.853 [2024-11-28 11:55:35.867285] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:05.853 [2024-11-28 11:55:35.867304] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc148f0 is same with the state(6) to be set 00:25:05.853 [2024-11-28 11:55:35.867316] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:25:05.853 [2024-11-28 11:55:35.867324] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:25:05.853 [2024-11-28 11:55:35.867338] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:23368 len:8 PRP1 0x0 PRP2 0x0 00:25:05.853 [2024-11-28 11:55:35.867346] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:05.853 [2024-11-28 11:55:35.867620] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 5] resetting controller 00:25:05.853 [2024-11-28 11:55:35.867728] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xbf3970 (9): Bad file descriptor 00:25:05.853 [2024-11-28 11:55:35.868040] uring.c: 664:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.853 [2024-11-28 11:55:35.868186] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbf3970 with addr=10.0.0.3, port=4420 00:25:05.853 [2024-11-28 11:55:35.868270] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xbf3970 is same with the state(6) to be set 00:25:05.853 [2024-11-28 11:55:35.868344] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xbf3970 (9): Bad file descriptor 00:25:05.853 [2024-11-28 11:55:35.868505] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 5] Ctrlr is in error state 00:25:05.853 [2024-11-28 11:55:35.868555] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 5] controller reinitialization failed 00:25:05.853 [2024-11-28 11:55:35.868601] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 5] in failed state. 00:25:05.853 [2024-11-28 11:55:35.868633] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 5] Resetting controller failed. 
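A note on the failure pattern above: once the submission queue is deleted, every queued READ is completed manually as ABORTED - SQ DELETION (00/08), after which bdev_nvme disconnects the controller and retries the fabric connection. The repeated "connect() failed, errno = 111" (ECONNREFUSED) shows that nothing is accepting connections at 10.0.0.3:4420 any longer, so each reset attempt ends in "Resetting controller failed" and is rescheduled after the reconnect delay. A minimal sketch of attaching such a controller with explicit reconnect behaviour through rpc.py, with placeholder timeout values (the exact numbers used by host/timeout.sh are not visible in this excerpt):

    # Address, port, controller name and NQN are taken from the log above;
    # the two timeout values are placeholders, not the test's real settings.
    scripts/rpc.py bdev_nvme_attach_controller \
        -b NVMe0 -t tcp -a 10.0.0.3 -s 4420 -f ipv4 \
        -n nqn.2016-06.io.spdk:cnode1 \
        --reconnect-delay-sec 2 \
        --ctrlr-loss-timeout-sec 10

With options like these the NVMe0n1 bdev stays registered while the driver retries the connection on a fixed cadence, which matches the roughly 2-second spacing between the "reconnect bdev controller NVMe0" entries recorded in trace.txt further down.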
00:25:05.853 [2024-11-28 11:55:35.868871] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 5] resetting controller 00:25:05.854 11:55:35 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@128 -- # wait 99433 00:25:07.730 9524.50 IOPS, 37.21 MiB/s [2024-11-28T11:55:38.114Z] 6349.67 IOPS, 24.80 MiB/s [2024-11-28T11:55:38.114Z] [2024-11-28 11:55:37.869123] uring.c: 664:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:25:07.988 [2024-11-28 11:55:37.869319] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbf3970 with addr=10.0.0.3, port=4420 00:25:07.988 [2024-11-28 11:55:37.869485] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xbf3970 is same with the state(6) to be set 00:25:07.988 [2024-11-28 11:55:37.869625] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xbf3970 (9): Bad file descriptor 00:25:07.988 [2024-11-28 11:55:37.869697] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 5] Ctrlr is in error state 00:25:07.988 [2024-11-28 11:55:37.869836] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 5] controller reinitialization failed 00:25:07.988 [2024-11-28 11:55:37.869889] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 5] in failed state. 00:25:07.988 [2024-11-28 11:55:37.869987] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 5] Resetting controller failed. 00:25:07.988 [2024-11-28 11:55:37.870049] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 5] resetting controller 00:25:09.864 4762.25 IOPS, 18.60 MiB/s [2024-11-28T11:55:39.990Z] 3809.80 IOPS, 14.88 MiB/s [2024-11-28T11:55:39.990Z] [2024-11-28 11:55:39.870184] uring.c: 664:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:25:09.864 [2024-11-28 11:55:39.870392] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbf3970 with addr=10.0.0.3, port=4420 00:25:09.864 [2024-11-28 11:55:39.870560] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xbf3970 is same with the state(6) to be set 00:25:09.864 [2024-11-28 11:55:39.870634] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xbf3970 (9): Bad file descriptor 00:25:09.864 [2024-11-28 11:55:39.870912] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 5] Ctrlr is in error state 00:25:09.864 [2024-11-28 11:55:39.870965] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 5] controller reinitialization failed 00:25:09.864 [2024-11-28 11:55:39.871011] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 5] in failed state. 00:25:09.864 [2024-11-28 11:55:39.871042] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 5] Resetting controller failed. 00:25:09.864 [2024-11-28 11:55:39.871089] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 5] resetting controller 00:25:11.737 3174.83 IOPS, 12.40 MiB/s [2024-11-28T11:55:42.121Z] 2721.29 IOPS, 10.63 MiB/s [2024-11-28T11:55:42.121Z] [2024-11-28 11:55:41.871306] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 5] in failed state. 
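The interleaved IOPS figures (9524.50, 6349.67, 4762.25, 3809.80, ...) read as cumulative averages rather than per-interval throughput: essentially no new I/O completes once the target is unreachable, so a roughly fixed count of completed I/Os is divided by a growing elapsed time. This is an inference from the numbers themselves, not something the log states explicitly:

    9524.50 IOPS x 2 s ~= 19049 I/Os
    6349.67 IOPS x 3 s ~= 19049 I/Os
    4762.25 IOPS x 4 s ~= 19049 I/Os
    3809.80 IOPS x 5 s ~= 19049 I/Os

The same ~19 k completed I/Os re-averaged each second is also consistent with the final figure of 2335.73 IOPS over the 8.16-second runtime reported in the summary below.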
00:25:11.995 [2024-11-28 11:55:41.871484] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 5] Ctrlr is in error state 00:25:11.995 [2024-11-28 11:55:41.871643] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 5] controller reinitialization failed 00:25:11.995 [2024-11-28 11:55:41.871698] nvme_ctrlr.c:1098:nvme_ctrlr_fail: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 5] already in failed state 00:25:11.995 [2024-11-28 11:55:41.871827] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 5] Resetting controller failed. 00:25:12.932 2381.12 IOPS, 9.30 MiB/s 00:25:12.932 Latency(us) 00:25:12.932 [2024-11-28T11:55:43.058Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:25:12.932 Job: NVMe0n1 (Core Mask 0x4, workload: randread, depth: 128, IO size: 4096) 00:25:12.932 NVMe0n1 : 8.16 2335.73 9.12 15.69 0.00 54377.94 1228.80 7015926.69 00:25:12.932 [2024-11-28T11:55:43.058Z] =================================================================================================================== 00:25:12.932 [2024-11-28T11:55:43.058Z] Total : 2335.73 9.12 15.69 0.00 54377.94 1228.80 7015926.69 00:25:12.932 { 00:25:12.932 "results": [ 00:25:12.932 { 00:25:12.932 "job": "NVMe0n1", 00:25:12.932 "core_mask": "0x4", 00:25:12.932 "workload": "randread", 00:25:12.932 "status": "finished", 00:25:12.932 "queue_depth": 128, 00:25:12.932 "io_size": 4096, 00:25:12.932 "runtime": 8.155465, 00:25:12.932 "iops": 2335.7343817918413, 00:25:12.932 "mibps": 9.12396242887438, 00:25:12.932 "io_failed": 128, 00:25:12.932 "io_timeout": 0, 00:25:12.932 "avg_latency_us": 54377.94373600952, 00:25:12.932 "min_latency_us": 1228.8, 00:25:12.932 "max_latency_us": 7015926.69090909 00:25:12.932 } 00:25:12.932 ], 00:25:12.932 "core_count": 1 00:25:12.932 } 00:25:12.932 11:55:42 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@129 -- # cat /home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt 00:25:12.932 Attaching 5 probes... 
00:25:12.932 1356.992324: reset bdev controller NVMe0 00:25:12.932 1357.358379: reconnect bdev controller NVMe0 00:25:12.932 3358.446237: reconnect delay bdev controller NVMe0 00:25:12.932 3358.459098: reconnect bdev controller NVMe0 00:25:12.932 5359.509753: reconnect delay bdev controller NVMe0 00:25:12.932 5359.522649: reconnect bdev controller NVMe0 00:25:12.932 7360.667619: reconnect delay bdev controller NVMe0 00:25:12.932 7360.680711: reconnect bdev controller NVMe0 00:25:12.932 11:55:42 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@132 -- # grep -c 'reconnect delay bdev controller NVMe0' 00:25:12.932 11:55:42 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@132 -- # (( 3 <= 2 )) 00:25:12.932 11:55:42 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@136 -- # kill 99393 00:25:12.932 11:55:42 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@137 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt 00:25:12.932 11:55:42 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@139 -- # killprocess 99377 00:25:12.932 11:55:42 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@954 -- # '[' -z 99377 ']' 00:25:12.932 11:55:42 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@958 -- # kill -0 99377 00:25:12.932 11:55:42 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@959 -- # uname 00:25:12.932 11:55:42 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:25:12.932 11:55:42 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 99377 00:25:12.932 killing process with pid 99377 00:25:12.932 Received shutdown signal, test time was about 8.227131 seconds 00:25:12.932 00:25:12.932 Latency(us) 00:25:12.932 [2024-11-28T11:55:43.058Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:25:12.932 [2024-11-28T11:55:43.058Z] =================================================================================================================== 00:25:12.932 [2024-11-28T11:55:43.058Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:25:12.932 11:55:42 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@960 -- # process_name=reactor_2 00:25:12.932 11:55:42 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@964 -- # '[' reactor_2 = sudo ']' 00:25:12.932 11:55:42 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@972 -- # echo 'killing process with pid 99377' 00:25:12.932 11:55:42 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@973 -- # kill 99377 00:25:12.932 11:55:42 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@978 -- # wait 99377 00:25:13.192 11:55:43 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@141 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:25:13.451 11:55:43 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@143 -- # trap - SIGINT SIGTERM EXIT 00:25:13.451 11:55:43 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@145 -- # nvmftestfini 00:25:13.451 11:55:43 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@516 -- # nvmfcleanup 00:25:13.451 11:55:43 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@121 -- # sync 00:25:13.451 11:55:43 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:25:13.451 11:55:43 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@124 -- # set +e 00:25:13.451 11:55:43 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@125 -- # for i in {1..20} 00:25:13.451 11:55:43 
nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:25:13.451 rmmod nvme_tcp 00:25:13.451 rmmod nvme_fabrics 00:25:13.451 rmmod nvme_keyring 00:25:13.451 11:55:43 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:25:13.451 11:55:43 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@128 -- # set -e 00:25:13.451 11:55:43 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@129 -- # return 0 00:25:13.451 11:55:43 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@517 -- # '[' -n 98948 ']' 00:25:13.451 11:55:43 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@518 -- # killprocess 98948 00:25:13.451 11:55:43 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@954 -- # '[' -z 98948 ']' 00:25:13.452 11:55:43 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@958 -- # kill -0 98948 00:25:13.452 11:55:43 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@959 -- # uname 00:25:13.452 11:55:43 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:25:13.452 11:55:43 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 98948 00:25:13.452 killing process with pid 98948 00:25:13.452 11:55:43 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:25:13.452 11:55:43 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:25:13.452 11:55:43 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@972 -- # echo 'killing process with pid 98948' 00:25:13.452 11:55:43 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@973 -- # kill 98948 00:25:13.452 11:55:43 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@978 -- # wait 98948 00:25:13.711 11:55:43 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:25:13.711 11:55:43 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:25:13.711 11:55:43 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:25:13.711 11:55:43 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@297 -- # iptr 00:25:13.711 11:55:43 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:25:13.711 11:55:43 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@791 -- # iptables-save 00:25:13.711 11:55:43 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@791 -- # iptables-restore 00:25:13.711 11:55:43 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@298 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:25:13.711 11:55:43 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@299 -- # nvmf_veth_fini 00:25:13.711 11:55:43 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@233 -- # ip link set nvmf_init_br nomaster 00:25:13.711 11:55:43 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@234 -- # ip link set nvmf_init_br2 nomaster 00:25:13.711 11:55:43 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@235 -- # ip link set nvmf_tgt_br nomaster 00:25:13.711 11:55:43 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@236 -- # ip link set nvmf_tgt_br2 nomaster 00:25:13.711 11:55:43 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@237 -- # ip link set nvmf_init_br down 00:25:13.711 11:55:43 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@238 -- # ip link set nvmf_init_br2 down 00:25:13.970 11:55:43 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@239 -- # ip link set nvmf_tgt_br down 00:25:13.970 11:55:43 
nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@240 -- # ip link set nvmf_tgt_br2 down 00:25:13.971 11:55:43 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@241 -- # ip link delete nvmf_br type bridge 00:25:13.971 11:55:43 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@242 -- # ip link delete nvmf_init_if 00:25:13.971 11:55:43 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@243 -- # ip link delete nvmf_init_if2 00:25:13.971 11:55:43 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@244 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:25:13.971 11:55:43 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@245 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:25:13.971 11:55:43 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@246 -- # remove_spdk_ns 00:25:13.971 11:55:43 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:25:13.971 11:55:43 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:25:13.971 11:55:43 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:25:13.971 11:55:43 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@300 -- # return 0 00:25:13.971 ************************************ 00:25:13.971 END TEST nvmf_timeout 00:25:13.971 ************************************ 00:25:13.971 00:25:13.971 real 0m46.918s 00:25:13.971 user 2m16.640s 00:25:13.971 sys 0m5.973s 00:25:13.971 11:55:43 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@1130 -- # xtrace_disable 00:25:13.971 11:55:43 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@10 -- # set +x 00:25:13.971 11:55:44 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@46 -- # [[ virt == phy ]] 00:25:13.971 11:55:44 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@51 -- # trap - SIGINT SIGTERM EXIT 00:25:13.971 ************************************ 00:25:13.971 END TEST nvmf_host 00:25:13.971 ************************************ 00:25:13.971 00:25:13.971 real 5m45.375s 00:25:13.971 user 16m7.431s 00:25:13.971 sys 1m19.016s 00:25:13.971 11:55:44 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1130 -- # xtrace_disable 00:25:13.971 11:55:44 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:25:13.971 11:55:44 nvmf_tcp -- nvmf/nvmf.sh@19 -- # [[ tcp = \t\c\p ]] 00:25:13.971 11:55:44 nvmf_tcp -- nvmf/nvmf.sh@19 -- # [[ 1 -eq 0 ]] 00:25:13.971 ************************************ 00:25:13.971 END TEST nvmf_tcp 00:25:13.971 ************************************ 00:25:13.971 00:25:13.971 real 15m33.133s 00:25:13.971 user 40m43.567s 00:25:13.971 sys 4m7.347s 00:25:13.971 11:55:44 nvmf_tcp -- common/autotest_common.sh@1130 -- # xtrace_disable 00:25:13.971 11:55:44 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:25:14.230 11:55:44 -- spdk/autotest.sh@285 -- # [[ 1 -eq 0 ]] 00:25:14.230 11:55:44 -- spdk/autotest.sh@289 -- # run_test nvmf_dif /home/vagrant/spdk_repo/spdk/test/nvmf/target/dif.sh 00:25:14.230 11:55:44 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:25:14.230 11:55:44 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:25:14.230 11:55:44 -- common/autotest_common.sh@10 -- # set +x 00:25:14.230 ************************************ 00:25:14.230 START TEST nvmf_dif 00:25:14.230 ************************************ 00:25:14.230 11:55:44 nvmf_dif -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/dif.sh 00:25:14.230 * Looking for test storage... 
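Stripped of the xtrace noise, the nvmftestfini teardown traced above amounts to: unload the host-side modules (modprobe -r nvme-tcp and nvme-fabrics, which also drops nvme_keyring), kill the remaining target process (pid 98948), re-apply the saved iptables rules with SPDK_NVMF entries filtered out, and dismantle the veth/bridge topology before the test namespace is removed. A condensed sketch of the network portion, using the interface and namespace names shown in the trace (the body of the remove_spdk_ns helper is not shown in this excerpt):

    # Detach the veth peers from the bridge and bring them down.
    for dev in nvmf_init_br nvmf_init_br2 nvmf_tgt_br nvmf_tgt_br2; do
        ip link set "$dev" nomaster
        ip link set "$dev" down
    done
    # Remove the bridge and the host-side interfaces.
    ip link delete nvmf_br type bridge
    ip link delete nvmf_init_if
    ip link delete nvmf_init_if2
    # Remove the target-side interfaces inside the test namespace;
    # remove_spdk_ns then deletes the nvmf_tgt_ns_spdk namespace itself.
    ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if
    ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2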
00:25:14.230 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:25:14.230 11:55:44 nvmf_dif -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:25:14.230 11:55:44 nvmf_dif -- common/autotest_common.sh@1693 -- # lcov --version 00:25:14.230 11:55:44 nvmf_dif -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:25:14.230 11:55:44 nvmf_dif -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:25:14.230 11:55:44 nvmf_dif -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:25:14.230 11:55:44 nvmf_dif -- scripts/common.sh@333 -- # local ver1 ver1_l 00:25:14.230 11:55:44 nvmf_dif -- scripts/common.sh@334 -- # local ver2 ver2_l 00:25:14.230 11:55:44 nvmf_dif -- scripts/common.sh@336 -- # IFS=.-: 00:25:14.230 11:55:44 nvmf_dif -- scripts/common.sh@336 -- # read -ra ver1 00:25:14.230 11:55:44 nvmf_dif -- scripts/common.sh@337 -- # IFS=.-: 00:25:14.230 11:55:44 nvmf_dif -- scripts/common.sh@337 -- # read -ra ver2 00:25:14.230 11:55:44 nvmf_dif -- scripts/common.sh@338 -- # local 'op=<' 00:25:14.230 11:55:44 nvmf_dif -- scripts/common.sh@340 -- # ver1_l=2 00:25:14.230 11:55:44 nvmf_dif -- scripts/common.sh@341 -- # ver2_l=1 00:25:14.230 11:55:44 nvmf_dif -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:25:14.230 11:55:44 nvmf_dif -- scripts/common.sh@344 -- # case "$op" in 00:25:14.230 11:55:44 nvmf_dif -- scripts/common.sh@345 -- # : 1 00:25:14.230 11:55:44 nvmf_dif -- scripts/common.sh@364 -- # (( v = 0 )) 00:25:14.230 11:55:44 nvmf_dif -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:25:14.230 11:55:44 nvmf_dif -- scripts/common.sh@365 -- # decimal 1 00:25:14.230 11:55:44 nvmf_dif -- scripts/common.sh@353 -- # local d=1 00:25:14.230 11:55:44 nvmf_dif -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:25:14.230 11:55:44 nvmf_dif -- scripts/common.sh@355 -- # echo 1 00:25:14.230 11:55:44 nvmf_dif -- scripts/common.sh@365 -- # ver1[v]=1 00:25:14.230 11:55:44 nvmf_dif -- scripts/common.sh@366 -- # decimal 2 00:25:14.230 11:55:44 nvmf_dif -- scripts/common.sh@353 -- # local d=2 00:25:14.230 11:55:44 nvmf_dif -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:25:14.230 11:55:44 nvmf_dif -- scripts/common.sh@355 -- # echo 2 00:25:14.230 11:55:44 nvmf_dif -- scripts/common.sh@366 -- # ver2[v]=2 00:25:14.230 11:55:44 nvmf_dif -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:25:14.230 11:55:44 nvmf_dif -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:25:14.230 11:55:44 nvmf_dif -- scripts/common.sh@368 -- # return 0 00:25:14.230 11:55:44 nvmf_dif -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:25:14.230 11:55:44 nvmf_dif -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:25:14.230 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:25:14.230 --rc genhtml_branch_coverage=1 00:25:14.230 --rc genhtml_function_coverage=1 00:25:14.230 --rc genhtml_legend=1 00:25:14.230 --rc geninfo_all_blocks=1 00:25:14.230 --rc geninfo_unexecuted_blocks=1 00:25:14.230 00:25:14.230 ' 00:25:14.230 11:55:44 nvmf_dif -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:25:14.230 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:25:14.230 --rc genhtml_branch_coverage=1 00:25:14.230 --rc genhtml_function_coverage=1 00:25:14.230 --rc genhtml_legend=1 00:25:14.230 --rc geninfo_all_blocks=1 00:25:14.230 --rc geninfo_unexecuted_blocks=1 00:25:14.230 00:25:14.230 ' 00:25:14.230 11:55:44 nvmf_dif -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 
00:25:14.230 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:25:14.230 --rc genhtml_branch_coverage=1 00:25:14.230 --rc genhtml_function_coverage=1 00:25:14.230 --rc genhtml_legend=1 00:25:14.230 --rc geninfo_all_blocks=1 00:25:14.230 --rc geninfo_unexecuted_blocks=1 00:25:14.230 00:25:14.230 ' 00:25:14.230 11:55:44 nvmf_dif -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:25:14.230 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:25:14.230 --rc genhtml_branch_coverage=1 00:25:14.230 --rc genhtml_function_coverage=1 00:25:14.230 --rc genhtml_legend=1 00:25:14.230 --rc geninfo_all_blocks=1 00:25:14.230 --rc geninfo_unexecuted_blocks=1 00:25:14.230 00:25:14.230 ' 00:25:14.230 11:55:44 nvmf_dif -- target/dif.sh@13 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:25:14.230 11:55:44 nvmf_dif -- nvmf/common.sh@7 -- # uname -s 00:25:14.230 11:55:44 nvmf_dif -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:25:14.230 11:55:44 nvmf_dif -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:25:14.230 11:55:44 nvmf_dif -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:25:14.230 11:55:44 nvmf_dif -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:25:14.230 11:55:44 nvmf_dif -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:25:14.230 11:55:44 nvmf_dif -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:25:14.230 11:55:44 nvmf_dif -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:25:14.230 11:55:44 nvmf_dif -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:25:14.231 11:55:44 nvmf_dif -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:25:14.490 11:55:44 nvmf_dif -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:25:14.490 11:55:44 nvmf_dif -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:f820f793-c892-4aa4-a8a4-5ed3fda41d6c 00:25:14.490 11:55:44 nvmf_dif -- nvmf/common.sh@18 -- # NVME_HOSTID=f820f793-c892-4aa4-a8a4-5ed3fda41d6c 00:25:14.490 11:55:44 nvmf_dif -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:25:14.490 11:55:44 nvmf_dif -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:25:14.490 11:55:44 nvmf_dif -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:25:14.490 11:55:44 nvmf_dif -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:25:14.490 11:55:44 nvmf_dif -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:25:14.490 11:55:44 nvmf_dif -- scripts/common.sh@15 -- # shopt -s extglob 00:25:14.490 11:55:44 nvmf_dif -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:25:14.490 11:55:44 nvmf_dif -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:25:14.490 11:55:44 nvmf_dif -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:25:14.490 11:55:44 nvmf_dif -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:14.490 11:55:44 nvmf_dif -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:14.490 11:55:44 nvmf_dif -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:14.490 11:55:44 nvmf_dif -- paths/export.sh@5 -- # export PATH 00:25:14.490 11:55:44 nvmf_dif -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:14.490 11:55:44 nvmf_dif -- nvmf/common.sh@51 -- # : 0 00:25:14.490 11:55:44 nvmf_dif -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:25:14.490 11:55:44 nvmf_dif -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:25:14.490 11:55:44 nvmf_dif -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:25:14.490 11:55:44 nvmf_dif -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:25:14.490 11:55:44 nvmf_dif -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:25:14.490 11:55:44 nvmf_dif -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:25:14.490 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:25:14.490 11:55:44 nvmf_dif -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:25:14.490 11:55:44 nvmf_dif -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:25:14.490 11:55:44 nvmf_dif -- nvmf/common.sh@55 -- # have_pci_nics=0 00:25:14.490 11:55:44 nvmf_dif -- target/dif.sh@15 -- # NULL_META=16 00:25:14.490 11:55:44 nvmf_dif -- target/dif.sh@15 -- # NULL_BLOCK_SIZE=512 00:25:14.490 11:55:44 nvmf_dif -- target/dif.sh@15 -- # NULL_SIZE=64 00:25:14.491 11:55:44 nvmf_dif -- target/dif.sh@15 -- # NULL_DIF=1 00:25:14.491 11:55:44 nvmf_dif -- target/dif.sh@135 -- # nvmftestinit 00:25:14.491 11:55:44 nvmf_dif -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:25:14.491 11:55:44 nvmf_dif -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:25:14.491 11:55:44 nvmf_dif -- nvmf/common.sh@476 -- # prepare_net_devs 00:25:14.491 11:55:44 nvmf_dif -- nvmf/common.sh@438 -- # local -g is_hw=no 00:25:14.491 11:55:44 nvmf_dif -- nvmf/common.sh@440 -- # remove_spdk_ns 00:25:14.491 11:55:44 nvmf_dif -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:25:14.491 11:55:44 nvmf_dif -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:25:14.491 11:55:44 nvmf_dif -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:25:14.491 11:55:44 nvmf_dif -- nvmf/common.sh@442 -- # [[ virt != virt ]] 00:25:14.491 11:55:44 nvmf_dif -- nvmf/common.sh@444 -- # [[ no == yes ]] 00:25:14.491 11:55:44 nvmf_dif -- nvmf/common.sh@451 -- # [[ virt == phy ]] 00:25:14.491 11:55:44 
nvmf_dif -- nvmf/common.sh@454 -- # [[ virt == phy-fallback ]] 00:25:14.491 11:55:44 nvmf_dif -- nvmf/common.sh@459 -- # [[ tcp == tcp ]] 00:25:14.491 11:55:44 nvmf_dif -- nvmf/common.sh@460 -- # nvmf_veth_init 00:25:14.491 11:55:44 nvmf_dif -- nvmf/common.sh@145 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:25:14.491 11:55:44 nvmf_dif -- nvmf/common.sh@146 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.2 00:25:14.491 11:55:44 nvmf_dif -- nvmf/common.sh@147 -- # NVMF_FIRST_TARGET_IP=10.0.0.3 00:25:14.491 11:55:44 nvmf_dif -- nvmf/common.sh@148 -- # NVMF_SECOND_TARGET_IP=10.0.0.4 00:25:14.491 11:55:44 nvmf_dif -- nvmf/common.sh@149 -- # NVMF_INITIATOR_IP=10.0.0.1 00:25:14.491 11:55:44 nvmf_dif -- nvmf/common.sh@150 -- # NVMF_BRIDGE=nvmf_br 00:25:14.491 11:55:44 nvmf_dif -- nvmf/common.sh@151 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:25:14.491 11:55:44 nvmf_dif -- nvmf/common.sh@152 -- # NVMF_INITIATOR_INTERFACE2=nvmf_init_if2 00:25:14.491 11:55:44 nvmf_dif -- nvmf/common.sh@153 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:25:14.491 11:55:44 nvmf_dif -- nvmf/common.sh@154 -- # NVMF_INITIATOR_BRIDGE2=nvmf_init_br2 00:25:14.491 11:55:44 nvmf_dif -- nvmf/common.sh@155 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:25:14.491 11:55:44 nvmf_dif -- nvmf/common.sh@156 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:25:14.491 11:55:44 nvmf_dif -- nvmf/common.sh@157 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:25:14.491 11:55:44 nvmf_dif -- nvmf/common.sh@158 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:25:14.491 11:55:44 nvmf_dif -- nvmf/common.sh@159 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:25:14.491 11:55:44 nvmf_dif -- nvmf/common.sh@160 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:25:14.491 11:55:44 nvmf_dif -- nvmf/common.sh@162 -- # ip link set nvmf_init_br nomaster 00:25:14.491 Cannot find device "nvmf_init_br" 00:25:14.491 11:55:44 nvmf_dif -- nvmf/common.sh@162 -- # true 00:25:14.491 11:55:44 nvmf_dif -- nvmf/common.sh@163 -- # ip link set nvmf_init_br2 nomaster 00:25:14.491 Cannot find device "nvmf_init_br2" 00:25:14.491 11:55:44 nvmf_dif -- nvmf/common.sh@163 -- # true 00:25:14.491 11:55:44 nvmf_dif -- nvmf/common.sh@164 -- # ip link set nvmf_tgt_br nomaster 00:25:14.491 Cannot find device "nvmf_tgt_br" 00:25:14.491 11:55:44 nvmf_dif -- nvmf/common.sh@164 -- # true 00:25:14.491 11:55:44 nvmf_dif -- nvmf/common.sh@165 -- # ip link set nvmf_tgt_br2 nomaster 00:25:14.491 Cannot find device "nvmf_tgt_br2" 00:25:14.491 11:55:44 nvmf_dif -- nvmf/common.sh@165 -- # true 00:25:14.491 11:55:44 nvmf_dif -- nvmf/common.sh@166 -- # ip link set nvmf_init_br down 00:25:14.491 Cannot find device "nvmf_init_br" 00:25:14.491 11:55:44 nvmf_dif -- nvmf/common.sh@166 -- # true 00:25:14.491 11:55:44 nvmf_dif -- nvmf/common.sh@167 -- # ip link set nvmf_init_br2 down 00:25:14.491 Cannot find device "nvmf_init_br2" 00:25:14.491 11:55:44 nvmf_dif -- nvmf/common.sh@167 -- # true 00:25:14.491 11:55:44 nvmf_dif -- nvmf/common.sh@168 -- # ip link set nvmf_tgt_br down 00:25:14.491 Cannot find device "nvmf_tgt_br" 00:25:14.491 11:55:44 nvmf_dif -- nvmf/common.sh@168 -- # true 00:25:14.491 11:55:44 nvmf_dif -- nvmf/common.sh@169 -- # ip link set nvmf_tgt_br2 down 00:25:14.491 Cannot find device "nvmf_tgt_br2" 00:25:14.491 11:55:44 nvmf_dif -- nvmf/common.sh@169 -- # true 00:25:14.491 11:55:44 nvmf_dif -- nvmf/common.sh@170 -- # ip link delete nvmf_br type bridge 00:25:14.491 Cannot find device "nvmf_br" 00:25:14.491 11:55:44 nvmf_dif -- nvmf/common.sh@170 -- # true 00:25:14.491 11:55:44 nvmf_dif -- nvmf/common.sh@171 -- # 
ip link delete nvmf_init_if 00:25:14.491 Cannot find device "nvmf_init_if" 00:25:14.491 11:55:44 nvmf_dif -- nvmf/common.sh@171 -- # true 00:25:14.491 11:55:44 nvmf_dif -- nvmf/common.sh@172 -- # ip link delete nvmf_init_if2 00:25:14.491 Cannot find device "nvmf_init_if2" 00:25:14.491 11:55:44 nvmf_dif -- nvmf/common.sh@172 -- # true 00:25:14.491 11:55:44 nvmf_dif -- nvmf/common.sh@173 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:25:14.491 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:25:14.491 11:55:44 nvmf_dif -- nvmf/common.sh@173 -- # true 00:25:14.491 11:55:44 nvmf_dif -- nvmf/common.sh@174 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:25:14.491 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:25:14.491 11:55:44 nvmf_dif -- nvmf/common.sh@174 -- # true 00:25:14.491 11:55:44 nvmf_dif -- nvmf/common.sh@177 -- # ip netns add nvmf_tgt_ns_spdk 00:25:14.491 11:55:44 nvmf_dif -- nvmf/common.sh@180 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:25:14.491 11:55:44 nvmf_dif -- nvmf/common.sh@181 -- # ip link add nvmf_init_if2 type veth peer name nvmf_init_br2 00:25:14.491 11:55:44 nvmf_dif -- nvmf/common.sh@182 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:25:14.491 11:55:44 nvmf_dif -- nvmf/common.sh@183 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:25:14.491 11:55:44 nvmf_dif -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:25:14.751 11:55:44 nvmf_dif -- nvmf/common.sh@187 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:25:14.751 11:55:44 nvmf_dif -- nvmf/common.sh@190 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:25:14.751 11:55:44 nvmf_dif -- nvmf/common.sh@191 -- # ip addr add 10.0.0.2/24 dev nvmf_init_if2 00:25:14.751 11:55:44 nvmf_dif -- nvmf/common.sh@192 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if 00:25:14.751 11:55:44 nvmf_dif -- nvmf/common.sh@193 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2 00:25:14.751 11:55:44 nvmf_dif -- nvmf/common.sh@196 -- # ip link set nvmf_init_if up 00:25:14.751 11:55:44 nvmf_dif -- nvmf/common.sh@197 -- # ip link set nvmf_init_if2 up 00:25:14.751 11:55:44 nvmf_dif -- nvmf/common.sh@198 -- # ip link set nvmf_init_br up 00:25:14.751 11:55:44 nvmf_dif -- nvmf/common.sh@199 -- # ip link set nvmf_init_br2 up 00:25:14.751 11:55:44 nvmf_dif -- nvmf/common.sh@200 -- # ip link set nvmf_tgt_br up 00:25:14.751 11:55:44 nvmf_dif -- nvmf/common.sh@201 -- # ip link set nvmf_tgt_br2 up 00:25:14.751 11:55:44 nvmf_dif -- nvmf/common.sh@202 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:25:14.751 11:55:44 nvmf_dif -- nvmf/common.sh@203 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:25:14.751 11:55:44 nvmf_dif -- nvmf/common.sh@204 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:25:14.751 11:55:44 nvmf_dif -- nvmf/common.sh@207 -- # ip link add nvmf_br type bridge 00:25:14.751 11:55:44 nvmf_dif -- nvmf/common.sh@208 -- # ip link set nvmf_br up 00:25:14.751 11:55:44 nvmf_dif -- nvmf/common.sh@211 -- # ip link set nvmf_init_br master nvmf_br 00:25:14.751 11:55:44 nvmf_dif -- nvmf/common.sh@212 -- # ip link set nvmf_init_br2 master nvmf_br 00:25:14.751 11:55:44 nvmf_dif -- nvmf/common.sh@213 -- # ip link set nvmf_tgt_br master nvmf_br 00:25:14.751 11:55:44 nvmf_dif -- nvmf/common.sh@214 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:25:14.751 11:55:44 nvmf_dif -- 
nvmf/common.sh@217 -- # ipts -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:25:14.751 11:55:44 nvmf_dif -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT' 00:25:14.751 11:55:44 nvmf_dif -- nvmf/common.sh@218 -- # ipts -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT 00:25:14.751 11:55:44 nvmf_dif -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT' 00:25:14.751 11:55:44 nvmf_dif -- nvmf/common.sh@219 -- # ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:25:14.751 11:55:44 nvmf_dif -- nvmf/common.sh@790 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT' 00:25:14.751 11:55:44 nvmf_dif -- nvmf/common.sh@222 -- # ping -c 1 10.0.0.3 00:25:14.751 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:25:14.751 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.082 ms 00:25:14.751 00:25:14.751 --- 10.0.0.3 ping statistics --- 00:25:14.751 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:25:14.751 rtt min/avg/max/mdev = 0.082/0.082/0.082/0.000 ms 00:25:14.751 11:55:44 nvmf_dif -- nvmf/common.sh@223 -- # ping -c 1 10.0.0.4 00:25:14.751 PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data. 00:25:14.751 64 bytes from 10.0.0.4: icmp_seq=1 ttl=64 time=0.054 ms 00:25:14.751 00:25:14.751 --- 10.0.0.4 ping statistics --- 00:25:14.751 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:25:14.751 rtt min/avg/max/mdev = 0.054/0.054/0.054/0.000 ms 00:25:14.751 11:55:44 nvmf_dif -- nvmf/common.sh@224 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:25:14.751 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:25:14.751 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.024 ms 00:25:14.751 00:25:14.751 --- 10.0.0.1 ping statistics --- 00:25:14.751 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:25:14.751 rtt min/avg/max/mdev = 0.024/0.024/0.024/0.000 ms 00:25:14.751 11:55:44 nvmf_dif -- nvmf/common.sh@225 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.2 00:25:14.751 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:25:14.751 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.071 ms 00:25:14.751 00:25:14.751 --- 10.0.0.2 ping statistics --- 00:25:14.751 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:25:14.751 rtt min/avg/max/mdev = 0.071/0.071/0.071/0.000 ms 00:25:14.751 11:55:44 nvmf_dif -- nvmf/common.sh@227 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:25:14.751 11:55:44 nvmf_dif -- nvmf/common.sh@461 -- # return 0 00:25:14.751 11:55:44 nvmf_dif -- nvmf/common.sh@478 -- # '[' iso == iso ']' 00:25:14.751 11:55:44 nvmf_dif -- nvmf/common.sh@479 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:25:15.319 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:25:15.319 0000:00:11.0 (1b36 0010): Already using the uio_pci_generic driver 00:25:15.319 0000:00:10.0 (1b36 0010): Already using the uio_pci_generic driver 00:25:15.319 11:55:45 nvmf_dif -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:25:15.319 11:55:45 nvmf_dif -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:25:15.319 11:55:45 nvmf_dif -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:25:15.319 11:55:45 nvmf_dif -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:25:15.319 11:55:45 nvmf_dif -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:25:15.319 11:55:45 nvmf_dif -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:25:15.319 11:55:45 nvmf_dif -- target/dif.sh@136 -- # NVMF_TRANSPORT_OPTS+=' --dif-insert-or-strip' 00:25:15.319 11:55:45 nvmf_dif -- target/dif.sh@137 -- # nvmfappstart 00:25:15.319 11:55:45 nvmf_dif -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:25:15.319 11:55:45 nvmf_dif -- common/autotest_common.sh@726 -- # xtrace_disable 00:25:15.319 11:55:45 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:25:15.319 11:55:45 nvmf_dif -- nvmf/common.sh@509 -- # nvmfpid=99925 00:25:15.319 11:55:45 nvmf_dif -- nvmf/common.sh@508 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF 00:25:15.319 11:55:45 nvmf_dif -- nvmf/common.sh@510 -- # waitforlisten 99925 00:25:15.319 11:55:45 nvmf_dif -- common/autotest_common.sh@835 -- # '[' -z 99925 ']' 00:25:15.319 11:55:45 nvmf_dif -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:25:15.319 11:55:45 nvmf_dif -- common/autotest_common.sh@840 -- # local max_retries=100 00:25:15.319 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:25:15.319 11:55:45 nvmf_dif -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:25:15.319 11:55:45 nvmf_dif -- common/autotest_common.sh@844 -- # xtrace_disable 00:25:15.319 11:55:45 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:25:15.319 [2024-11-28 11:55:45.331628] Starting SPDK v25.01-pre git sha1 35cd3e84d / DPDK 24.11.0-rc4 initialization... 00:25:15.319 [2024-11-28 11:55:45.331726] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:25:15.578 [2024-11-28 11:55:45.458922] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.11.0-rc4 is used. There is no support for it in SPDK. Enabled only for validation. 
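The four pings above confirm the topology that nvmf_veth_init builds before the target starts: the target ends (nvmf_tgt_if/_if2, 10.0.0.3/4) live inside the nvmf_tgt_ns_spdk namespace, the initiator ends (nvmf_init_if/_if2, 10.0.0.1/2) stay in the root namespace, and the peer interfaces are all enslaved to the nvmf_br bridge. A trimmed sketch of one initiator/target path follows, using the suite's names; the second pair and the nvmf_init_if2 firewall rule are analogous.

# Sketch: one bridged initiator<->target path with the target end in its own namespace.
ip netns add nvmf_tgt_ns_spdk
ip link add nvmf_init_if type veth peer name nvmf_init_br         # initiator pair (root ns)
ip link add nvmf_tgt_if  type veth peer name nvmf_tgt_br          # target pair
ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk                    # move the target end away
ip addr add 10.0.0.1/24 dev nvmf_init_if
ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if
ip link set nvmf_init_if up
ip link set nvmf_init_br up
ip link set nvmf_tgt_br up
ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
ip netns exec nvmf_tgt_ns_spdk ip link set lo up
ip link add nvmf_br type bridge
ip link set nvmf_br up
ip link set nvmf_init_br master nvmf_br                           # bridge the two peer ends together
ip link set nvmf_tgt_br master nvmf_br
# Allow NVMe/TCP (port 4420) in, tagged so the cleanup sketched earlier can strip it again.
iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT \
    -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT'
ping -c 1 10.0.0.3                                                # root ns -> namespaced target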
00:25:15.578 [2024-11-28 11:55:45.491203] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:25:15.578 [2024-11-28 11:55:45.536457] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:25:15.578 [2024-11-28 11:55:45.536539] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:25:15.578 [2024-11-28 11:55:45.536554] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:25:15.578 [2024-11-28 11:55:45.536566] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:25:15.578 [2024-11-28 11:55:45.536575] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:25:15.578 [2024-11-28 11:55:45.537070] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:25:15.578 [2024-11-28 11:55:45.618690] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:25:15.578 11:55:45 nvmf_dif -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:25:15.578 11:55:45 nvmf_dif -- common/autotest_common.sh@868 -- # return 0 00:25:15.578 11:55:45 nvmf_dif -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:25:15.578 11:55:45 nvmf_dif -- common/autotest_common.sh@732 -- # xtrace_disable 00:25:15.578 11:55:45 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:25:15.837 11:55:45 nvmf_dif -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:25:15.837 11:55:45 nvmf_dif -- target/dif.sh@139 -- # create_transport 00:25:15.837 11:55:45 nvmf_dif -- target/dif.sh@50 -- # rpc_cmd nvmf_create_transport -t tcp -o --dif-insert-or-strip 00:25:15.837 11:55:45 nvmf_dif -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:15.837 11:55:45 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:25:15.837 [2024-11-28 11:55:45.746895] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:25:15.837 11:55:45 nvmf_dif -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:15.837 11:55:45 nvmf_dif -- target/dif.sh@141 -- # run_test fio_dif_1_default fio_dif_1 00:25:15.837 11:55:45 nvmf_dif -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:25:15.837 11:55:45 nvmf_dif -- common/autotest_common.sh@1111 -- # xtrace_disable 00:25:15.837 11:55:45 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:25:15.837 ************************************ 00:25:15.837 START TEST fio_dif_1_default 00:25:15.837 ************************************ 00:25:15.837 11:55:45 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1129 -- # fio_dif_1 00:25:15.837 11:55:45 nvmf_dif.fio_dif_1_default -- target/dif.sh@86 -- # create_subsystems 0 00:25:15.837 11:55:45 nvmf_dif.fio_dif_1_default -- target/dif.sh@28 -- # local sub 00:25:15.837 11:55:45 nvmf_dif.fio_dif_1_default -- target/dif.sh@30 -- # for sub in "$@" 00:25:15.837 11:55:45 nvmf_dif.fio_dif_1_default -- target/dif.sh@31 -- # create_subsystem 0 00:25:15.837 11:55:45 nvmf_dif.fio_dif_1_default -- target/dif.sh@18 -- # local sub_id=0 00:25:15.837 11:55:45 nvmf_dif.fio_dif_1_default -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 1 00:25:15.837 11:55:45 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:15.837 11:55:45 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:25:15.837 bdev_null0 00:25:15.837 11:55:45 nvmf_dif.fio_dif_1_default -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:15.837 11:55:45 nvmf_dif.fio_dif_1_default -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:25:15.837 11:55:45 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:15.837 11:55:45 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:25:15.837 11:55:45 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:15.837 11:55:45 nvmf_dif.fio_dif_1_default -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:25:15.837 11:55:45 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:15.837 11:55:45 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:25:15.837 11:55:45 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:15.837 11:55:45 nvmf_dif.fio_dif_1_default -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.3 -s 4420 00:25:15.837 11:55:45 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:15.837 11:55:45 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:25:15.837 [2024-11-28 11:55:45.795124] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:25:15.837 11:55:45 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:15.837 11:55:45 nvmf_dif.fio_dif_1_default -- target/dif.sh@87 -- # fio /dev/fd/62 00:25:15.837 11:55:45 nvmf_dif.fio_dif_1_default -- target/dif.sh@87 -- # create_json_sub_conf 0 00:25:15.837 11:55:45 nvmf_dif.fio_dif_1_default -- target/dif.sh@51 -- # gen_nvmf_target_json 0 00:25:15.837 11:55:45 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@560 -- # config=() 00:25:15.837 11:55:45 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@560 -- # local subsystem config 00:25:15.837 11:55:45 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:25:15.837 11:55:45 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:25:15.837 { 00:25:15.837 "params": { 00:25:15.837 "name": "Nvme$subsystem", 00:25:15.837 "trtype": "$TEST_TRANSPORT", 00:25:15.837 "traddr": "$NVMF_FIRST_TARGET_IP", 00:25:15.837 "adrfam": "ipv4", 00:25:15.837 "trsvcid": "$NVMF_PORT", 00:25:15.837 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:25:15.837 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:25:15.837 "hdgst": ${hdgst:-false}, 00:25:15.837 "ddgst": ${ddgst:-false} 00:25:15.837 }, 00:25:15.837 "method": "bdev_nvme_attach_controller" 00:25:15.837 } 00:25:15.837 EOF 00:25:15.837 )") 00:25:15.837 11:55:45 nvmf_dif.fio_dif_1_default -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:25:15.837 11:55:45 nvmf_dif.fio_dif_1_default -- target/dif.sh@82 -- # gen_fio_conf 00:25:15.837 11:55:45 nvmf_dif.fio_dif_1_default -- target/dif.sh@54 -- # local file 00:25:15.837 11:55:45 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1360 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:25:15.837 11:55:45 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio 00:25:15.837 11:55:45 nvmf_dif.fio_dif_1_default -- target/dif.sh@56 -- # cat 00:25:15.837 11:55:45 nvmf_dif.fio_dif_1_default -- 
common/autotest_common.sh@1343 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:25:15.837 11:55:45 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@582 -- # cat 00:25:15.837 11:55:45 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1343 -- # local sanitizers 00:25:15.837 11:55:45 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1344 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:25:15.837 11:55:45 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1345 -- # shift 00:25:15.837 11:55:45 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1347 -- # local asan_lib= 00:25:15.837 11:55:45 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:25:15.837 11:55:45 nvmf_dif.fio_dif_1_default -- target/dif.sh@72 -- # (( file = 1 )) 00:25:15.837 11:55:45 nvmf_dif.fio_dif_1_default -- target/dif.sh@72 -- # (( file <= files )) 00:25:15.837 11:55:45 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1349 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:25:15.837 11:55:45 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1349 -- # grep libasan 00:25:15.837 11:55:45 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:25:15.837 11:55:45 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@584 -- # jq . 00:25:15.837 11:55:45 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@585 -- # IFS=, 00:25:15.837 11:55:45 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:25:15.837 "params": { 00:25:15.837 "name": "Nvme0", 00:25:15.837 "trtype": "tcp", 00:25:15.837 "traddr": "10.0.0.3", 00:25:15.837 "adrfam": "ipv4", 00:25:15.837 "trsvcid": "4420", 00:25:15.837 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:25:15.837 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:25:15.837 "hdgst": false, 00:25:15.837 "ddgst": false 00:25:15.837 }, 00:25:15.837 "method": "bdev_nvme_attach_controller" 00:25:15.837 }' 00:25:15.837 11:55:45 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1349 -- # asan_lib= 00:25:15.837 11:55:45 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1350 -- # [[ -n '' ]] 00:25:15.837 11:55:45 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:25:15.837 11:55:45 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1349 -- # grep libclang_rt.asan 00:25:15.837 11:55:45 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1349 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:25:15.837 11:55:45 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:25:15.837 11:55:45 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1349 -- # asan_lib= 00:25:15.837 11:55:45 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1350 -- # [[ -n '' ]] 00:25:15.837 11:55:45 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1356 -- # LD_PRELOAD=' /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev' 00:25:15.837 11:55:45 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:25:16.096 filename0: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=4 00:25:16.096 fio-3.35 00:25:16.096 Starting 1 thread 00:25:28.388 00:25:28.388 filename0: (groupid=0, jobs=1): err= 0: pid=99984: Thu Nov 28 11:55:56 2024 00:25:28.388 read: IOPS=10.5k, BW=41.0MiB/s (43.0MB/s)(410MiB/10001msec) 00:25:28.388 slat (nsec): min=5917, max=55366, 
avg=7294.12, stdev=2645.13 00:25:28.388 clat (usec): min=327, max=11980, avg=359.48, stdev=79.63 00:25:28.388 lat (usec): min=333, max=11989, avg=366.77, stdev=79.95 00:25:28.388 clat percentiles (usec): 00:25:28.388 | 1.00th=[ 334], 5.00th=[ 334], 10.00th=[ 338], 20.00th=[ 343], 00:25:28.388 | 30.00th=[ 351], 40.00th=[ 351], 50.00th=[ 355], 60.00th=[ 359], 00:25:28.388 | 70.00th=[ 363], 80.00th=[ 371], 90.00th=[ 383], 95.00th=[ 396], 00:25:28.388 | 99.00th=[ 420], 99.50th=[ 433], 99.90th=[ 490], 99.95th=[ 519], 00:25:28.388 | 99.99th=[ 3326] 00:25:28.388 bw ( KiB/s): min=39904, max=42688, per=100.00%, avg=42022.74, stdev=783.85, samples=19 00:25:28.388 iops : min= 9976, max=10672, avg=10505.68, stdev=195.96, samples=19 00:25:28.388 lat (usec) : 500=99.92%, 750=0.06%, 1000=0.01% 00:25:28.388 lat (msec) : 2=0.01%, 4=0.01%, 20=0.01% 00:25:28.388 cpu : usr=82.20%, sys=15.88%, ctx=33, majf=0, minf=9 00:25:28.388 IO depths : 1=25.0%, 2=50.0%, 4=25.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:25:28.388 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:25:28.388 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:25:28.388 issued rwts: total=104976,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:25:28.388 latency : target=0, window=0, percentile=100.00%, depth=4 00:25:28.388 00:25:28.388 Run status group 0 (all jobs): 00:25:28.388 READ: bw=41.0MiB/s (43.0MB/s), 41.0MiB/s-41.0MiB/s (43.0MB/s-43.0MB/s), io=410MiB (430MB), run=10001-10001msec 00:25:28.388 11:55:56 nvmf_dif.fio_dif_1_default -- target/dif.sh@88 -- # destroy_subsystems 0 00:25:28.388 11:55:56 nvmf_dif.fio_dif_1_default -- target/dif.sh@43 -- # local sub 00:25:28.388 11:55:56 nvmf_dif.fio_dif_1_default -- target/dif.sh@45 -- # for sub in "$@" 00:25:28.388 11:55:56 nvmf_dif.fio_dif_1_default -- target/dif.sh@46 -- # destroy_subsystem 0 00:25:28.388 11:55:56 nvmf_dif.fio_dif_1_default -- target/dif.sh@36 -- # local sub_id=0 00:25:28.388 11:55:56 nvmf_dif.fio_dif_1_default -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:25:28.388 11:55:56 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:28.388 11:55:56 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:25:28.388 11:55:56 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:28.388 11:55:56 nvmf_dif.fio_dif_1_default -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:25:28.388 11:55:56 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:28.388 11:55:56 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:25:28.388 ************************************ 00:25:28.388 END TEST fio_dif_1_default 00:25:28.388 ************************************ 00:25:28.388 11:55:56 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:28.388 00:25:28.388 real 0m10.995s 00:25:28.388 user 0m8.844s 00:25:28.388 sys 0m1.858s 00:25:28.388 11:55:56 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1130 -- # xtrace_disable 00:25:28.388 11:55:56 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:25:28.388 11:55:56 nvmf_dif -- target/dif.sh@142 -- # run_test fio_dif_1_multi_subsystems fio_dif_1_multi_subsystems 00:25:28.388 11:55:56 nvmf_dif -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:25:28.388 11:55:56 nvmf_dif -- common/autotest_common.sh@1111 -- # xtrace_disable 00:25:28.388 11:55:56 nvmf_dif -- 
common/autotest_common.sh@10 -- # set +x 00:25:28.388 ************************************ 00:25:28.388 START TEST fio_dif_1_multi_subsystems 00:25:28.388 ************************************ 00:25:28.388 11:55:56 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1129 -- # fio_dif_1_multi_subsystems 00:25:28.388 11:55:56 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@92 -- # local files=1 00:25:28.388 11:55:56 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@94 -- # create_subsystems 0 1 00:25:28.388 11:55:56 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@28 -- # local sub 00:25:28.388 11:55:56 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@30 -- # for sub in "$@" 00:25:28.388 11:55:56 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@31 -- # create_subsystem 0 00:25:28.388 11:55:56 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@18 -- # local sub_id=0 00:25:28.388 11:55:56 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 1 00:25:28.388 11:55:56 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:28.388 11:55:56 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:25:28.388 bdev_null0 00:25:28.388 11:55:56 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:28.388 11:55:56 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:25:28.388 11:55:56 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:28.388 11:55:56 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:25:28.388 11:55:56 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:28.388 11:55:56 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:25:28.388 11:55:56 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:28.388 11:55:56 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:25:28.388 11:55:56 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:28.388 11:55:56 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.3 -s 4420 00:25:28.389 11:55:56 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:28.389 11:55:56 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:25:28.389 [2024-11-28 11:55:56.838724] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:25:28.389 11:55:56 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:28.389 11:55:56 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@30 -- # for sub in "$@" 00:25:28.389 11:55:56 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@31 -- # create_subsystem 1 00:25:28.389 11:55:56 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@18 -- # local sub_id=1 00:25:28.389 11:55:56 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null1 64 512 --md-size 16 --dif-type 1 00:25:28.389 11:55:56 nvmf_dif.fio_dif_1_multi_subsystems -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:25:28.389 11:55:56 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:25:28.389 bdev_null1 00:25:28.389 11:55:56 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:28.389 11:55:56 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 --serial-number 53313233-1 --allow-any-host 00:25:28.389 11:55:56 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:28.389 11:55:56 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:25:28.389 11:55:56 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:28.389 11:55:56 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 bdev_null1 00:25:28.389 11:55:56 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:28.389 11:55:56 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:25:28.389 11:55:56 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:28.389 11:55:56 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 00:25:28.389 11:55:56 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:28.389 11:55:56 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:25:28.389 11:55:56 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:28.389 11:55:56 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@95 -- # fio /dev/fd/62 00:25:28.389 11:55:56 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@95 -- # create_json_sub_conf 0 1 00:25:28.389 11:55:56 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@51 -- # gen_nvmf_target_json 0 1 00:25:28.389 11:55:56 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@560 -- # config=() 00:25:28.389 11:55:56 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@560 -- # local subsystem config 00:25:28.389 11:55:56 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:25:28.389 11:55:56 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:25:28.389 { 00:25:28.389 "params": { 00:25:28.389 "name": "Nvme$subsystem", 00:25:28.389 "trtype": "$TEST_TRANSPORT", 00:25:28.389 "traddr": "$NVMF_FIRST_TARGET_IP", 00:25:28.389 "adrfam": "ipv4", 00:25:28.389 "trsvcid": "$NVMF_PORT", 00:25:28.389 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:25:28.389 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:25:28.389 "hdgst": ${hdgst:-false}, 00:25:28.389 "ddgst": ${ddgst:-false} 00:25:28.389 }, 00:25:28.389 "method": "bdev_nvme_attach_controller" 00:25:28.389 } 00:25:28.389 EOF 00:25:28.389 )") 00:25:28.389 11:55:56 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:25:28.389 11:55:56 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@82 -- # gen_fio_conf 00:25:28.389 11:55:56 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1360 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:25:28.389 11:55:56 
nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@54 -- # local file 00:25:28.389 11:55:56 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@56 -- # cat 00:25:28.389 11:55:56 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio 00:25:28.389 11:55:56 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@582 -- # cat 00:25:28.389 11:55:56 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:25:28.389 11:55:56 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1343 -- # local sanitizers 00:25:28.389 11:55:56 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1344 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:25:28.389 11:55:56 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1345 -- # shift 00:25:28.389 11:55:56 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1347 -- # local asan_lib= 00:25:28.389 11:55:56 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:25:28.389 11:55:56 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:25:28.389 11:55:56 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:25:28.389 { 00:25:28.389 "params": { 00:25:28.389 "name": "Nvme$subsystem", 00:25:28.389 "trtype": "$TEST_TRANSPORT", 00:25:28.389 "traddr": "$NVMF_FIRST_TARGET_IP", 00:25:28.389 "adrfam": "ipv4", 00:25:28.389 "trsvcid": "$NVMF_PORT", 00:25:28.389 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:25:28.389 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:25:28.389 "hdgst": ${hdgst:-false}, 00:25:28.389 "ddgst": ${ddgst:-false} 00:25:28.389 }, 00:25:28.389 "method": "bdev_nvme_attach_controller" 00:25:28.389 } 00:25:28.389 EOF 00:25:28.389 )") 00:25:28.389 11:55:56 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1349 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:25:28.389 11:55:56 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1349 -- # grep libasan 00:25:28.389 11:55:56 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:25:28.389 11:55:56 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@582 -- # cat 00:25:28.389 11:55:56 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@72 -- # (( file = 1 )) 00:25:28.389 11:55:56 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@72 -- # (( file <= files )) 00:25:28.389 11:55:56 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@73 -- # cat 00:25:28.389 11:55:56 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@72 -- # (( file++ )) 00:25:28.389 11:55:56 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@72 -- # (( file <= files )) 00:25:28.389 11:55:56 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@584 -- # jq . 
00:25:28.389 11:55:56 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@585 -- # IFS=, 00:25:28.389 11:55:56 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:25:28.389 "params": { 00:25:28.389 "name": "Nvme0", 00:25:28.389 "trtype": "tcp", 00:25:28.389 "traddr": "10.0.0.3", 00:25:28.389 "adrfam": "ipv4", 00:25:28.389 "trsvcid": "4420", 00:25:28.389 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:25:28.389 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:25:28.389 "hdgst": false, 00:25:28.389 "ddgst": false 00:25:28.389 }, 00:25:28.389 "method": "bdev_nvme_attach_controller" 00:25:28.389 },{ 00:25:28.389 "params": { 00:25:28.389 "name": "Nvme1", 00:25:28.389 "trtype": "tcp", 00:25:28.389 "traddr": "10.0.0.3", 00:25:28.389 "adrfam": "ipv4", 00:25:28.389 "trsvcid": "4420", 00:25:28.389 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:25:28.389 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:25:28.389 "hdgst": false, 00:25:28.389 "ddgst": false 00:25:28.389 }, 00:25:28.389 "method": "bdev_nvme_attach_controller" 00:25:28.389 }' 00:25:28.389 11:55:56 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1349 -- # asan_lib= 00:25:28.389 11:55:56 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1350 -- # [[ -n '' ]] 00:25:28.389 11:55:56 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:25:28.389 11:55:56 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1349 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:25:28.389 11:55:56 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:25:28.389 11:55:56 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1349 -- # grep libclang_rt.asan 00:25:28.389 11:55:56 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1349 -- # asan_lib= 00:25:28.389 11:55:56 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1350 -- # [[ -n '' ]] 00:25:28.389 11:55:56 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1356 -- # LD_PRELOAD=' /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev' 00:25:28.389 11:55:56 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:25:28.389 filename0: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=4 00:25:28.389 filename1: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=4 00:25:28.389 fio-3.35 00:25:28.389 Starting 2 threads 00:25:38.366 00:25:38.366 filename0: (groupid=0, jobs=1): err= 0: pid=100146: Thu Nov 28 11:56:07 2024 00:25:38.366 read: IOPS=5748, BW=22.5MiB/s (23.5MB/s)(225MiB/10001msec) 00:25:38.366 slat (usec): min=5, max=405, avg=12.25, stdev= 5.84 00:25:38.366 clat (usec): min=340, max=4922, avg=662.00, stdev=48.07 00:25:38.366 lat (usec): min=346, max=4947, avg=674.25, stdev=48.51 00:25:38.366 clat percentiles (usec): 00:25:38.366 | 1.00th=[ 603], 5.00th=[ 619], 10.00th=[ 627], 20.00th=[ 635], 00:25:38.366 | 30.00th=[ 644], 40.00th=[ 652], 50.00th=[ 660], 60.00th=[ 668], 00:25:38.366 | 70.00th=[ 676], 80.00th=[ 685], 90.00th=[ 701], 95.00th=[ 709], 00:25:38.366 | 99.00th=[ 758], 99.50th=[ 799], 99.90th=[ 898], 99.95th=[ 955], 00:25:38.366 | 99.99th=[ 1336] 00:25:38.366 bw ( KiB/s): min=22848, max=23168, per=50.02%, avg=23014.74, stdev=88.97, samples=19 00:25:38.366 iops : min= 5712, max= 5792, 
avg=5753.68, stdev=22.24, samples=19 00:25:38.366 lat (usec) : 500=0.13%, 750=98.66%, 1000=1.19% 00:25:38.366 lat (msec) : 2=0.02%, 10=0.01% 00:25:38.366 cpu : usr=88.73%, sys=9.60%, ctx=100, majf=0, minf=0 00:25:38.366 IO depths : 1=25.0%, 2=50.0%, 4=25.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:25:38.366 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:25:38.366 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:25:38.366 issued rwts: total=57492,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:25:38.366 latency : target=0, window=0, percentile=100.00%, depth=4 00:25:38.366 filename1: (groupid=0, jobs=1): err= 0: pid=100147: Thu Nov 28 11:56:07 2024 00:25:38.366 read: IOPS=5752, BW=22.5MiB/s (23.6MB/s)(225MiB/10001msec) 00:25:38.366 slat (nsec): min=5984, max=88290, avg=11851.20, stdev=4321.55 00:25:38.366 clat (usec): min=349, max=6260, avg=663.41, stdev=58.49 00:25:38.366 lat (usec): min=357, max=6278, avg=675.27, stdev=59.06 00:25:38.366 clat percentiles (usec): 00:25:38.366 | 1.00th=[ 578], 5.00th=[ 603], 10.00th=[ 619], 20.00th=[ 635], 00:25:38.366 | 30.00th=[ 652], 40.00th=[ 660], 50.00th=[ 668], 60.00th=[ 668], 00:25:38.366 | 70.00th=[ 676], 80.00th=[ 693], 90.00th=[ 701], 95.00th=[ 717], 00:25:38.366 | 99.00th=[ 750], 99.50th=[ 766], 99.90th=[ 832], 99.95th=[ 857], 00:25:38.366 | 99.99th=[ 930] 00:25:38.366 bw ( KiB/s): min=22880, max=23168, per=50.06%, avg=23029.89, stdev=86.69, samples=19 00:25:38.366 iops : min= 5720, max= 5792, avg=5757.47, stdev=21.67, samples=19 00:25:38.366 lat (usec) : 500=0.09%, 750=98.92%, 1000=0.98% 00:25:38.366 lat (msec) : 10=0.01% 00:25:38.366 cpu : usr=88.80%, sys=9.81%, ctx=21, majf=0, minf=0 00:25:38.366 IO depths : 1=25.0%, 2=50.0%, 4=25.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:25:38.366 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:25:38.366 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:25:38.366 issued rwts: total=57532,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:25:38.366 latency : target=0, window=0, percentile=100.00%, depth=4 00:25:38.366 00:25:38.366 Run status group 0 (all jobs): 00:25:38.366 READ: bw=44.9MiB/s (47.1MB/s), 22.5MiB/s-22.5MiB/s (23.5MB/s-23.6MB/s), io=449MiB (471MB), run=10001-10001msec 00:25:38.366 11:56:07 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@96 -- # destroy_subsystems 0 1 00:25:38.366 11:56:07 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@43 -- # local sub 00:25:38.366 11:56:07 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@45 -- # for sub in "$@" 00:25:38.366 11:56:07 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@46 -- # destroy_subsystem 0 00:25:38.366 11:56:07 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@36 -- # local sub_id=0 00:25:38.366 11:56:07 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:25:38.366 11:56:07 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:38.366 11:56:07 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:25:38.366 11:56:07 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:38.366 11:56:07 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:25:38.366 11:56:07 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:38.366 11:56:07 nvmf_dif.fio_dif_1_multi_subsystems -- 
common/autotest_common.sh@10 -- # set +x 00:25:38.366 11:56:07 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:38.366 11:56:07 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@45 -- # for sub in "$@" 00:25:38.366 11:56:07 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@46 -- # destroy_subsystem 1 00:25:38.366 11:56:07 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@36 -- # local sub_id=1 00:25:38.366 11:56:07 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:25:38.366 11:56:07 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:38.366 11:56:07 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:25:38.366 11:56:07 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:38.366 11:56:07 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null1 00:25:38.366 11:56:07 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:38.366 11:56:07 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:25:38.366 ************************************ 00:25:38.366 END TEST fio_dif_1_multi_subsystems 00:25:38.366 ************************************ 00:25:38.366 11:56:07 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:38.366 00:25:38.366 real 0m11.129s 00:25:38.366 user 0m18.519s 00:25:38.366 sys 0m2.220s 00:25:38.366 11:56:07 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1130 -- # xtrace_disable 00:25:38.366 11:56:07 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:25:38.366 11:56:07 nvmf_dif -- target/dif.sh@143 -- # run_test fio_dif_rand_params fio_dif_rand_params 00:25:38.366 11:56:07 nvmf_dif -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:25:38.366 11:56:07 nvmf_dif -- common/autotest_common.sh@1111 -- # xtrace_disable 00:25:38.366 11:56:07 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:25:38.366 ************************************ 00:25:38.366 START TEST fio_dif_rand_params 00:25:38.366 ************************************ 00:25:38.366 11:56:07 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1129 -- # fio_dif_rand_params 00:25:38.366 11:56:07 nvmf_dif.fio_dif_rand_params -- target/dif.sh@100 -- # local NULL_DIF 00:25:38.366 11:56:07 nvmf_dif.fio_dif_rand_params -- target/dif.sh@101 -- # local bs numjobs runtime iodepth files 00:25:38.366 11:56:07 nvmf_dif.fio_dif_rand_params -- target/dif.sh@103 -- # NULL_DIF=3 00:25:38.366 11:56:07 nvmf_dif.fio_dif_rand_params -- target/dif.sh@103 -- # bs=128k 00:25:38.366 11:56:07 nvmf_dif.fio_dif_rand_params -- target/dif.sh@103 -- # numjobs=3 00:25:38.367 11:56:07 nvmf_dif.fio_dif_rand_params -- target/dif.sh@103 -- # iodepth=3 00:25:38.367 11:56:07 nvmf_dif.fio_dif_rand_params -- target/dif.sh@103 -- # runtime=5 00:25:38.367 11:56:07 nvmf_dif.fio_dif_rand_params -- target/dif.sh@105 -- # create_subsystems 0 00:25:38.367 11:56:07 nvmf_dif.fio_dif_rand_params -- target/dif.sh@28 -- # local sub 00:25:38.367 11:56:07 nvmf_dif.fio_dif_rand_params -- target/dif.sh@30 -- # for sub in "$@" 00:25:38.367 11:56:07 nvmf_dif.fio_dif_rand_params -- target/dif.sh@31 -- # create_subsystem 0 00:25:38.367 11:56:07 nvmf_dif.fio_dif_rand_params -- target/dif.sh@18 -- # local sub_id=0 00:25:38.367 11:56:07 
nvmf_dif.fio_dif_rand_params -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 3 00:25:38.367 11:56:07 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:38.367 11:56:07 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:25:38.367 bdev_null0 00:25:38.367 11:56:08 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:38.367 11:56:08 nvmf_dif.fio_dif_rand_params -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:25:38.367 11:56:08 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:38.367 11:56:08 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:25:38.367 11:56:08 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:38.367 11:56:08 nvmf_dif.fio_dif_rand_params -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:25:38.367 11:56:08 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:38.367 11:56:08 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:25:38.367 11:56:08 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:38.367 11:56:08 nvmf_dif.fio_dif_rand_params -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.3 -s 4420 00:25:38.367 11:56:08 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:38.367 11:56:08 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:25:38.367 [2024-11-28 11:56:08.040267] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:25:38.367 11:56:08 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:38.367 11:56:08 nvmf_dif.fio_dif_rand_params -- target/dif.sh@106 -- # fio /dev/fd/62 00:25:38.367 11:56:08 nvmf_dif.fio_dif_rand_params -- target/dif.sh@106 -- # create_json_sub_conf 0 00:25:38.367 11:56:08 nvmf_dif.fio_dif_rand_params -- target/dif.sh@51 -- # gen_nvmf_target_json 0 00:25:38.367 11:56:08 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@560 -- # config=() 00:25:38.367 11:56:08 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@560 -- # local subsystem config 00:25:38.367 11:56:08 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:25:38.367 11:56:08 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:25:38.367 { 00:25:38.367 "params": { 00:25:38.367 "name": "Nvme$subsystem", 00:25:38.367 "trtype": "$TEST_TRANSPORT", 00:25:38.367 "traddr": "$NVMF_FIRST_TARGET_IP", 00:25:38.367 "adrfam": "ipv4", 00:25:38.367 "trsvcid": "$NVMF_PORT", 00:25:38.367 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:25:38.367 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:25:38.367 "hdgst": ${hdgst:-false}, 00:25:38.367 "ddgst": ${ddgst:-false} 00:25:38.367 }, 00:25:38.367 "method": "bdev_nvme_attach_controller" 00:25:38.367 } 00:25:38.367 EOF 00:25:38.367 )") 00:25:38.367 11:56:08 nvmf_dif.fio_dif_rand_params -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:25:38.367 11:56:08 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1360 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf 
/dev/fd/62 /dev/fd/61 00:25:38.367 11:56:08 nvmf_dif.fio_dif_rand_params -- target/dif.sh@82 -- # gen_fio_conf 00:25:38.367 11:56:08 nvmf_dif.fio_dif_rand_params -- target/dif.sh@54 -- # local file 00:25:38.367 11:56:08 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio 00:25:38.367 11:56:08 nvmf_dif.fio_dif_rand_params -- target/dif.sh@56 -- # cat 00:25:38.367 11:56:08 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:25:38.367 11:56:08 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1343 -- # local sanitizers 00:25:38.367 11:56:08 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1344 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:25:38.367 11:56:08 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # shift 00:25:38.367 11:56:08 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@582 -- # cat 00:25:38.367 11:56:08 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1347 -- # local asan_lib= 00:25:38.367 11:56:08 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:25:38.367 11:56:08 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:25:38.367 11:56:08 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file = 1 )) 00:25:38.367 11:56:08 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file <= files )) 00:25:38.367 11:56:08 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # grep libasan 00:25:38.367 11:56:08 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:25:38.367 11:56:08 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@584 -- # jq . 
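(Reader's sketch, not part of the test output.) The subsystem that the fio job below reads from was created a few entries earlier with four RPCs: bdev_null_create with 16-byte metadata and DIF type 3, nvmf_create_subsystem, nvmf_subsystem_add_ns, and nvmf_subsystem_add_listener. The same setup can be reproduced outside the test framework with SPDK's rpc.py; the arguments are copied from the trace, while the rpc.py path and the default RPC socket are assumptions:

# Sketch only: standalone equivalent of the rpc_cmd calls traced above.
RPC=/home/vagrant/spdk_repo/spdk/scripts/rpc.py   # assumed location; default /var/tmp/spdk.sock socket assumed
$RPC bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 3
$RPC nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host
$RPC nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0
$RPC nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.3 -s 4420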
00:25:38.367 11:56:08 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@585 -- # IFS=, 00:25:38.367 11:56:08 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:25:38.367 "params": { 00:25:38.367 "name": "Nvme0", 00:25:38.367 "trtype": "tcp", 00:25:38.367 "traddr": "10.0.0.3", 00:25:38.367 "adrfam": "ipv4", 00:25:38.367 "trsvcid": "4420", 00:25:38.367 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:25:38.367 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:25:38.367 "hdgst": false, 00:25:38.367 "ddgst": false 00:25:38.367 }, 00:25:38.367 "method": "bdev_nvme_attach_controller" 00:25:38.367 }' 00:25:38.367 11:56:08 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # asan_lib= 00:25:38.367 11:56:08 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1350 -- # [[ -n '' ]] 00:25:38.367 11:56:08 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:25:38.367 11:56:08 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:25:38.367 11:56:08 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # grep libclang_rt.asan 00:25:38.367 11:56:08 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:25:38.367 11:56:08 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # asan_lib= 00:25:38.367 11:56:08 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1350 -- # [[ -n '' ]] 00:25:38.367 11:56:08 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1356 -- # LD_PRELOAD=' /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev' 00:25:38.367 11:56:08 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:25:38.367 filename0: (g=0): rw=randread, bs=(R) 128KiB-128KiB, (W) 128KiB-128KiB, (T) 128KiB-128KiB, ioengine=spdk_bdev, iodepth=3 00:25:38.367 ... 
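(Reader's sketch, not part of the test output.) The fio_bdev wrapper traced above preloads the SPDK bdev fio plugin and hands fio two generated files over /dev/fd: the NVMe-oF attach config printed just above, and the fio job description. A minimal standalone sketch follows; the job-file body is inferred from the parameters visible in the trace (randread, bs=128k, iodepth=3, numjobs=3, runtime=5), while the bdev name Nvme0n1, thread=1 and time_based are assumptions about what gen_fio_conf emits, not shown verbatim in the log:

# Sketch only: standalone equivalent of 'fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61'.
# target.json is assumed to hold the bdev_nvme_attach_controller config printed above, wrapped in the
# usual subsystems/bdev/config envelope (the trace does not show that envelope verbatim).
cat > dif.fio <<'EOF'
[global]
ioengine=spdk_bdev
thread=1
rw=randread
bs=128k
iodepth=3
numjobs=3
runtime=5
time_based=1
[filename0]
filename=Nvme0n1
EOF
LD_PRELOAD=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev \
    /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf dif_target.json dif.fio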
00:25:38.367 fio-3.35 00:25:38.367 Starting 3 threads 00:25:44.932 00:25:44.932 filename0: (groupid=0, jobs=1): err= 0: pid=100305: Thu Nov 28 11:56:13 2024 00:25:44.932 read: IOPS=298, BW=37.3MiB/s (39.1MB/s)(187MiB/5004msec) 00:25:44.932 slat (nsec): min=6316, max=82620, avg=16549.58, stdev=8479.08 00:25:44.932 clat (usec): min=6531, max=12172, avg=10008.33, stdev=562.56 00:25:44.932 lat (usec): min=6538, max=12191, avg=10024.88, stdev=562.08 00:25:44.932 clat percentiles (usec): 00:25:44.933 | 1.00th=[ 9503], 5.00th=[ 9634], 10.00th=[ 9634], 20.00th=[ 9634], 00:25:44.933 | 30.00th=[ 9634], 40.00th=[ 9765], 50.00th=[ 9765], 60.00th=[ 9765], 00:25:44.933 | 70.00th=[ 9896], 80.00th=[10552], 90.00th=[10814], 95.00th=[11207], 00:25:44.933 | 99.00th=[11731], 99.50th=[11994], 99.90th=[12125], 99.95th=[12125], 00:25:44.933 | 99.99th=[12125] 00:25:44.933 bw ( KiB/s): min=33792, max=39936, per=33.25%, avg=38144.00, stdev=2103.25, samples=9 00:25:44.933 iops : min= 264, max= 312, avg=298.00, stdev=16.43, samples=9 00:25:44.933 lat (msec) : 10=72.16%, 20=27.84% 00:25:44.933 cpu : usr=93.94%, sys=5.38%, ctx=57, majf=0, minf=0 00:25:44.933 IO depths : 1=33.3%, 2=66.7%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:25:44.933 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:25:44.933 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:25:44.933 issued rwts: total=1494,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:25:44.933 latency : target=0, window=0, percentile=100.00%, depth=3 00:25:44.933 filename0: (groupid=0, jobs=1): err= 0: pid=100306: Thu Nov 28 11:56:13 2024 00:25:44.933 read: IOPS=299, BW=37.4MiB/s (39.2MB/s)(187MiB/5002msec) 00:25:44.933 slat (nsec): min=6040, max=91366, avg=15835.19, stdev=10799.70 00:25:44.933 clat (usec): min=4048, max=12423, avg=9986.90, stdev=610.25 00:25:44.933 lat (usec): min=4057, max=12458, avg=10002.73, stdev=608.48 00:25:44.933 clat percentiles (usec): 00:25:44.933 | 1.00th=[ 9241], 5.00th=[ 9634], 10.00th=[ 9634], 20.00th=[ 9634], 00:25:44.933 | 30.00th=[ 9634], 40.00th=[ 9765], 50.00th=[ 9765], 60.00th=[ 9765], 00:25:44.933 | 70.00th=[ 9896], 80.00th=[10552], 90.00th=[10814], 95.00th=[11207], 00:25:44.933 | 99.00th=[11600], 99.50th=[11731], 99.90th=[12387], 99.95th=[12387], 00:25:44.933 | 99.99th=[12387] 00:25:44.933 bw ( KiB/s): min=34560, max=39936, per=33.32%, avg=38229.33, stdev=1987.09, samples=9 00:25:44.933 iops : min= 270, max= 312, avg=298.67, stdev=15.52, samples=9 00:25:44.933 lat (msec) : 10=71.81%, 20=28.19% 00:25:44.933 cpu : usr=93.48%, sys=5.72%, ctx=12, majf=0, minf=0 00:25:44.933 IO depths : 1=33.3%, 2=66.7%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:25:44.933 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:25:44.933 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:25:44.933 issued rwts: total=1497,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:25:44.933 latency : target=0, window=0, percentile=100.00%, depth=3 00:25:44.933 filename0: (groupid=0, jobs=1): err= 0: pid=100307: Thu Nov 28 11:56:13 2024 00:25:44.933 read: IOPS=298, BW=37.3MiB/s (39.1MB/s)(187MiB/5004msec) 00:25:44.933 slat (nsec): min=6084, max=71058, avg=16210.81, stdev=10373.66 00:25:44.933 clat (usec): min=6598, max=12218, avg=10010.95, stdev=553.96 00:25:44.933 lat (usec): min=6616, max=12231, avg=10027.16, stdev=552.35 00:25:44.933 clat percentiles (usec): 00:25:44.933 | 1.00th=[ 9503], 5.00th=[ 9634], 10.00th=[ 9634], 20.00th=[ 9634], 00:25:44.933 | 30.00th=[ 9634], 40.00th=[ 
9765], 50.00th=[ 9765], 60.00th=[ 9765], 00:25:44.933 | 70.00th=[ 9896], 80.00th=[10683], 90.00th=[10814], 95.00th=[11207], 00:25:44.933 | 99.00th=[11731], 99.50th=[11994], 99.90th=[12256], 99.95th=[12256], 00:25:44.933 | 99.99th=[12256] 00:25:44.933 bw ( KiB/s): min=34491, max=39936, per=33.24%, avg=38136.33, stdev=2047.22, samples=9 00:25:44.933 iops : min= 269, max= 312, avg=297.89, stdev=16.10, samples=9 00:25:44.933 lat (msec) : 10=71.89%, 20=28.11% 00:25:44.933 cpu : usr=94.28%, sys=5.16%, ctx=6, majf=0, minf=0 00:25:44.933 IO depths : 1=33.3%, 2=66.7%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:25:44.933 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:25:44.933 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:25:44.933 issued rwts: total=1494,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:25:44.933 latency : target=0, window=0, percentile=100.00%, depth=3 00:25:44.933 00:25:44.933 Run status group 0 (all jobs): 00:25:44.933 READ: bw=112MiB/s (117MB/s), 37.3MiB/s-37.4MiB/s (39.1MB/s-39.2MB/s), io=561MiB (588MB), run=5002-5004msec 00:25:44.933 11:56:14 nvmf_dif.fio_dif_rand_params -- target/dif.sh@107 -- # destroy_subsystems 0 00:25:44.933 11:56:14 nvmf_dif.fio_dif_rand_params -- target/dif.sh@43 -- # local sub 00:25:44.933 11:56:14 nvmf_dif.fio_dif_rand_params -- target/dif.sh@45 -- # for sub in "$@" 00:25:44.933 11:56:14 nvmf_dif.fio_dif_rand_params -- target/dif.sh@46 -- # destroy_subsystem 0 00:25:44.933 11:56:14 nvmf_dif.fio_dif_rand_params -- target/dif.sh@36 -- # local sub_id=0 00:25:44.933 11:56:14 nvmf_dif.fio_dif_rand_params -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:25:44.933 11:56:14 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:44.933 11:56:14 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:25:44.933 11:56:14 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:44.933 11:56:14 nvmf_dif.fio_dif_rand_params -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:25:44.933 11:56:14 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:44.933 11:56:14 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:25:44.933 11:56:14 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:44.933 11:56:14 nvmf_dif.fio_dif_rand_params -- target/dif.sh@109 -- # NULL_DIF=2 00:25:44.933 11:56:14 nvmf_dif.fio_dif_rand_params -- target/dif.sh@109 -- # bs=4k 00:25:44.933 11:56:14 nvmf_dif.fio_dif_rand_params -- target/dif.sh@109 -- # numjobs=8 00:25:44.933 11:56:14 nvmf_dif.fio_dif_rand_params -- target/dif.sh@109 -- # iodepth=16 00:25:44.933 11:56:14 nvmf_dif.fio_dif_rand_params -- target/dif.sh@109 -- # runtime= 00:25:44.933 11:56:14 nvmf_dif.fio_dif_rand_params -- target/dif.sh@109 -- # files=2 00:25:44.933 11:56:14 nvmf_dif.fio_dif_rand_params -- target/dif.sh@111 -- # create_subsystems 0 1 2 00:25:44.933 11:56:14 nvmf_dif.fio_dif_rand_params -- target/dif.sh@28 -- # local sub 00:25:44.933 11:56:14 nvmf_dif.fio_dif_rand_params -- target/dif.sh@30 -- # for sub in "$@" 00:25:44.933 11:56:14 nvmf_dif.fio_dif_rand_params -- target/dif.sh@31 -- # create_subsystem 0 00:25:44.933 11:56:14 nvmf_dif.fio_dif_rand_params -- target/dif.sh@18 -- # local sub_id=0 00:25:44.933 11:56:14 nvmf_dif.fio_dif_rand_params -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 2 00:25:44.933 
11:56:14 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:44.933 11:56:14 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:25:44.933 bdev_null0 00:25:44.933 11:56:14 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:44.933 11:56:14 nvmf_dif.fio_dif_rand_params -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:25:44.933 11:56:14 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:44.933 11:56:14 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:25:44.933 11:56:14 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:44.933 11:56:14 nvmf_dif.fio_dif_rand_params -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:25:44.933 11:56:14 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:44.933 11:56:14 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:25:44.933 11:56:14 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:44.933 11:56:14 nvmf_dif.fio_dif_rand_params -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.3 -s 4420 00:25:44.933 11:56:14 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:44.933 11:56:14 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:25:44.933 [2024-11-28 11:56:14.056329] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:25:44.933 11:56:14 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:44.933 11:56:14 nvmf_dif.fio_dif_rand_params -- target/dif.sh@30 -- # for sub in "$@" 00:25:44.933 11:56:14 nvmf_dif.fio_dif_rand_params -- target/dif.sh@31 -- # create_subsystem 1 00:25:44.933 11:56:14 nvmf_dif.fio_dif_rand_params -- target/dif.sh@18 -- # local sub_id=1 00:25:44.933 11:56:14 nvmf_dif.fio_dif_rand_params -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null1 64 512 --md-size 16 --dif-type 2 00:25:44.933 11:56:14 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:44.933 11:56:14 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:25:44.933 bdev_null1 00:25:44.933 11:56:14 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:44.933 11:56:14 nvmf_dif.fio_dif_rand_params -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 --serial-number 53313233-1 --allow-any-host 00:25:44.933 11:56:14 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:44.933 11:56:14 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:25:44.933 11:56:14 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:44.933 11:56:14 nvmf_dif.fio_dif_rand_params -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 bdev_null1 00:25:44.933 11:56:14 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:44.933 11:56:14 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:25:44.933 11:56:14 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:44.933 11:56:14 
nvmf_dif.fio_dif_rand_params -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 00:25:44.933 11:56:14 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:44.933 11:56:14 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:25:44.933 11:56:14 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:44.933 11:56:14 nvmf_dif.fio_dif_rand_params -- target/dif.sh@30 -- # for sub in "$@" 00:25:44.933 11:56:14 nvmf_dif.fio_dif_rand_params -- target/dif.sh@31 -- # create_subsystem 2 00:25:44.933 11:56:14 nvmf_dif.fio_dif_rand_params -- target/dif.sh@18 -- # local sub_id=2 00:25:44.933 11:56:14 nvmf_dif.fio_dif_rand_params -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null2 64 512 --md-size 16 --dif-type 2 00:25:44.933 11:56:14 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:44.933 11:56:14 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:25:44.933 bdev_null2 00:25:44.933 11:56:14 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:44.933 11:56:14 nvmf_dif.fio_dif_rand_params -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 --serial-number 53313233-2 --allow-any-host 00:25:44.933 11:56:14 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:44.933 11:56:14 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:25:44.933 11:56:14 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:44.933 11:56:14 nvmf_dif.fio_dif_rand_params -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 bdev_null2 00:25:44.933 11:56:14 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:44.933 11:56:14 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:25:44.933 11:56:14 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:44.933 11:56:14 nvmf_dif.fio_dif_rand_params -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.3 -s 4420 00:25:44.933 11:56:14 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:44.933 11:56:14 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:25:44.933 11:56:14 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:44.933 11:56:14 nvmf_dif.fio_dif_rand_params -- target/dif.sh@112 -- # fio /dev/fd/62 00:25:44.933 11:56:14 nvmf_dif.fio_dif_rand_params -- target/dif.sh@112 -- # create_json_sub_conf 0 1 2 00:25:44.933 11:56:14 nvmf_dif.fio_dif_rand_params -- target/dif.sh@51 -- # gen_nvmf_target_json 0 1 2 00:25:44.933 11:56:14 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@560 -- # config=() 00:25:44.933 11:56:14 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@560 -- # local subsystem config 00:25:44.933 11:56:14 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:25:44.933 11:56:14 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:25:44.933 { 00:25:44.933 "params": { 00:25:44.933 "name": "Nvme$subsystem", 00:25:44.933 "trtype": "$TEST_TRANSPORT", 00:25:44.933 "traddr": "$NVMF_FIRST_TARGET_IP", 00:25:44.933 "adrfam": "ipv4", 00:25:44.933 "trsvcid": "$NVMF_PORT", 00:25:44.933 "subnqn": 
"nqn.2016-06.io.spdk:cnode$subsystem", 00:25:44.933 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:25:44.933 "hdgst": ${hdgst:-false}, 00:25:44.933 "ddgst": ${ddgst:-false} 00:25:44.933 }, 00:25:44.933 "method": "bdev_nvme_attach_controller" 00:25:44.933 } 00:25:44.933 EOF 00:25:44.933 )") 00:25:44.933 11:56:14 nvmf_dif.fio_dif_rand_params -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:25:44.933 11:56:14 nvmf_dif.fio_dif_rand_params -- target/dif.sh@82 -- # gen_fio_conf 00:25:44.933 11:56:14 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1360 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:25:44.933 11:56:14 nvmf_dif.fio_dif_rand_params -- target/dif.sh@54 -- # local file 00:25:44.933 11:56:14 nvmf_dif.fio_dif_rand_params -- target/dif.sh@56 -- # cat 00:25:44.933 11:56:14 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio 00:25:44.933 11:56:14 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:25:44.933 11:56:14 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1343 -- # local sanitizers 00:25:44.933 11:56:14 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@582 -- # cat 00:25:44.933 11:56:14 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1344 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:25:44.933 11:56:14 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # shift 00:25:44.933 11:56:14 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1347 -- # local asan_lib= 00:25:44.933 11:56:14 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:25:44.933 11:56:14 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:25:44.933 11:56:14 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # grep libasan 00:25:44.933 11:56:14 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:25:44.933 11:56:14 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file = 1 )) 00:25:44.933 11:56:14 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file <= files )) 00:25:44.933 11:56:14 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:25:44.933 11:56:14 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:25:44.933 { 00:25:44.933 "params": { 00:25:44.933 "name": "Nvme$subsystem", 00:25:44.933 "trtype": "$TEST_TRANSPORT", 00:25:44.933 "traddr": "$NVMF_FIRST_TARGET_IP", 00:25:44.933 "adrfam": "ipv4", 00:25:44.933 "trsvcid": "$NVMF_PORT", 00:25:44.933 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:25:44.933 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:25:44.933 "hdgst": ${hdgst:-false}, 00:25:44.933 "ddgst": ${ddgst:-false} 00:25:44.933 }, 00:25:44.933 "method": "bdev_nvme_attach_controller" 00:25:44.933 } 00:25:44.933 EOF 00:25:44.933 )") 00:25:44.933 11:56:14 nvmf_dif.fio_dif_rand_params -- target/dif.sh@73 -- # cat 00:25:44.933 11:56:14 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@582 -- # cat 00:25:44.933 11:56:14 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file++ )) 00:25:44.933 11:56:14 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file <= files )) 00:25:44.933 11:56:14 nvmf_dif.fio_dif_rand_params -- target/dif.sh@73 -- # cat 
00:25:44.933 11:56:14 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:25:44.933 11:56:14 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file++ )) 00:25:44.933 11:56:14 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file <= files )) 00:25:44.933 11:56:14 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:25:44.933 { 00:25:44.933 "params": { 00:25:44.933 "name": "Nvme$subsystem", 00:25:44.933 "trtype": "$TEST_TRANSPORT", 00:25:44.933 "traddr": "$NVMF_FIRST_TARGET_IP", 00:25:44.933 "adrfam": "ipv4", 00:25:44.933 "trsvcid": "$NVMF_PORT", 00:25:44.933 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:25:44.933 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:25:44.933 "hdgst": ${hdgst:-false}, 00:25:44.933 "ddgst": ${ddgst:-false} 00:25:44.933 }, 00:25:44.933 "method": "bdev_nvme_attach_controller" 00:25:44.933 } 00:25:44.933 EOF 00:25:44.933 )") 00:25:44.934 11:56:14 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@582 -- # cat 00:25:44.934 11:56:14 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@584 -- # jq . 00:25:44.934 11:56:14 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@585 -- # IFS=, 00:25:44.934 11:56:14 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:25:44.934 "params": { 00:25:44.934 "name": "Nvme0", 00:25:44.934 "trtype": "tcp", 00:25:44.934 "traddr": "10.0.0.3", 00:25:44.934 "adrfam": "ipv4", 00:25:44.934 "trsvcid": "4420", 00:25:44.934 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:25:44.934 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:25:44.934 "hdgst": false, 00:25:44.934 "ddgst": false 00:25:44.934 }, 00:25:44.934 "method": "bdev_nvme_attach_controller" 00:25:44.934 },{ 00:25:44.934 "params": { 00:25:44.934 "name": "Nvme1", 00:25:44.934 "trtype": "tcp", 00:25:44.934 "traddr": "10.0.0.3", 00:25:44.934 "adrfam": "ipv4", 00:25:44.934 "trsvcid": "4420", 00:25:44.934 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:25:44.934 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:25:44.934 "hdgst": false, 00:25:44.934 "ddgst": false 00:25:44.934 }, 00:25:44.934 "method": "bdev_nvme_attach_controller" 00:25:44.934 },{ 00:25:44.934 "params": { 00:25:44.934 "name": "Nvme2", 00:25:44.934 "trtype": "tcp", 00:25:44.934 "traddr": "10.0.0.3", 00:25:44.934 "adrfam": "ipv4", 00:25:44.934 "trsvcid": "4420", 00:25:44.934 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:25:44.934 "hostnqn": "nqn.2016-06.io.spdk:host2", 00:25:44.934 "hdgst": false, 00:25:44.934 "ddgst": false 00:25:44.934 }, 00:25:44.934 "method": "bdev_nvme_attach_controller" 00:25:44.934 }' 00:25:44.934 11:56:14 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # asan_lib= 00:25:44.934 11:56:14 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1350 -- # [[ -n '' ]] 00:25:44.934 11:56:14 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:25:44.934 11:56:14 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:25:44.934 11:56:14 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # grep libclang_rt.asan 00:25:44.934 11:56:14 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:25:44.934 11:56:14 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # asan_lib= 00:25:44.934 11:56:14 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1350 -- # [[ -n '' ]] 00:25:44.934 11:56:14 nvmf_dif.fio_dif_rand_params -- 
common/autotest_common.sh@1356 -- # LD_PRELOAD=' /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev' 00:25:44.934 11:56:14 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:25:44.934 filename0: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=16 00:25:44.934 ... 00:25:44.934 filename1: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=16 00:25:44.934 ... 00:25:44.934 filename2: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=16 00:25:44.934 ... 00:25:44.934 fio-3.35 00:25:44.934 Starting 24 threads 00:25:57.144 00:25:57.144 filename0: (groupid=0, jobs=1): err= 0: pid=100402: Thu Nov 28 11:56:25 2024 00:25:57.144 read: IOPS=248, BW=995KiB/s (1019kB/s)(9.77MiB/10050msec) 00:25:57.144 slat (usec): min=4, max=4046, avg=25.32, stdev=172.63 00:25:57.144 clat (msec): min=3, max=121, avg=64.13, stdev=18.03 00:25:57.144 lat (msec): min=3, max=121, avg=64.16, stdev=18.03 00:25:57.144 clat percentiles (msec): 00:25:57.144 | 1.00th=[ 14], 5.00th=[ 40], 10.00th=[ 43], 20.00th=[ 48], 00:25:57.144 | 30.00th=[ 55], 40.00th=[ 62], 50.00th=[ 65], 60.00th=[ 68], 00:25:57.144 | 70.00th=[ 71], 80.00th=[ 75], 90.00th=[ 91], 95.00th=[ 97], 00:25:57.144 | 99.00th=[ 111], 99.50th=[ 114], 99.90th=[ 114], 99.95th=[ 114], 00:25:57.144 | 99.99th=[ 123] 00:25:57.144 bw ( KiB/s): min= 720, max= 1408, per=4.07%, avg=994.30, stdev=136.86, samples=20 00:25:57.144 iops : min= 180, max= 352, avg=248.55, stdev=34.23, samples=20 00:25:57.144 lat (msec) : 4=0.08%, 20=1.12%, 50=22.00%, 100=73.04%, 250=3.76% 00:25:57.144 cpu : usr=43.12%, sys=1.80%, ctx=1371, majf=0, minf=9 00:25:57.144 IO depths : 1=0.1%, 2=1.6%, 4=6.6%, 8=76.2%, 16=15.4%, 32=0.0%, >=64=0.0% 00:25:57.144 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:25:57.144 complete : 0=0.0%, 4=89.2%, 8=9.4%, 16=1.5%, 32=0.0%, 64=0.0%, >=64=0.0% 00:25:57.144 issued rwts: total=2500,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:25:57.144 latency : target=0, window=0, percentile=100.00%, depth=16 00:25:57.144 filename0: (groupid=0, jobs=1): err= 0: pid=100403: Thu Nov 28 11:56:25 2024 00:25:57.144 read: IOPS=249, BW=998KiB/s (1022kB/s)(9.80MiB/10060msec) 00:25:57.144 slat (usec): min=6, max=8028, avg=28.71, stdev=276.89 00:25:57.144 clat (msec): min=3, max=141, avg=63.92, stdev=19.49 00:25:57.144 lat (msec): min=3, max=141, avg=63.95, stdev=19.49 00:25:57.144 clat percentiles (msec): 00:25:57.144 | 1.00th=[ 9], 5.00th=[ 36], 10.00th=[ 43], 20.00th=[ 48], 00:25:57.144 | 30.00th=[ 56], 40.00th=[ 61], 50.00th=[ 62], 60.00th=[ 70], 00:25:57.144 | 70.00th=[ 72], 80.00th=[ 75], 90.00th=[ 94], 95.00th=[ 96], 00:25:57.144 | 99.00th=[ 108], 99.50th=[ 116], 99.90th=[ 131], 99.95th=[ 132], 00:25:57.144 | 99.99th=[ 142] 00:25:57.144 bw ( KiB/s): min= 712, max= 1648, per=4.08%, avg=997.60, stdev=181.50, samples=20 00:25:57.144 iops : min= 178, max= 412, avg=249.40, stdev=45.38, samples=20 00:25:57.144 lat (msec) : 4=0.08%, 10=1.12%, 20=1.35%, 50=23.23%, 100=70.88% 00:25:57.144 lat (msec) : 250=3.35% 00:25:57.144 cpu : usr=34.23%, sys=1.32%, ctx=899, majf=0, minf=9 00:25:57.144 IO depths : 1=0.1%, 2=0.9%, 4=3.5%, 8=79.1%, 16=16.5%, 32=0.0%, >=64=0.0% 00:25:57.144 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:25:57.144 complete : 0=0.0%, 4=88.6%, 8=10.6%, 16=0.8%, 32=0.0%, 64=0.0%, >=64=0.0% 
00:25:57.144 issued rwts: total=2510,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:25:57.144 latency : target=0, window=0, percentile=100.00%, depth=16 00:25:57.144 filename0: (groupid=0, jobs=1): err= 0: pid=100404: Thu Nov 28 11:56:25 2024 00:25:57.144 read: IOPS=249, BW=998KiB/s (1022kB/s)(9996KiB/10016msec) 00:25:57.144 slat (usec): min=4, max=12037, avg=37.64, stdev=381.10 00:25:57.144 clat (msec): min=21, max=144, avg=63.96, stdev=17.21 00:25:57.144 lat (msec): min=21, max=144, avg=64.00, stdev=17.21 00:25:57.144 clat percentiles (msec): 00:25:57.144 | 1.00th=[ 34], 5.00th=[ 40], 10.00th=[ 43], 20.00th=[ 48], 00:25:57.144 | 30.00th=[ 56], 40.00th=[ 61], 50.00th=[ 63], 60.00th=[ 68], 00:25:57.144 | 70.00th=[ 71], 80.00th=[ 74], 90.00th=[ 88], 95.00th=[ 96], 00:25:57.144 | 99.00th=[ 110], 99.50th=[ 111], 99.90th=[ 121], 99.95th=[ 129], 00:25:57.144 | 99.99th=[ 144] 00:25:57.144 bw ( KiB/s): min= 712, max= 1200, per=4.07%, avg=993.25, stdev=112.66, samples=20 00:25:57.144 iops : min= 178, max= 300, avg=248.30, stdev=28.17, samples=20 00:25:57.144 lat (msec) : 50=25.93%, 100=71.35%, 250=2.72% 00:25:57.144 cpu : usr=38.60%, sys=1.38%, ctx=1172, majf=0, minf=9 00:25:57.144 IO depths : 1=0.1%, 2=0.6%, 4=2.4%, 8=80.8%, 16=16.1%, 32=0.0%, >=64=0.0% 00:25:57.144 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:25:57.144 complete : 0=0.0%, 4=87.9%, 8=11.5%, 16=0.6%, 32=0.0%, 64=0.0%, >=64=0.0% 00:25:57.144 issued rwts: total=2499,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:25:57.144 latency : target=0, window=0, percentile=100.00%, depth=16 00:25:57.144 filename0: (groupid=0, jobs=1): err= 0: pid=100405: Thu Nov 28 11:56:25 2024 00:25:57.144 read: IOPS=257, BW=1030KiB/s (1054kB/s)(10.1MiB/10080msec) 00:25:57.144 slat (usec): min=6, max=8041, avg=23.90, stdev=236.96 00:25:57.144 clat (usec): min=974, max=137856, avg=61934.94, stdev=21162.77 00:25:57.144 lat (usec): min=981, max=137864, avg=61958.84, stdev=21160.81 00:25:57.144 clat percentiles (msec): 00:25:57.144 | 1.00th=[ 4], 5.00th=[ 22], 10.00th=[ 40], 20.00th=[ 47], 00:25:57.144 | 30.00th=[ 52], 40.00th=[ 59], 50.00th=[ 63], 60.00th=[ 68], 00:25:57.145 | 70.00th=[ 71], 80.00th=[ 77], 90.00th=[ 92], 95.00th=[ 96], 00:25:57.145 | 99.00th=[ 108], 99.50th=[ 109], 99.90th=[ 120], 99.95th=[ 121], 00:25:57.145 | 99.99th=[ 138] 00:25:57.145 bw ( KiB/s): min= 712, max= 1904, per=4.22%, avg=1031.60, stdev=233.45, samples=20 00:25:57.145 iops : min= 178, max= 476, avg=257.90, stdev=58.36, samples=20 00:25:57.145 lat (usec) : 1000=0.08% 00:25:57.145 lat (msec) : 4=1.16%, 10=2.31%, 20=1.31%, 50=23.66%, 100=68.02% 00:25:57.145 lat (msec) : 250=3.47% 00:25:57.145 cpu : usr=37.68%, sys=1.50%, ctx=1074, majf=0, minf=0 00:25:57.145 IO depths : 1=0.1%, 2=1.0%, 4=3.8%, 8=78.9%, 16=16.2%, 32=0.0%, >=64=0.0% 00:25:57.145 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:25:57.145 complete : 0=0.0%, 4=88.6%, 8=10.6%, 16=0.8%, 32=0.0%, 64=0.0%, >=64=0.0% 00:25:57.145 issued rwts: total=2595,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:25:57.145 latency : target=0, window=0, percentile=100.00%, depth=16 00:25:57.145 filename0: (groupid=0, jobs=1): err= 0: pid=100406: Thu Nov 28 11:56:25 2024 00:25:57.145 read: IOPS=240, BW=963KiB/s (986kB/s)(9668KiB/10041msec) 00:25:57.145 slat (usec): min=3, max=8036, avg=37.05, stdev=345.54 00:25:57.145 clat (msec): min=18, max=130, avg=66.27, stdev=18.64 00:25:57.145 lat (msec): min=18, max=130, avg=66.31, stdev=18.64 00:25:57.145 clat percentiles (msec): 00:25:57.145 | 1.00th=[ 24], 5.00th=[ 
40], 10.00th=[ 46], 20.00th=[ 49], 00:25:57.145 | 30.00th=[ 57], 40.00th=[ 61], 50.00th=[ 65], 60.00th=[ 70], 00:25:57.145 | 70.00th=[ 72], 80.00th=[ 82], 90.00th=[ 94], 95.00th=[ 103], 00:25:57.145 | 99.00th=[ 117], 99.50th=[ 120], 99.90th=[ 129], 99.95th=[ 131], 00:25:57.145 | 99.99th=[ 131] 00:25:57.145 bw ( KiB/s): min= 768, max= 1154, per=3.93%, avg=960.35, stdev=119.60, samples=20 00:25:57.145 iops : min= 192, max= 288, avg=240.05, stdev=29.85, samples=20 00:25:57.145 lat (msec) : 20=0.58%, 50=20.73%, 100=73.02%, 250=5.67% 00:25:57.145 cpu : usr=40.03%, sys=1.76%, ctx=1067, majf=0, minf=9 00:25:57.145 IO depths : 1=0.1%, 2=2.0%, 4=8.0%, 8=74.7%, 16=15.2%, 32=0.0%, >=64=0.0% 00:25:57.145 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:25:57.145 complete : 0=0.0%, 4=89.5%, 8=8.7%, 16=1.8%, 32=0.0%, 64=0.0%, >=64=0.0% 00:25:57.145 issued rwts: total=2417,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:25:57.145 latency : target=0, window=0, percentile=100.00%, depth=16 00:25:57.145 filename0: (groupid=0, jobs=1): err= 0: pid=100407: Thu Nov 28 11:56:25 2024 00:25:57.145 read: IOPS=254, BW=1016KiB/s (1040kB/s)(9.95MiB/10023msec) 00:25:57.145 slat (usec): min=4, max=8046, avg=28.26, stdev=239.91 00:25:57.145 clat (msec): min=22, max=144, avg=62.85, stdev=17.34 00:25:57.145 lat (msec): min=22, max=144, avg=62.88, stdev=17.35 00:25:57.145 clat percentiles (msec): 00:25:57.145 | 1.00th=[ 32], 5.00th=[ 40], 10.00th=[ 42], 20.00th=[ 47], 00:25:57.145 | 30.00th=[ 53], 40.00th=[ 58], 50.00th=[ 62], 60.00th=[ 67], 00:25:57.145 | 70.00th=[ 70], 80.00th=[ 74], 90.00th=[ 90], 95.00th=[ 96], 00:25:57.145 | 99.00th=[ 108], 99.50th=[ 109], 99.90th=[ 120], 99.95th=[ 132], 00:25:57.145 | 99.99th=[ 144] 00:25:57.145 bw ( KiB/s): min= 760, max= 1256, per=4.14%, avg=1012.85, stdev=111.84, samples=20 00:25:57.145 iops : min= 190, max= 314, avg=253.20, stdev=27.97, samples=20 00:25:57.145 lat (msec) : 50=27.61%, 100=69.32%, 250=3.06% 00:25:57.145 cpu : usr=37.44%, sys=1.52%, ctx=1443, majf=0, minf=9 00:25:57.145 IO depths : 1=0.1%, 2=0.6%, 4=2.4%, 8=81.1%, 16=15.9%, 32=0.0%, >=64=0.0% 00:25:57.145 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:25:57.145 complete : 0=0.0%, 4=87.8%, 8=11.6%, 16=0.5%, 32=0.0%, 64=0.0%, >=64=0.0% 00:25:57.145 issued rwts: total=2546,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:25:57.145 latency : target=0, window=0, percentile=100.00%, depth=16 00:25:57.145 filename0: (groupid=0, jobs=1): err= 0: pid=100408: Thu Nov 28 11:56:25 2024 00:25:57.145 read: IOPS=258, BW=1036KiB/s (1061kB/s)(10.2MiB/10036msec) 00:25:57.145 slat (usec): min=3, max=7808, avg=32.91, stdev=266.24 00:25:57.145 clat (msec): min=21, max=122, avg=61.59, stdev=17.17 00:25:57.145 lat (msec): min=21, max=122, avg=61.62, stdev=17.18 00:25:57.145 clat percentiles (msec): 00:25:57.145 | 1.00th=[ 33], 5.00th=[ 39], 10.00th=[ 41], 20.00th=[ 46], 00:25:57.145 | 30.00th=[ 50], 40.00th=[ 56], 50.00th=[ 61], 60.00th=[ 65], 00:25:57.145 | 70.00th=[ 69], 80.00th=[ 72], 90.00th=[ 88], 95.00th=[ 96], 00:25:57.145 | 99.00th=[ 110], 99.50th=[ 110], 99.90th=[ 123], 99.95th=[ 123], 00:25:57.145 | 99.99th=[ 123] 00:25:57.145 bw ( KiB/s): min= 768, max= 1152, per=4.23%, avg=1033.05, stdev=103.91, samples=20 00:25:57.145 iops : min= 192, max= 288, avg=258.25, stdev=25.98, samples=20 00:25:57.145 lat (msec) : 50=30.70%, 100=66.76%, 250=2.54% 00:25:57.145 cpu : usr=44.07%, sys=1.57%, ctx=1307, majf=0, minf=9 00:25:57.145 IO depths : 1=0.1%, 2=0.8%, 4=3.0%, 8=80.5%, 16=15.7%, 32=0.0%, >=64=0.0% 
00:25:57.145 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:25:57.145 complete : 0=0.0%, 4=87.8%, 8=11.5%, 16=0.7%, 32=0.0%, 64=0.0%, >=64=0.0% 00:25:57.145 issued rwts: total=2599,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:25:57.145 latency : target=0, window=0, percentile=100.00%, depth=16 00:25:57.145 filename0: (groupid=0, jobs=1): err= 0: pid=100409: Thu Nov 28 11:56:25 2024 00:25:57.145 read: IOPS=262, BW=1049KiB/s (1074kB/s)(10.3MiB/10020msec) 00:25:57.145 slat (usec): min=4, max=9027, avg=38.54, stdev=383.70 00:25:57.145 clat (msec): min=23, max=134, avg=60.86, stdev=17.88 00:25:57.145 lat (msec): min=23, max=134, avg=60.90, stdev=17.88 00:25:57.145 clat percentiles (msec): 00:25:57.145 | 1.00th=[ 28], 5.00th=[ 36], 10.00th=[ 38], 20.00th=[ 46], 00:25:57.145 | 30.00th=[ 48], 40.00th=[ 57], 50.00th=[ 61], 60.00th=[ 64], 00:25:57.145 | 70.00th=[ 70], 80.00th=[ 72], 90.00th=[ 86], 95.00th=[ 96], 00:25:57.145 | 99.00th=[ 108], 99.50th=[ 113], 99.90th=[ 118], 99.95th=[ 134], 00:25:57.145 | 99.99th=[ 136] 00:25:57.145 bw ( KiB/s): min= 768, max= 1232, per=4.28%, avg=1044.10, stdev=118.24, samples=20 00:25:57.145 iops : min= 192, max= 308, avg=261.00, stdev=29.59, samples=20 00:25:57.145 lat (msec) : 50=35.48%, 100=61.48%, 250=3.05% 00:25:57.145 cpu : usr=31.85%, sys=1.30%, ctx=961, majf=0, minf=9 00:25:57.145 IO depths : 1=0.1%, 2=0.3%, 4=1.2%, 8=82.7%, 16=15.8%, 32=0.0%, >=64=0.0% 00:25:57.145 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:25:57.145 complete : 0=0.0%, 4=87.2%, 8=12.6%, 16=0.3%, 32=0.0%, 64=0.0%, >=64=0.0% 00:25:57.145 issued rwts: total=2627,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:25:57.145 latency : target=0, window=0, percentile=100.00%, depth=16 00:25:57.145 filename1: (groupid=0, jobs=1): err= 0: pid=100410: Thu Nov 28 11:56:25 2024 00:25:57.145 read: IOPS=256, BW=1026KiB/s (1051kB/s)(10.0MiB/10007msec) 00:25:57.145 slat (usec): min=4, max=8038, avg=33.42, stdev=296.05 00:25:57.145 clat (msec): min=16, max=126, avg=62.22, stdev=17.74 00:25:57.145 lat (msec): min=16, max=126, avg=62.26, stdev=17.73 00:25:57.145 clat percentiles (msec): 00:25:57.145 | 1.00th=[ 28], 5.00th=[ 36], 10.00th=[ 40], 20.00th=[ 48], 00:25:57.145 | 30.00th=[ 50], 40.00th=[ 59], 50.00th=[ 61], 60.00th=[ 64], 00:25:57.145 | 70.00th=[ 71], 80.00th=[ 72], 90.00th=[ 86], 95.00th=[ 96], 00:25:57.145 | 99.00th=[ 108], 99.50th=[ 109], 99.90th=[ 121], 99.95th=[ 127], 00:25:57.145 | 99.99th=[ 127] 00:25:57.145 bw ( KiB/s): min= 768, max= 1248, per=4.23%, avg=1033.68, stdev=106.05, samples=19 00:25:57.145 iops : min= 192, max= 312, avg=258.42, stdev=26.51, samples=19 00:25:57.145 lat (msec) : 20=0.23%, 50=29.96%, 100=66.19%, 250=3.62% 00:25:57.145 cpu : usr=33.79%, sys=1.10%, ctx=925, majf=0, minf=9 00:25:57.145 IO depths : 1=0.1%, 2=0.2%, 4=0.9%, 8=82.6%, 16=16.2%, 32=0.0%, >=64=0.0% 00:25:57.145 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:25:57.145 complete : 0=0.0%, 4=87.4%, 8=12.4%, 16=0.2%, 32=0.0%, 64=0.0%, >=64=0.0% 00:25:57.145 issued rwts: total=2567,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:25:57.146 latency : target=0, window=0, percentile=100.00%, depth=16 00:25:57.146 filename1: (groupid=0, jobs=1): err= 0: pid=100411: Thu Nov 28 11:56:25 2024 00:25:57.146 read: IOPS=262, BW=1052KiB/s (1077kB/s)(10.3MiB/10027msec) 00:25:57.146 slat (usec): min=4, max=8052, avg=27.55, stdev=233.02 00:25:57.146 clat (msec): min=22, max=123, avg=60.68, stdev=17.36 00:25:57.146 lat (msec): min=22, max=123, avg=60.71, stdev=17.36 
00:25:57.146 clat percentiles (msec): 00:25:57.146 | 1.00th=[ 32], 5.00th=[ 37], 10.00th=[ 41], 20.00th=[ 45], 00:25:57.146 | 30.00th=[ 49], 40.00th=[ 56], 50.00th=[ 60], 60.00th=[ 65], 00:25:57.146 | 70.00th=[ 69], 80.00th=[ 72], 90.00th=[ 85], 95.00th=[ 95], 00:25:57.146 | 99.00th=[ 109], 99.50th=[ 110], 99.90th=[ 121], 99.95th=[ 124], 00:25:57.146 | 99.99th=[ 124] 00:25:57.146 bw ( KiB/s): min= 792, max= 1256, per=4.30%, avg=1050.05, stdev=118.01, samples=20 00:25:57.146 iops : min= 198, max= 314, avg=262.50, stdev=29.51, samples=20 00:25:57.146 lat (msec) : 50=31.79%, 100=65.02%, 250=3.19% 00:25:57.146 cpu : usr=39.67%, sys=1.59%, ctx=1350, majf=0, minf=9 00:25:57.146 IO depths : 1=0.1%, 2=0.3%, 4=1.0%, 8=82.8%, 16=15.9%, 32=0.0%, >=64=0.0% 00:25:57.146 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:25:57.146 complete : 0=0.0%, 4=87.1%, 8=12.6%, 16=0.2%, 32=0.0%, 64=0.0%, >=64=0.0% 00:25:57.146 issued rwts: total=2636,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:25:57.146 latency : target=0, window=0, percentile=100.00%, depth=16 00:25:57.146 filename1: (groupid=0, jobs=1): err= 0: pid=100412: Thu Nov 28 11:56:25 2024 00:25:57.146 read: IOPS=255, BW=1022KiB/s (1046kB/s)(10.0MiB/10042msec) 00:25:57.146 slat (usec): min=4, max=12046, avg=31.73, stdev=335.77 00:25:57.146 clat (msec): min=20, max=127, avg=62.46, stdev=17.65 00:25:57.146 lat (msec): min=20, max=127, avg=62.49, stdev=17.66 00:25:57.146 clat percentiles (msec): 00:25:57.146 | 1.00th=[ 32], 5.00th=[ 36], 10.00th=[ 41], 20.00th=[ 47], 00:25:57.146 | 30.00th=[ 52], 40.00th=[ 59], 50.00th=[ 61], 60.00th=[ 66], 00:25:57.146 | 70.00th=[ 71], 80.00th=[ 74], 90.00th=[ 87], 95.00th=[ 96], 00:25:57.146 | 99.00th=[ 108], 99.50th=[ 111], 99.90th=[ 121], 99.95th=[ 121], 00:25:57.146 | 99.99th=[ 128] 00:25:57.146 bw ( KiB/s): min= 680, max= 1216, per=4.17%, avg=1019.45, stdev=129.62, samples=20 00:25:57.146 iops : min= 170, max= 304, avg=254.85, stdev=32.39, samples=20 00:25:57.146 lat (msec) : 50=28.58%, 100=67.95%, 250=3.47% 00:25:57.146 cpu : usr=36.63%, sys=1.34%, ctx=1101, majf=0, minf=9 00:25:57.146 IO depths : 1=0.1%, 2=0.4%, 4=1.2%, 8=81.9%, 16=16.4%, 32=0.0%, >=64=0.0% 00:25:57.146 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:25:57.146 complete : 0=0.0%, 4=87.7%, 8=12.0%, 16=0.3%, 32=0.0%, 64=0.0%, >=64=0.0% 00:25:57.146 issued rwts: total=2565,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:25:57.146 latency : target=0, window=0, percentile=100.00%, depth=16 00:25:57.146 filename1: (groupid=0, jobs=1): err= 0: pid=100413: Thu Nov 28 11:56:25 2024 00:25:57.146 read: IOPS=255, BW=1023KiB/s (1047kB/s)(10.1MiB/10080msec) 00:25:57.146 slat (usec): min=6, max=4035, avg=18.72, stdev=79.79 00:25:57.146 clat (usec): min=938, max=118007, avg=62448.54, stdev=21001.91 00:25:57.146 lat (usec): min=949, max=118038, avg=62467.26, stdev=21003.33 00:25:57.146 clat percentiles (msec): 00:25:57.146 | 1.00th=[ 4], 5.00th=[ 22], 10.00th=[ 41], 20.00th=[ 48], 00:25:57.146 | 30.00th=[ 55], 40.00th=[ 62], 50.00th=[ 64], 60.00th=[ 68], 00:25:57.146 | 70.00th=[ 71], 80.00th=[ 75], 90.00th=[ 89], 95.00th=[ 97], 00:25:57.146 | 99.00th=[ 109], 99.50th=[ 111], 99.90th=[ 116], 99.95th=[ 117], 00:25:57.146 | 99.99th=[ 118] 00:25:57.146 bw ( KiB/s): min= 768, max= 2048, per=4.19%, avg=1024.40, stdev=261.41, samples=20 00:25:57.146 iops : min= 192, max= 512, avg=256.10, stdev=65.35, samples=20 00:25:57.146 lat (usec) : 1000=0.08% 00:25:57.146 lat (msec) : 4=1.63%, 10=2.02%, 20=1.24%, 50=20.92%, 100=70.55% 00:25:57.146 
lat (msec) : 250=3.57% 00:25:57.146 cpu : usr=44.77%, sys=1.77%, ctx=1552, majf=0, minf=0 00:25:57.146 IO depths : 1=0.1%, 2=1.1%, 4=4.0%, 8=78.4%, 16=16.4%, 32=0.0%, >=64=0.0% 00:25:57.146 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:25:57.146 complete : 0=0.0%, 4=88.9%, 8=10.2%, 16=0.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:25:57.146 issued rwts: total=2577,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:25:57.146 latency : target=0, window=0, percentile=100.00%, depth=16 00:25:57.146 filename1: (groupid=0, jobs=1): err= 0: pid=100414: Thu Nov 28 11:56:25 2024 00:25:57.146 read: IOPS=257, BW=1031KiB/s (1055kB/s)(10.1MiB/10052msec) 00:25:57.146 slat (usec): min=3, max=10050, avg=38.09, stdev=340.20 00:25:57.146 clat (msec): min=11, max=143, avg=61.88, stdev=18.03 00:25:57.146 lat (msec): min=11, max=144, avg=61.92, stdev=18.04 00:25:57.146 clat percentiles (msec): 00:25:57.146 | 1.00th=[ 16], 5.00th=[ 39], 10.00th=[ 41], 20.00th=[ 47], 00:25:57.146 | 30.00th=[ 52], 40.00th=[ 58], 50.00th=[ 63], 60.00th=[ 65], 00:25:57.146 | 70.00th=[ 70], 80.00th=[ 72], 90.00th=[ 87], 95.00th=[ 96], 00:25:57.146 | 99.00th=[ 109], 99.50th=[ 110], 99.90th=[ 111], 99.95th=[ 136], 00:25:57.146 | 99.99th=[ 144] 00:25:57.146 bw ( KiB/s): min= 712, max= 1408, per=4.22%, avg=1031.05, stdev=146.10, samples=20 00:25:57.146 iops : min= 178, max= 352, avg=257.75, stdev=36.51, samples=20 00:25:57.146 lat (msec) : 20=1.24%, 50=26.99%, 100=68.11%, 250=3.67% 00:25:57.146 cpu : usr=42.33%, sys=1.72%, ctx=1361, majf=0, minf=9 00:25:57.146 IO depths : 1=0.1%, 2=0.5%, 4=2.0%, 8=81.2%, 16=16.3%, 32=0.0%, >=64=0.0% 00:25:57.146 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:25:57.146 complete : 0=0.0%, 4=88.0%, 8=11.6%, 16=0.4%, 32=0.0%, 64=0.0%, >=64=0.0% 00:25:57.146 issued rwts: total=2590,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:25:57.146 latency : target=0, window=0, percentile=100.00%, depth=16 00:25:57.146 filename1: (groupid=0, jobs=1): err= 0: pid=100415: Thu Nov 28 11:56:25 2024 00:25:57.146 read: IOPS=256, BW=1025KiB/s (1049kB/s)(10.1MiB/10054msec) 00:25:57.146 slat (usec): min=5, max=8031, avg=43.38, stdev=397.99 00:25:57.146 clat (msec): min=5, max=120, avg=62.25, stdev=18.18 00:25:57.146 lat (msec): min=5, max=120, avg=62.30, stdev=18.18 00:25:57.146 clat percentiles (msec): 00:25:57.146 | 1.00th=[ 11], 5.00th=[ 36], 10.00th=[ 42], 20.00th=[ 48], 00:25:57.146 | 30.00th=[ 52], 40.00th=[ 60], 50.00th=[ 62], 60.00th=[ 66], 00:25:57.146 | 70.00th=[ 71], 80.00th=[ 72], 90.00th=[ 87], 95.00th=[ 97], 00:25:57.146 | 99.00th=[ 108], 99.50th=[ 111], 99.90th=[ 112], 99.95th=[ 112], 00:25:57.146 | 99.99th=[ 121] 00:25:57.146 bw ( KiB/s): min= 712, max= 1440, per=4.19%, avg=1023.85, stdev=149.82, samples=20 00:25:57.146 iops : min= 178, max= 360, avg=255.95, stdev=37.43, samples=20 00:25:57.146 lat (msec) : 10=0.58%, 20=1.28%, 50=25.85%, 100=69.18%, 250=3.11% 00:25:57.146 cpu : usr=38.61%, sys=1.33%, ctx=1107, majf=0, minf=9 00:25:57.146 IO depths : 1=0.1%, 2=0.2%, 4=0.9%, 8=82.1%, 16=16.7%, 32=0.0%, >=64=0.0% 00:25:57.146 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:25:57.146 complete : 0=0.0%, 4=87.8%, 8=11.9%, 16=0.2%, 32=0.0%, 64=0.0%, >=64=0.0% 00:25:57.146 issued rwts: total=2576,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:25:57.146 latency : target=0, window=0, percentile=100.00%, depth=16 00:25:57.146 filename1: (groupid=0, jobs=1): err= 0: pid=100416: Thu Nov 28 11:56:25 2024 00:25:57.146 read: IOPS=254, BW=1018KiB/s (1043kB/s)(10.0MiB/10073msec) 
00:25:57.146 slat (usec): min=3, max=8034, avg=24.56, stdev=224.02 00:25:57.146 clat (msec): min=3, max=130, avg=62.63, stdev=20.60 00:25:57.146 lat (msec): min=3, max=130, avg=62.65, stdev=20.61 00:25:57.146 clat percentiles (msec): 00:25:57.146 | 1.00th=[ 7], 5.00th=[ 34], 10.00th=[ 39], 20.00th=[ 48], 00:25:57.146 | 30.00th=[ 52], 40.00th=[ 60], 50.00th=[ 61], 60.00th=[ 68], 00:25:57.147 | 70.00th=[ 72], 80.00th=[ 75], 90.00th=[ 95], 95.00th=[ 96], 00:25:57.147 | 99.00th=[ 108], 99.50th=[ 114], 99.90th=[ 120], 99.95th=[ 121], 00:25:57.147 | 99.99th=[ 131] 00:25:57.147 bw ( KiB/s): min= 664, max= 1761, per=4.17%, avg=1017.95, stdev=216.87, samples=20 00:25:57.147 iops : min= 166, max= 440, avg=254.45, stdev=54.14, samples=20 00:25:57.147 lat (msec) : 4=0.08%, 10=2.34%, 20=1.33%, 50=23.67%, 100=68.68% 00:25:57.147 lat (msec) : 250=3.90% 00:25:57.147 cpu : usr=34.20%, sys=1.23%, ctx=956, majf=0, minf=9 00:25:57.147 IO depths : 1=0.1%, 2=0.7%, 4=2.3%, 8=80.2%, 16=16.7%, 32=0.0%, >=64=0.0% 00:25:57.147 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:25:57.147 complete : 0=0.0%, 4=88.4%, 8=11.1%, 16=0.5%, 32=0.0%, 64=0.0%, >=64=0.0% 00:25:57.147 issued rwts: total=2564,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:25:57.147 latency : target=0, window=0, percentile=100.00%, depth=16 00:25:57.147 filename1: (groupid=0, jobs=1): err= 0: pid=100417: Thu Nov 28 11:56:25 2024 00:25:57.147 read: IOPS=257, BW=1029KiB/s (1054kB/s)(10.1MiB/10024msec) 00:25:57.147 slat (usec): min=4, max=8040, avg=41.54, stdev=394.17 00:25:57.147 clat (msec): min=24, max=120, avg=62.03, stdev=16.94 00:25:57.147 lat (msec): min=24, max=120, avg=62.07, stdev=16.96 00:25:57.147 clat percentiles (msec): 00:25:57.147 | 1.00th=[ 34], 5.00th=[ 38], 10.00th=[ 42], 20.00th=[ 48], 00:25:57.147 | 30.00th=[ 50], 40.00th=[ 59], 50.00th=[ 61], 60.00th=[ 65], 00:25:57.147 | 70.00th=[ 70], 80.00th=[ 72], 90.00th=[ 86], 95.00th=[ 96], 00:25:57.147 | 99.00th=[ 108], 99.50th=[ 111], 99.90th=[ 115], 99.95th=[ 116], 00:25:57.147 | 99.99th=[ 121] 00:25:57.147 bw ( KiB/s): min= 712, max= 1192, per=4.20%, avg=1025.20, stdev=106.36, samples=20 00:25:57.147 iops : min= 178, max= 298, avg=256.30, stdev=26.59, samples=20 00:25:57.147 lat (msec) : 50=31.37%, 100=66.15%, 250=2.48% 00:25:57.147 cpu : usr=35.19%, sys=1.03%, ctx=1003, majf=0, minf=9 00:25:57.147 IO depths : 1=0.1%, 2=0.2%, 4=0.9%, 8=82.6%, 16=16.3%, 32=0.0%, >=64=0.0% 00:25:57.147 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:25:57.147 complete : 0=0.0%, 4=87.4%, 8=12.4%, 16=0.2%, 32=0.0%, 64=0.0%, >=64=0.0% 00:25:57.147 issued rwts: total=2579,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:25:57.147 latency : target=0, window=0, percentile=100.00%, depth=16 00:25:57.147 filename2: (groupid=0, jobs=1): err= 0: pid=100418: Thu Nov 28 11:56:25 2024 00:25:57.147 read: IOPS=253, BW=1015KiB/s (1040kB/s)(9.98MiB/10061msec) 00:25:57.147 slat (usec): min=6, max=8032, avg=28.46, stdev=313.26 00:25:57.147 clat (msec): min=3, max=133, avg=62.83, stdev=19.37 00:25:57.147 lat (msec): min=3, max=134, avg=62.86, stdev=19.39 00:25:57.147 clat percentiles (msec): 00:25:57.147 | 1.00th=[ 9], 5.00th=[ 35], 10.00th=[ 40], 20.00th=[ 48], 00:25:57.147 | 30.00th=[ 53], 40.00th=[ 60], 50.00th=[ 63], 60.00th=[ 68], 00:25:57.147 | 70.00th=[ 71], 80.00th=[ 75], 90.00th=[ 90], 95.00th=[ 96], 00:25:57.147 | 99.00th=[ 108], 99.50th=[ 111], 99.90th=[ 118], 99.95th=[ 127], 00:25:57.147 | 99.99th=[ 134] 00:25:57.147 bw ( KiB/s): min= 720, max= 1656, per=4.16%, avg=1015.20, 
stdev=187.78, samples=20 00:25:57.147 iops : min= 180, max= 414, avg=253.80, stdev=46.94, samples=20 00:25:57.147 lat (msec) : 4=0.08%, 10=1.10%, 20=1.33%, 50=25.37%, 100=68.75% 00:25:57.147 lat (msec) : 250=3.37% 00:25:57.147 cpu : usr=32.20%, sys=1.14%, ctx=948, majf=0, minf=9 00:25:57.147 IO depths : 1=0.1%, 2=0.4%, 4=1.2%, 8=81.5%, 16=16.9%, 32=0.0%, >=64=0.0% 00:25:57.147 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:25:57.147 complete : 0=0.0%, 4=88.1%, 8=11.7%, 16=0.3%, 32=0.0%, 64=0.0%, >=64=0.0% 00:25:57.147 issued rwts: total=2554,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:25:57.147 latency : target=0, window=0, percentile=100.00%, depth=16 00:25:57.147 filename2: (groupid=0, jobs=1): err= 0: pid=100419: Thu Nov 28 11:56:25 2024 00:25:57.147 read: IOPS=260, BW=1044KiB/s (1069kB/s)(10.2MiB/10012msec) 00:25:57.147 slat (usec): min=4, max=8020, avg=33.95, stdev=248.21 00:25:57.147 clat (msec): min=20, max=139, avg=61.18, stdev=17.69 00:25:57.147 lat (msec): min=20, max=139, avg=61.21, stdev=17.70 00:25:57.147 clat percentiles (msec): 00:25:57.147 | 1.00th=[ 28], 5.00th=[ 37], 10.00th=[ 41], 20.00th=[ 46], 00:25:57.147 | 30.00th=[ 48], 40.00th=[ 58], 50.00th=[ 61], 60.00th=[ 64], 00:25:57.147 | 70.00th=[ 69], 80.00th=[ 72], 90.00th=[ 87], 95.00th=[ 96], 00:25:57.147 | 99.00th=[ 111], 99.50th=[ 121], 99.90th=[ 121], 99.95th=[ 140], 00:25:57.147 | 99.99th=[ 140] 00:25:57.147 bw ( KiB/s): min= 768, max= 1208, per=4.25%, avg=1038.40, stdev=108.75, samples=20 00:25:57.147 iops : min= 192, max= 302, avg=259.60, stdev=27.19, samples=20 00:25:57.147 lat (msec) : 50=33.50%, 100=63.44%, 250=3.06% 00:25:57.147 cpu : usr=38.15%, sys=1.29%, ctx=1195, majf=0, minf=9 00:25:57.147 IO depths : 1=0.1%, 2=0.1%, 4=0.4%, 8=83.3%, 16=16.1%, 32=0.0%, >=64=0.0% 00:25:57.147 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:25:57.147 complete : 0=0.0%, 4=87.1%, 8=12.8%, 16=0.1%, 32=0.0%, 64=0.0%, >=64=0.0% 00:25:57.147 issued rwts: total=2612,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:25:57.147 latency : target=0, window=0, percentile=100.00%, depth=16 00:25:57.147 filename2: (groupid=0, jobs=1): err= 0: pid=100420: Thu Nov 28 11:56:25 2024 00:25:57.147 read: IOPS=261, BW=1044KiB/s (1069kB/s)(10.2MiB/10022msec) 00:25:57.147 slat (usec): min=4, max=8034, avg=38.69, stdev=358.81 00:25:57.147 clat (msec): min=25, max=119, avg=61.08, stdev=17.15 00:25:57.147 lat (msec): min=26, max=119, avg=61.12, stdev=17.15 00:25:57.147 clat percentiles (msec): 00:25:57.147 | 1.00th=[ 32], 5.00th=[ 37], 10.00th=[ 40], 20.00th=[ 47], 00:25:57.147 | 30.00th=[ 48], 40.00th=[ 56], 50.00th=[ 61], 60.00th=[ 65], 00:25:57.147 | 70.00th=[ 71], 80.00th=[ 72], 90.00th=[ 85], 95.00th=[ 95], 00:25:57.147 | 99.00th=[ 108], 99.50th=[ 109], 99.90th=[ 115], 99.95th=[ 115], 00:25:57.147 | 99.99th=[ 121] 00:25:57.147 bw ( KiB/s): min= 768, max= 1232, per=4.27%, avg=1042.85, stdev=111.55, samples=20 00:25:57.147 iops : min= 192, max= 308, avg=260.70, stdev=27.90, samples=20 00:25:57.147 lat (msec) : 50=32.95%, 100=64.37%, 250=2.68% 00:25:57.147 cpu : usr=37.91%, sys=1.30%, ctx=1071, majf=0, minf=9 00:25:57.147 IO depths : 1=0.1%, 2=0.3%, 4=1.0%, 8=82.7%, 16=15.9%, 32=0.0%, >=64=0.0% 00:25:57.147 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:25:57.147 complete : 0=0.0%, 4=87.2%, 8=12.6%, 16=0.2%, 32=0.0%, 64=0.0%, >=64=0.0% 00:25:57.147 issued rwts: total=2616,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:25:57.147 latency : target=0, window=0, percentile=100.00%, depth=16 
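(Reader's annotation, not part of the test output.) A quick cross-check of the per-job numbers in this group: with a 4096-byte block size, bandwidth in KiB/s should be roughly IOPS times 4. For the filename2 job with pid=100420 directly above, that works out exactly:

# Sanity check of the pid=100420 figures above: 261 IOPS x 4 KiB per I/O.
echo $(( 261 * 4 ))   # -> 1044, matching the reported BW=1044KiB/s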
00:25:57.147 filename2: (groupid=0, jobs=1): err= 0: pid=100421: Thu Nov 28 11:56:25 2024 00:25:57.147 read: IOPS=249, BW=999KiB/s (1023kB/s)(9.78MiB/10024msec) 00:25:57.147 slat (usec): min=5, max=8027, avg=28.13, stdev=240.23 00:25:57.147 clat (msec): min=24, max=119, avg=63.93, stdev=16.86 00:25:57.147 lat (msec): min=24, max=119, avg=63.95, stdev=16.87 00:25:57.147 clat percentiles (msec): 00:25:57.147 | 1.00th=[ 34], 5.00th=[ 37], 10.00th=[ 46], 20.00th=[ 48], 00:25:57.147 | 30.00th=[ 58], 40.00th=[ 61], 50.00th=[ 62], 60.00th=[ 69], 00:25:57.147 | 70.00th=[ 72], 80.00th=[ 73], 90.00th=[ 90], 95.00th=[ 96], 00:25:57.147 | 99.00th=[ 107], 99.50th=[ 108], 99.90th=[ 110], 99.95th=[ 112], 00:25:57.147 | 99.99th=[ 121] 00:25:57.147 bw ( KiB/s): min= 768, max= 1200, per=4.07%, avg=994.80, stdev=102.66, samples=20 00:25:57.147 iops : min= 192, max= 300, avg=248.70, stdev=25.66, samples=20 00:25:57.147 lat (msec) : 50=25.97%, 100=71.55%, 250=2.48% 00:25:57.147 cpu : usr=32.94%, sys=0.97%, ctx=917, majf=0, minf=9 00:25:57.147 IO depths : 1=0.1%, 2=0.4%, 4=1.4%, 8=81.6%, 16=16.6%, 32=0.0%, >=64=0.0% 00:25:57.148 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:25:57.148 complete : 0=0.0%, 4=87.9%, 8=11.8%, 16=0.3%, 32=0.0%, 64=0.0%, >=64=0.0% 00:25:57.148 issued rwts: total=2503,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:25:57.148 latency : target=0, window=0, percentile=100.00%, depth=16 00:25:57.148 filename2: (groupid=0, jobs=1): err= 0: pid=100422: Thu Nov 28 11:56:25 2024 00:25:57.148 read: IOPS=257, BW=1031KiB/s (1056kB/s)(10.1MiB/10019msec) 00:25:57.148 slat (usec): min=5, max=10046, avg=31.69, stdev=277.27 00:25:57.148 clat (msec): min=24, max=115, avg=61.95, stdev=17.03 00:25:57.148 lat (msec): min=24, max=115, avg=61.98, stdev=17.03 00:25:57.148 clat percentiles (msec): 00:25:57.148 | 1.00th=[ 34], 5.00th=[ 39], 10.00th=[ 42], 20.00th=[ 46], 00:25:57.148 | 30.00th=[ 51], 40.00th=[ 57], 50.00th=[ 62], 60.00th=[ 65], 00:25:57.148 | 70.00th=[ 70], 80.00th=[ 73], 90.00th=[ 87], 95.00th=[ 96], 00:25:57.148 | 99.00th=[ 106], 99.50th=[ 108], 99.90th=[ 116], 99.95th=[ 116], 00:25:57.148 | 99.99th=[ 116] 00:25:57.148 bw ( KiB/s): min= 736, max= 1192, per=4.20%, avg=1026.15, stdev=111.79, samples=20 00:25:57.148 iops : min= 184, max= 298, avg=256.50, stdev=27.95, samples=20 00:25:57.148 lat (msec) : 50=29.71%, 100=67.43%, 250=2.87% 00:25:57.148 cpu : usr=41.08%, sys=1.91%, ctx=1455, majf=0, minf=10 00:25:57.148 IO depths : 1=0.1%, 2=0.3%, 4=1.2%, 8=82.3%, 16=16.2%, 32=0.0%, >=64=0.0% 00:25:57.148 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:25:57.148 complete : 0=0.0%, 4=87.5%, 8=12.2%, 16=0.3%, 32=0.0%, 64=0.0%, >=64=0.0% 00:25:57.148 issued rwts: total=2582,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:25:57.148 latency : target=0, window=0, percentile=100.00%, depth=16 00:25:57.148 filename2: (groupid=0, jobs=1): err= 0: pid=100423: Thu Nov 28 11:56:25 2024 00:25:57.148 read: IOPS=236, BW=946KiB/s (969kB/s)(9480KiB/10018msec) 00:25:57.148 slat (usec): min=4, max=8036, avg=37.78, stdev=388.24 00:25:57.148 clat (msec): min=23, max=121, avg=67.47, stdev=16.97 00:25:57.148 lat (msec): min=23, max=121, avg=67.51, stdev=16.96 00:25:57.148 clat percentiles (msec): 00:25:57.148 | 1.00th=[ 35], 5.00th=[ 43], 10.00th=[ 47], 20.00th=[ 52], 00:25:57.148 | 30.00th=[ 60], 40.00th=[ 62], 50.00th=[ 67], 60.00th=[ 71], 00:25:57.148 | 70.00th=[ 73], 80.00th=[ 83], 90.00th=[ 94], 95.00th=[ 99], 00:25:57.148 | 99.00th=[ 108], 99.50th=[ 114], 99.90th=[ 120], 99.95th=[ 
120], 00:25:57.148 | 99.99th=[ 123] 00:25:57.148 bw ( KiB/s): min= 744, max= 1152, per=3.85%, avg=941.45, stdev=107.97, samples=20 00:25:57.148 iops : min= 186, max= 288, avg=235.35, stdev=26.98, samples=20 00:25:57.148 lat (msec) : 50=19.11%, 100=77.59%, 250=3.29% 00:25:57.148 cpu : usr=31.92%, sys=1.28%, ctx=964, majf=0, minf=9 00:25:57.148 IO depths : 1=0.1%, 2=2.2%, 4=8.6%, 8=73.9%, 16=15.4%, 32=0.0%, >=64=0.0% 00:25:57.148 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:25:57.148 complete : 0=0.0%, 4=89.9%, 8=8.3%, 16=1.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:25:57.148 issued rwts: total=2370,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:25:57.148 latency : target=0, window=0, percentile=100.00%, depth=16 00:25:57.148 filename2: (groupid=0, jobs=1): err= 0: pid=100424: Thu Nov 28 11:56:25 2024 00:25:57.148 read: IOPS=263, BW=1055KiB/s (1080kB/s)(10.3MiB/10012msec) 00:25:57.148 slat (usec): min=3, max=8004, avg=29.76, stdev=221.92 00:25:57.148 clat (msec): min=23, max=127, avg=60.51, stdev=17.46 00:25:57.148 lat (msec): min=23, max=127, avg=60.54, stdev=17.46 00:25:57.148 clat percentiles (msec): 00:25:57.148 | 1.00th=[ 31], 5.00th=[ 39], 10.00th=[ 41], 20.00th=[ 45], 00:25:57.148 | 30.00th=[ 48], 40.00th=[ 56], 50.00th=[ 61], 60.00th=[ 64], 00:25:57.148 | 70.00th=[ 68], 80.00th=[ 72], 90.00th=[ 86], 95.00th=[ 95], 00:25:57.148 | 99.00th=[ 106], 99.50th=[ 110], 99.90th=[ 117], 99.95th=[ 128], 00:25:57.148 | 99.99th=[ 128] 00:25:57.148 bw ( KiB/s): min= 768, max= 1200, per=4.30%, avg=1050.05, stdev=114.45, samples=20 00:25:57.148 iops : min= 192, max= 300, avg=262.50, stdev=28.63, samples=20 00:25:57.148 lat (msec) : 50=34.27%, 100=62.63%, 250=3.10% 00:25:57.148 cpu : usr=43.33%, sys=1.69%, ctx=1532, majf=0, minf=9 00:25:57.148 IO depths : 1=0.1%, 2=0.3%, 4=1.0%, 8=83.0%, 16=15.7%, 32=0.0%, >=64=0.0% 00:25:57.148 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:25:57.148 complete : 0=0.0%, 4=87.0%, 8=12.8%, 16=0.2%, 32=0.0%, 64=0.0%, >=64=0.0% 00:25:57.148 issued rwts: total=2641,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:25:57.148 latency : target=0, window=0, percentile=100.00%, depth=16 00:25:57.148 filename2: (groupid=0, jobs=1): err= 0: pid=100425: Thu Nov 28 11:56:25 2024 00:25:57.148 read: IOPS=270, BW=1081KiB/s (1107kB/s)(10.6MiB/10001msec) 00:25:57.148 slat (usec): min=4, max=9089, avg=36.64, stdev=353.62 00:25:57.148 clat (usec): min=849, max=123066, avg=59064.10, stdev=19493.07 00:25:57.148 lat (usec): min=855, max=123106, avg=59100.74, stdev=19495.25 00:25:57.148 clat percentiles (usec): 00:25:57.148 | 1.00th=[ 1352], 5.00th=[ 35914], 10.00th=[ 37487], 20.00th=[ 45351], 00:25:57.148 | 30.00th=[ 47973], 40.00th=[ 51119], 50.00th=[ 59507], 60.00th=[ 61604], 00:25:57.148 | 70.00th=[ 69731], 80.00th=[ 71828], 90.00th=[ 84411], 95.00th=[ 95945], 00:25:57.148 | 99.00th=[107480], 99.50th=[108528], 99.90th=[123208], 99.95th=[123208], 00:25:57.148 | 99.99th=[123208] 00:25:57.148 bw ( KiB/s): min= 793, max= 1200, per=4.35%, avg=1063.89, stdev=89.09, samples=19 00:25:57.148 iops : min= 198, max= 300, avg=265.89, stdev=22.31, samples=19 00:25:57.148 lat (usec) : 1000=0.48% 00:25:57.148 lat (msec) : 2=1.37%, 4=0.30%, 10=0.22%, 20=0.15%, 50=35.04% 00:25:57.148 lat (msec) : 100=60.45%, 250=2.00% 00:25:57.148 cpu : usr=32.35%, sys=1.08%, ctx=899, majf=0, minf=9 00:25:57.148 IO depths : 1=0.1%, 2=0.1%, 4=0.4%, 8=83.5%, 16=16.0%, 32=0.0%, >=64=0.0% 00:25:57.148 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:25:57.148 complete : 
0=0.0%, 4=87.0%, 8=12.9%, 16=0.1%, 32=0.0%, 64=0.0%, >=64=0.0% 00:25:57.148 issued rwts: total=2703,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:25:57.148 latency : target=0, window=0, percentile=100.00%, depth=16 00:25:57.148 00:25:57.148 Run status group 0 (all jobs): 00:25:57.148 READ: bw=23.8MiB/s (25.0MB/s), 946KiB/s-1081KiB/s (969kB/s-1107kB/s), io=240MiB (252MB), run=10001-10080msec 00:25:57.148 11:56:25 nvmf_dif.fio_dif_rand_params -- target/dif.sh@113 -- # destroy_subsystems 0 1 2 00:25:57.148 11:56:25 nvmf_dif.fio_dif_rand_params -- target/dif.sh@43 -- # local sub 00:25:57.148 11:56:25 nvmf_dif.fio_dif_rand_params -- target/dif.sh@45 -- # for sub in "$@" 00:25:57.148 11:56:25 nvmf_dif.fio_dif_rand_params -- target/dif.sh@46 -- # destroy_subsystem 0 00:25:57.148 11:56:25 nvmf_dif.fio_dif_rand_params -- target/dif.sh@36 -- # local sub_id=0 00:25:57.148 11:56:25 nvmf_dif.fio_dif_rand_params -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:25:57.148 11:56:25 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:57.148 11:56:25 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:25:57.148 11:56:25 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:57.148 11:56:25 nvmf_dif.fio_dif_rand_params -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:25:57.148 11:56:25 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:57.148 11:56:25 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:25:57.148 11:56:25 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:57.148 11:56:25 nvmf_dif.fio_dif_rand_params -- target/dif.sh@45 -- # for sub in "$@" 00:25:57.148 11:56:25 nvmf_dif.fio_dif_rand_params -- target/dif.sh@46 -- # destroy_subsystem 1 00:25:57.148 11:56:25 nvmf_dif.fio_dif_rand_params -- target/dif.sh@36 -- # local sub_id=1 00:25:57.148 11:56:25 nvmf_dif.fio_dif_rand_params -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:25:57.148 11:56:25 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:57.148 11:56:25 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:25:57.149 11:56:25 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:57.149 11:56:25 nvmf_dif.fio_dif_rand_params -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null1 00:25:57.149 11:56:25 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:57.149 11:56:25 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:25:57.149 11:56:25 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:57.149 11:56:25 nvmf_dif.fio_dif_rand_params -- target/dif.sh@45 -- # for sub in "$@" 00:25:57.149 11:56:25 nvmf_dif.fio_dif_rand_params -- target/dif.sh@46 -- # destroy_subsystem 2 00:25:57.149 11:56:25 nvmf_dif.fio_dif_rand_params -- target/dif.sh@36 -- # local sub_id=2 00:25:57.149 11:56:25 nvmf_dif.fio_dif_rand_params -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode2 00:25:57.149 11:56:25 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:57.149 11:56:25 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:25:57.149 11:56:25 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:57.149 
11:56:25 nvmf_dif.fio_dif_rand_params -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null2 00:25:57.149 11:56:25 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:57.149 11:56:25 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:25:57.149 11:56:25 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:57.149 11:56:25 nvmf_dif.fio_dif_rand_params -- target/dif.sh@115 -- # NULL_DIF=1 00:25:57.149 11:56:25 nvmf_dif.fio_dif_rand_params -- target/dif.sh@115 -- # bs=8k,16k,128k 00:25:57.149 11:56:25 nvmf_dif.fio_dif_rand_params -- target/dif.sh@115 -- # numjobs=2 00:25:57.149 11:56:25 nvmf_dif.fio_dif_rand_params -- target/dif.sh@115 -- # iodepth=8 00:25:57.149 11:56:25 nvmf_dif.fio_dif_rand_params -- target/dif.sh@115 -- # runtime=5 00:25:57.149 11:56:25 nvmf_dif.fio_dif_rand_params -- target/dif.sh@115 -- # files=1 00:25:57.149 11:56:25 nvmf_dif.fio_dif_rand_params -- target/dif.sh@117 -- # create_subsystems 0 1 00:25:57.149 11:56:25 nvmf_dif.fio_dif_rand_params -- target/dif.sh@28 -- # local sub 00:25:57.149 11:56:25 nvmf_dif.fio_dif_rand_params -- target/dif.sh@30 -- # for sub in "$@" 00:25:57.149 11:56:25 nvmf_dif.fio_dif_rand_params -- target/dif.sh@31 -- # create_subsystem 0 00:25:57.149 11:56:25 nvmf_dif.fio_dif_rand_params -- target/dif.sh@18 -- # local sub_id=0 00:25:57.149 11:56:25 nvmf_dif.fio_dif_rand_params -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 1 00:25:57.149 11:56:25 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:57.149 11:56:25 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:25:57.149 bdev_null0 00:25:57.149 11:56:25 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:57.149 11:56:25 nvmf_dif.fio_dif_rand_params -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:25:57.149 11:56:25 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:57.149 11:56:25 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:25:57.149 11:56:25 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:57.149 11:56:25 nvmf_dif.fio_dif_rand_params -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:25:57.149 11:56:25 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:57.149 11:56:25 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:25:57.149 11:56:25 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:57.149 11:56:25 nvmf_dif.fio_dif_rand_params -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.3 -s 4420 00:25:57.149 11:56:25 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:57.149 11:56:25 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:25:57.149 [2024-11-28 11:56:25.456156] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:25:57.149 11:56:25 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:57.149 11:56:25 nvmf_dif.fio_dif_rand_params -- target/dif.sh@30 -- # for sub in "$@" 00:25:57.149 11:56:25 nvmf_dif.fio_dif_rand_params -- target/dif.sh@31 -- # 
create_subsystem 1 00:25:57.149 11:56:25 nvmf_dif.fio_dif_rand_params -- target/dif.sh@18 -- # local sub_id=1 00:25:57.149 11:56:25 nvmf_dif.fio_dif_rand_params -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null1 64 512 --md-size 16 --dif-type 1 00:25:57.149 11:56:25 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:57.149 11:56:25 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:25:57.149 bdev_null1 00:25:57.149 11:56:25 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:57.149 11:56:25 nvmf_dif.fio_dif_rand_params -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 --serial-number 53313233-1 --allow-any-host 00:25:57.149 11:56:25 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:57.149 11:56:25 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:25:57.149 11:56:25 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:57.149 11:56:25 nvmf_dif.fio_dif_rand_params -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 bdev_null1 00:25:57.149 11:56:25 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:57.149 11:56:25 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:25:57.149 11:56:25 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:57.149 11:56:25 nvmf_dif.fio_dif_rand_params -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 00:25:57.149 11:56:25 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:57.149 11:56:25 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:25:57.149 11:56:25 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:57.149 11:56:25 nvmf_dif.fio_dif_rand_params -- target/dif.sh@118 -- # fio /dev/fd/62 00:25:57.149 11:56:25 nvmf_dif.fio_dif_rand_params -- target/dif.sh@118 -- # create_json_sub_conf 0 1 00:25:57.149 11:56:25 nvmf_dif.fio_dif_rand_params -- target/dif.sh@51 -- # gen_nvmf_target_json 0 1 00:25:57.149 11:56:25 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@560 -- # config=() 00:25:57.149 11:56:25 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@560 -- # local subsystem config 00:25:57.149 11:56:25 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:25:57.149 11:56:25 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:25:57.149 { 00:25:57.149 "params": { 00:25:57.149 "name": "Nvme$subsystem", 00:25:57.149 "trtype": "$TEST_TRANSPORT", 00:25:57.149 "traddr": "$NVMF_FIRST_TARGET_IP", 00:25:57.149 "adrfam": "ipv4", 00:25:57.149 "trsvcid": "$NVMF_PORT", 00:25:57.149 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:25:57.149 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:25:57.149 "hdgst": ${hdgst:-false}, 00:25:57.149 "ddgst": ${ddgst:-false} 00:25:57.149 }, 00:25:57.149 "method": "bdev_nvme_attach_controller" 00:25:57.149 } 00:25:57.149 EOF 00:25:57.149 )") 00:25:57.149 11:56:25 nvmf_dif.fio_dif_rand_params -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:25:57.149 11:56:25 nvmf_dif.fio_dif_rand_params -- target/dif.sh@82 -- # gen_fio_conf 00:25:57.149 11:56:25 nvmf_dif.fio_dif_rand_params -- target/dif.sh@54 -- # local file 00:25:57.149 
11:56:25 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1360 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:25:57.149 11:56:25 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio 00:25:57.149 11:56:25 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:25:57.149 11:56:25 nvmf_dif.fio_dif_rand_params -- target/dif.sh@56 -- # cat 00:25:57.149 11:56:25 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1343 -- # local sanitizers 00:25:57.149 11:56:25 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1344 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:25:57.149 11:56:25 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # shift 00:25:57.149 11:56:25 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1347 -- # local asan_lib= 00:25:57.149 11:56:25 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:25:57.149 11:56:25 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@582 -- # cat 00:25:57.149 11:56:25 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file = 1 )) 00:25:57.150 11:56:25 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:25:57.150 11:56:25 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file <= files )) 00:25:57.150 11:56:25 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:25:57.150 11:56:25 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # grep libasan 00:25:57.150 11:56:25 nvmf_dif.fio_dif_rand_params -- target/dif.sh@73 -- # cat 00:25:57.150 11:56:25 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:25:57.150 11:56:25 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:25:57.150 { 00:25:57.150 "params": { 00:25:57.150 "name": "Nvme$subsystem", 00:25:57.150 "trtype": "$TEST_TRANSPORT", 00:25:57.150 "traddr": "$NVMF_FIRST_TARGET_IP", 00:25:57.150 "adrfam": "ipv4", 00:25:57.150 "trsvcid": "$NVMF_PORT", 00:25:57.150 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:25:57.150 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:25:57.150 "hdgst": ${hdgst:-false}, 00:25:57.150 "ddgst": ${ddgst:-false} 00:25:57.150 }, 00:25:57.150 "method": "bdev_nvme_attach_controller" 00:25:57.150 } 00:25:57.150 EOF 00:25:57.150 )") 00:25:57.150 11:56:25 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file++ )) 00:25:57.150 11:56:25 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@582 -- # cat 00:25:57.150 11:56:25 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file <= files )) 00:25:57.150 11:56:25 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@584 -- # jq . 
00:25:57.150 11:56:25 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@585 -- # IFS=, 00:25:57.150 11:56:25 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:25:57.150 "params": { 00:25:57.150 "name": "Nvme0", 00:25:57.150 "trtype": "tcp", 00:25:57.150 "traddr": "10.0.0.3", 00:25:57.150 "adrfam": "ipv4", 00:25:57.150 "trsvcid": "4420", 00:25:57.150 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:25:57.150 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:25:57.150 "hdgst": false, 00:25:57.150 "ddgst": false 00:25:57.150 }, 00:25:57.150 "method": "bdev_nvme_attach_controller" 00:25:57.150 },{ 00:25:57.150 "params": { 00:25:57.150 "name": "Nvme1", 00:25:57.150 "trtype": "tcp", 00:25:57.150 "traddr": "10.0.0.3", 00:25:57.150 "adrfam": "ipv4", 00:25:57.150 "trsvcid": "4420", 00:25:57.150 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:25:57.150 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:25:57.150 "hdgst": false, 00:25:57.150 "ddgst": false 00:25:57.150 }, 00:25:57.150 "method": "bdev_nvme_attach_controller" 00:25:57.150 }' 00:25:57.150 11:56:25 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # asan_lib= 00:25:57.150 11:56:25 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1350 -- # [[ -n '' ]] 00:25:57.150 11:56:25 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:25:57.150 11:56:25 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:25:57.150 11:56:25 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:25:57.150 11:56:25 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # grep libclang_rt.asan 00:25:57.150 11:56:25 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # asan_lib= 00:25:57.150 11:56:25 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1350 -- # [[ -n '' ]] 00:25:57.150 11:56:25 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1356 -- # LD_PRELOAD=' /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev' 00:25:57.150 11:56:25 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:25:57.150 filename0: (g=0): rw=randread, bs=(R) 8192B-8192B, (W) 16.0KiB-16.0KiB, (T) 128KiB-128KiB, ioengine=spdk_bdev, iodepth=8 00:25:57.150 ... 00:25:57.150 filename1: (g=0): rw=randread, bs=(R) 8192B-8192B, (W) 16.0KiB-16.0KiB, (T) 128KiB-128KiB, ioengine=spdk_bdev, iodepth=8 00:25:57.150 ... 
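The trace above boils down to one invocation pattern: the generated bdev_nvme_attach_controller JSON is handed to fio's SPDK bdev plugin, and if that plugin links against ASan the sanitizer runtime is preloaded ahead of it (empty in this run, hence the bare [[ -n '' ]] checks). A condensed, illustrative restatement follows; it is not the literal autotest_common.sh code, though the paths match the ones in the trace.

# Locate the sanitizer runtime the plugin links against, if any.
plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev
asan_lib=$(ldd "$plugin" | grep libasan | awk '{print $3}')

# fio loads the spdk_bdev ioengine from the preloaded plugin and takes the
# NVMe-oF attach configuration as SPDK JSON config. The harness passes both
# the JSON and the fio job file over /dev/fd/62 and /dev/fd/61 via process
# substitution; plain file paths are shown here for readability.
LD_PRELOAD="$asan_lib $plugin" /usr/src/fio/fio \
    --ioengine=spdk_bdev --spdk_json_conf ./bdev.json ./job.fio
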
00:25:57.150 fio-3.35 00:25:57.150 Starting 4 threads 00:26:01.346 00:26:01.346 filename0: (groupid=0, jobs=1): err= 0: pid=100568: Thu Nov 28 11:56:31 2024 00:26:01.346 read: IOPS=2231, BW=17.4MiB/s (18.3MB/s)(87.2MiB/5002msec) 00:26:01.346 slat (usec): min=3, max=104, avg=22.68, stdev= 8.93 00:26:01.346 clat (usec): min=797, max=6775, avg=3515.29, stdev=830.13 00:26:01.346 lat (usec): min=805, max=6784, avg=3537.97, stdev=830.02 00:26:01.346 clat percentiles (usec): 00:26:01.346 | 1.00th=[ 1450], 5.00th=[ 2008], 10.00th=[ 2278], 20.00th=[ 2671], 00:26:01.346 | 30.00th=[ 3195], 40.00th=[ 3458], 50.00th=[ 3687], 60.00th=[ 3884], 00:26:01.346 | 70.00th=[ 4047], 80.00th=[ 4228], 90.00th=[ 4424], 95.00th=[ 4621], 00:26:01.346 | 99.00th=[ 4948], 99.50th=[ 5080], 99.90th=[ 5800], 99.95th=[ 6259], 00:26:01.346 | 99.99th=[ 6521] 00:26:01.346 bw ( KiB/s): min=15472, max=20080, per=23.67%, avg=17904.00, stdev=1629.98, samples=9 00:26:01.346 iops : min= 1934, max= 2510, avg=2238.00, stdev=203.75, samples=9 00:26:01.346 lat (usec) : 1000=0.08% 00:26:01.346 lat (msec) : 2=4.85%, 4=62.81%, 10=32.26% 00:26:01.346 cpu : usr=94.64%, sys=4.54%, ctx=8, majf=0, minf=9 00:26:01.346 IO depths : 1=1.6%, 2=9.9%, 4=58.4%, 8=30.1%, 16=0.0%, 32=0.0%, >=64=0.0% 00:26:01.346 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:01.346 complete : 0=0.0%, 4=96.1%, 8=3.9%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:01.346 issued rwts: total=11161,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:26:01.346 latency : target=0, window=0, percentile=100.00%, depth=8 00:26:01.346 filename0: (groupid=0, jobs=1): err= 0: pid=100569: Thu Nov 28 11:56:31 2024 00:26:01.346 read: IOPS=2487, BW=19.4MiB/s (20.4MB/s)(97.2MiB/5002msec) 00:26:01.346 slat (nsec): min=6352, max=84721, avg=14240.64, stdev=9469.88 00:26:01.346 clat (usec): min=699, max=6428, avg=3176.27, stdev=906.52 00:26:01.346 lat (usec): min=710, max=6462, avg=3190.51, stdev=907.26 00:26:01.346 clat percentiles (usec): 00:26:01.346 | 1.00th=[ 1004], 5.00th=[ 1582], 10.00th=[ 1893], 20.00th=[ 2311], 00:26:01.346 | 30.00th=[ 2638], 40.00th=[ 3064], 50.00th=[ 3294], 60.00th=[ 3589], 00:26:01.346 | 70.00th=[ 3785], 80.00th=[ 4015], 90.00th=[ 4228], 95.00th=[ 4424], 00:26:01.346 | 99.00th=[ 4752], 99.50th=[ 5014], 99.90th=[ 5276], 99.95th=[ 5473], 00:26:01.346 | 99.99th=[ 6390] 00:26:01.346 bw ( KiB/s): min=16944, max=21744, per=26.27%, avg=19864.89, stdev=1413.49, samples=9 00:26:01.346 iops : min= 2118, max= 2718, avg=2483.11, stdev=176.69, samples=9 00:26:01.346 lat (usec) : 750=0.02%, 1000=0.96% 00:26:01.346 lat (msec) : 2=11.52%, 4=67.10%, 10=20.40% 00:26:01.346 cpu : usr=93.38%, sys=5.56%, ctx=14, majf=0, minf=0 00:26:01.346 IO depths : 1=0.5%, 2=4.7%, 4=61.8%, 8=32.9%, 16=0.0%, 32=0.0%, >=64=0.0% 00:26:01.346 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:01.346 complete : 0=0.0%, 4=98.2%, 8=1.8%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:01.346 issued rwts: total=12442,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:26:01.346 latency : target=0, window=0, percentile=100.00%, depth=8 00:26:01.346 filename1: (groupid=0, jobs=1): err= 0: pid=100570: Thu Nov 28 11:56:31 2024 00:26:01.346 read: IOPS=2550, BW=19.9MiB/s (20.9MB/s)(99.7MiB/5002msec) 00:26:01.346 slat (usec): min=3, max=166, avg=15.82, stdev=10.54 00:26:01.346 clat (usec): min=402, max=6433, avg=3094.15, stdev=938.20 00:26:01.346 lat (usec): min=413, max=6470, avg=3109.98, stdev=939.20 00:26:01.346 clat percentiles (usec): 00:26:01.346 | 1.00th=[ 971], 5.00th=[ 1287], 
10.00th=[ 1811], 20.00th=[ 2212], 00:26:01.346 | 30.00th=[ 2606], 40.00th=[ 2966], 50.00th=[ 3228], 60.00th=[ 3425], 00:26:01.346 | 70.00th=[ 3720], 80.00th=[ 3982], 90.00th=[ 4293], 95.00th=[ 4424], 00:26:01.346 | 99.00th=[ 4752], 99.50th=[ 4948], 99.90th=[ 5538], 99.95th=[ 5800], 00:26:01.346 | 99.99th=[ 6390] 00:26:01.346 bw ( KiB/s): min=18816, max=22528, per=26.91%, avg=20349.11, stdev=1330.42, samples=9 00:26:01.346 iops : min= 2352, max= 2816, avg=2543.56, stdev=166.38, samples=9 00:26:01.346 lat (usec) : 500=0.01%, 750=0.03%, 1000=1.24% 00:26:01.346 lat (msec) : 2=13.43%, 4=66.49%, 10=18.80% 00:26:01.346 cpu : usr=94.20%, sys=4.76%, ctx=24, majf=0, minf=0 00:26:01.346 IO depths : 1=0.5%, 2=2.4%, 4=62.9%, 8=34.2%, 16=0.0%, 32=0.0%, >=64=0.0% 00:26:01.346 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:01.346 complete : 0=0.0%, 4=99.1%, 8=0.9%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:01.346 issued rwts: total=12759,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:26:01.346 latency : target=0, window=0, percentile=100.00%, depth=8 00:26:01.346 filename1: (groupid=0, jobs=1): err= 0: pid=100571: Thu Nov 28 11:56:31 2024 00:26:01.346 read: IOPS=2183, BW=17.1MiB/s (17.9MB/s)(85.3MiB/5001msec) 00:26:01.346 slat (nsec): min=3583, max=81958, avg=21395.24, stdev=10712.56 00:26:01.346 clat (usec): min=384, max=6468, avg=3596.61, stdev=851.11 00:26:01.346 lat (usec): min=396, max=6487, avg=3618.00, stdev=850.68 00:26:01.346 clat percentiles (usec): 00:26:01.346 | 1.00th=[ 1074], 5.00th=[ 2008], 10.00th=[ 2343], 20.00th=[ 2900], 00:26:01.346 | 30.00th=[ 3294], 40.00th=[ 3523], 50.00th=[ 3752], 60.00th=[ 3949], 00:26:01.346 | 70.00th=[ 4146], 80.00th=[ 4293], 90.00th=[ 4490], 95.00th=[ 4686], 00:26:01.346 | 99.00th=[ 5276], 99.50th=[ 5473], 99.90th=[ 5800], 99.95th=[ 5800], 00:26:01.346 | 99.99th=[ 6259] 00:26:01.346 bw ( KiB/s): min=14976, max=19088, per=23.11%, avg=17477.33, stdev=1433.34, samples=9 00:26:01.346 iops : min= 1872, max= 2386, avg=2184.67, stdev=179.17, samples=9 00:26:01.346 lat (usec) : 500=0.01%, 750=0.02%, 1000=0.39% 00:26:01.346 lat (msec) : 2=4.51%, 4=57.16%, 10=37.92% 00:26:01.346 cpu : usr=93.96%, sys=5.14%, ctx=63, majf=0, minf=9 00:26:01.346 IO depths : 1=1.3%, 2=12.1%, 4=57.2%, 8=29.3%, 16=0.0%, 32=0.0%, >=64=0.0% 00:26:01.346 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:01.346 complete : 0=0.0%, 4=95.3%, 8=4.7%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:01.346 issued rwts: total=10921,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:26:01.346 latency : target=0, window=0, percentile=100.00%, depth=8 00:26:01.346 00:26:01.346 Run status group 0 (all jobs): 00:26:01.346 READ: bw=73.8MiB/s (77.4MB/s), 17.1MiB/s-19.9MiB/s (17.9MB/s-20.9MB/s), io=369MiB (387MB), run=5001-5002msec 00:26:01.607 11:56:31 nvmf_dif.fio_dif_rand_params -- target/dif.sh@119 -- # destroy_subsystems 0 1 00:26:01.607 11:56:31 nvmf_dif.fio_dif_rand_params -- target/dif.sh@43 -- # local sub 00:26:01.607 11:56:31 nvmf_dif.fio_dif_rand_params -- target/dif.sh@45 -- # for sub in "$@" 00:26:01.607 11:56:31 nvmf_dif.fio_dif_rand_params -- target/dif.sh@46 -- # destroy_subsystem 0 00:26:01.607 11:56:31 nvmf_dif.fio_dif_rand_params -- target/dif.sh@36 -- # local sub_id=0 00:26:01.607 11:56:31 nvmf_dif.fio_dif_rand_params -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:26:01.607 11:56:31 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:01.607 11:56:31 nvmf_dif.fio_dif_rand_params -- 
common/autotest_common.sh@10 -- # set +x 00:26:01.607 11:56:31 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:01.607 11:56:31 nvmf_dif.fio_dif_rand_params -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:26:01.607 11:56:31 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:01.607 11:56:31 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:26:01.607 11:56:31 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:01.607 11:56:31 nvmf_dif.fio_dif_rand_params -- target/dif.sh@45 -- # for sub in "$@" 00:26:01.607 11:56:31 nvmf_dif.fio_dif_rand_params -- target/dif.sh@46 -- # destroy_subsystem 1 00:26:01.607 11:56:31 nvmf_dif.fio_dif_rand_params -- target/dif.sh@36 -- # local sub_id=1 00:26:01.607 11:56:31 nvmf_dif.fio_dif_rand_params -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:26:01.607 11:56:31 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:01.607 11:56:31 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:26:01.607 11:56:31 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:01.607 11:56:31 nvmf_dif.fio_dif_rand_params -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null1 00:26:01.607 11:56:31 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:01.607 11:56:31 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:26:01.607 ************************************ 00:26:01.607 END TEST fio_dif_rand_params 00:26:01.607 ************************************ 00:26:01.607 11:56:31 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:01.607 00:26:01.607 real 0m23.558s 00:26:01.607 user 2m5.728s 00:26:01.607 sys 0m6.136s 00:26:01.607 11:56:31 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1130 -- # xtrace_disable 00:26:01.607 11:56:31 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:26:01.607 11:56:31 nvmf_dif -- target/dif.sh@144 -- # run_test fio_dif_digest fio_dif_digest 00:26:01.607 11:56:31 nvmf_dif -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:26:01.607 11:56:31 nvmf_dif -- common/autotest_common.sh@1111 -- # xtrace_disable 00:26:01.607 11:56:31 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:26:01.607 ************************************ 00:26:01.607 START TEST fio_dif_digest 00:26:01.607 ************************************ 00:26:01.607 11:56:31 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1129 -- # fio_dif_digest 00:26:01.607 11:56:31 nvmf_dif.fio_dif_digest -- target/dif.sh@123 -- # local NULL_DIF 00:26:01.607 11:56:31 nvmf_dif.fio_dif_digest -- target/dif.sh@124 -- # local bs numjobs runtime iodepth files 00:26:01.607 11:56:31 nvmf_dif.fio_dif_digest -- target/dif.sh@125 -- # local hdgst ddgst 00:26:01.607 11:56:31 nvmf_dif.fio_dif_digest -- target/dif.sh@127 -- # NULL_DIF=3 00:26:01.607 11:56:31 nvmf_dif.fio_dif_digest -- target/dif.sh@127 -- # bs=128k,128k,128k 00:26:01.607 11:56:31 nvmf_dif.fio_dif_digest -- target/dif.sh@127 -- # numjobs=3 00:26:01.607 11:56:31 nvmf_dif.fio_dif_digest -- target/dif.sh@127 -- # iodepth=3 00:26:01.607 11:56:31 nvmf_dif.fio_dif_digest -- target/dif.sh@127 -- # runtime=10 00:26:01.607 11:56:31 nvmf_dif.fio_dif_digest -- target/dif.sh@128 -- # hdgst=true 00:26:01.607 11:56:31 nvmf_dif.fio_dif_digest -- 
target/dif.sh@128 -- # ddgst=true 00:26:01.607 11:56:31 nvmf_dif.fio_dif_digest -- target/dif.sh@130 -- # create_subsystems 0 00:26:01.607 11:56:31 nvmf_dif.fio_dif_digest -- target/dif.sh@28 -- # local sub 00:26:01.607 11:56:31 nvmf_dif.fio_dif_digest -- target/dif.sh@30 -- # for sub in "$@" 00:26:01.607 11:56:31 nvmf_dif.fio_dif_digest -- target/dif.sh@31 -- # create_subsystem 0 00:26:01.607 11:56:31 nvmf_dif.fio_dif_digest -- target/dif.sh@18 -- # local sub_id=0 00:26:01.607 11:56:31 nvmf_dif.fio_dif_digest -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 3 00:26:01.607 11:56:31 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:01.607 11:56:31 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:26:01.607 bdev_null0 00:26:01.607 11:56:31 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:01.607 11:56:31 nvmf_dif.fio_dif_digest -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:26:01.607 11:56:31 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:01.607 11:56:31 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:26:01.607 11:56:31 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:01.607 11:56:31 nvmf_dif.fio_dif_digest -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:26:01.607 11:56:31 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:01.607 11:56:31 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:26:01.607 11:56:31 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:01.607 11:56:31 nvmf_dif.fio_dif_digest -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.3 -s 4420 00:26:01.607 11:56:31 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:01.607 11:56:31 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:26:01.607 [2024-11-28 11:56:31.647497] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:26:01.607 11:56:31 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:01.607 11:56:31 nvmf_dif.fio_dif_digest -- target/dif.sh@131 -- # fio /dev/fd/62 00:26:01.607 11:56:31 nvmf_dif.fio_dif_digest -- target/dif.sh@131 -- # create_json_sub_conf 0 00:26:01.607 11:56:31 nvmf_dif.fio_dif_digest -- target/dif.sh@51 -- # gen_nvmf_target_json 0 00:26:01.607 11:56:31 nvmf_dif.fio_dif_digest -- nvmf/common.sh@560 -- # config=() 00:26:01.607 11:56:31 nvmf_dif.fio_dif_digest -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:26:01.607 11:56:31 nvmf_dif.fio_dif_digest -- target/dif.sh@82 -- # gen_fio_conf 00:26:01.607 11:56:31 nvmf_dif.fio_dif_digest -- nvmf/common.sh@560 -- # local subsystem config 00:26:01.607 11:56:31 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1360 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:26:01.607 11:56:31 nvmf_dif.fio_dif_digest -- target/dif.sh@54 -- # local file 00:26:01.607 11:56:31 nvmf_dif.fio_dif_digest -- target/dif.sh@56 -- # cat 00:26:01.607 11:56:31 nvmf_dif.fio_dif_digest -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:26:01.607 11:56:31 
nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio 00:26:01.607 11:56:31 nvmf_dif.fio_dif_digest -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:26:01.607 { 00:26:01.607 "params": { 00:26:01.607 "name": "Nvme$subsystem", 00:26:01.607 "trtype": "$TEST_TRANSPORT", 00:26:01.607 "traddr": "$NVMF_FIRST_TARGET_IP", 00:26:01.607 "adrfam": "ipv4", 00:26:01.608 "trsvcid": "$NVMF_PORT", 00:26:01.608 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:26:01.608 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:26:01.608 "hdgst": ${hdgst:-false}, 00:26:01.608 "ddgst": ${ddgst:-false} 00:26:01.608 }, 00:26:01.608 "method": "bdev_nvme_attach_controller" 00:26:01.608 } 00:26:01.608 EOF 00:26:01.608 )") 00:26:01.608 11:56:31 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:26:01.608 11:56:31 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1343 -- # local sanitizers 00:26:01.608 11:56:31 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1344 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:26:01.608 11:56:31 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1345 -- # shift 00:26:01.608 11:56:31 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1347 -- # local asan_lib= 00:26:01.608 11:56:31 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:26:01.608 11:56:31 nvmf_dif.fio_dif_digest -- target/dif.sh@72 -- # (( file = 1 )) 00:26:01.608 11:56:31 nvmf_dif.fio_dif_digest -- target/dif.sh@72 -- # (( file <= files )) 00:26:01.608 11:56:31 nvmf_dif.fio_dif_digest -- nvmf/common.sh@582 -- # cat 00:26:01.608 11:56:31 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1349 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:26:01.608 11:56:31 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1349 -- # grep libasan 00:26:01.608 11:56:31 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:26:01.608 11:56:31 nvmf_dif.fio_dif_digest -- nvmf/common.sh@584 -- # jq . 
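Stripped of the rpc_cmd/xtrace plumbing, the target-side setup traced above for this digest run is: a 64 MB null bdev with 512-byte blocks, 16 bytes of metadata and DIF type 3, exported through subsystem nqn.2016-06.io.spdk:cnode0 on a TCP listener at 10.0.0.3:4420. Roughly, issued via rpc.py for illustration (the harness goes through its own rpc_cmd helper, and the TCP transport itself was created earlier in the test, outside this excerpt):

# Target-side sequence equivalent to the rpc_cmd calls traced above.
scripts/rpc.py bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 3
scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 \
    --serial-number 53313233-0 --allow-any-host
scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0
scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 \
    -t tcp -a 10.0.0.3 -s 4420
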
00:26:01.608 11:56:31 nvmf_dif.fio_dif_digest -- nvmf/common.sh@585 -- # IFS=, 00:26:01.608 11:56:31 nvmf_dif.fio_dif_digest -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:26:01.608 "params": { 00:26:01.608 "name": "Nvme0", 00:26:01.608 "trtype": "tcp", 00:26:01.608 "traddr": "10.0.0.3", 00:26:01.608 "adrfam": "ipv4", 00:26:01.608 "trsvcid": "4420", 00:26:01.608 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:26:01.608 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:26:01.608 "hdgst": true, 00:26:01.608 "ddgst": true 00:26:01.608 }, 00:26:01.608 "method": "bdev_nvme_attach_controller" 00:26:01.608 }' 00:26:01.608 11:56:31 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1349 -- # asan_lib= 00:26:01.608 11:56:31 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1350 -- # [[ -n '' ]] 00:26:01.608 11:56:31 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:26:01.608 11:56:31 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1349 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:26:01.608 11:56:31 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1349 -- # grep libclang_rt.asan 00:26:01.608 11:56:31 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:26:01.608 11:56:31 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1349 -- # asan_lib= 00:26:01.608 11:56:31 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1350 -- # [[ -n '' ]] 00:26:01.608 11:56:31 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1356 -- # LD_PRELOAD=' /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev' 00:26:01.608 11:56:31 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:26:01.867 filename0: (g=0): rw=randread, bs=(R) 128KiB-128KiB, (W) 128KiB-128KiB, (T) 128KiB-128KiB, ioengine=spdk_bdev, iodepth=3 00:26:01.868 ... 
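On the initiator side the only functional change from the earlier rand_params runs is "hdgst": true and "ddgst": true in the bdev_nvme_attach_controller parameters, so every NVMe/TCP PDU carries header and data digests while fio reads through the namespace. The workload that follows (128 KiB random reads, three jobs, queue depth 3, 10 seconds) could be written as a standalone job file roughly as below; this is illustrative, since the harness generates the equivalent on the fly and passes it over /dev/fd/61, and Nvme0n1 is assumed to be the namespace bdev exposed by controller Nvme0.

# Illustrative fio job file matching the digest-run parameters traced above.
cat > dif_digest.fio <<'EOF'
[global]
ioengine=spdk_bdev
thread=1
rw=randread
bs=128k
iodepth=3
numjobs=3
runtime=10
time_based=1

[filename0]
filename=Nvme0n1
EOF
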
00:26:01.868 fio-3.35 00:26:01.868 Starting 3 threads 00:26:14.086 00:26:14.086 filename0: (groupid=0, jobs=1): err= 0: pid=100677: Thu Nov 28 11:56:42 2024 00:26:14.086 read: IOPS=267, BW=33.5MiB/s (35.1MB/s)(335MiB/10004msec) 00:26:14.086 slat (usec): min=6, max=119, avg=25.05, stdev=13.55 00:26:14.086 clat (usec): min=10263, max=12827, avg=11146.87, stdev=391.43 00:26:14.086 lat (usec): min=10275, max=12860, avg=11171.92, stdev=392.53 00:26:14.086 clat percentiles (usec): 00:26:14.086 | 1.00th=[10814], 5.00th=[10814], 10.00th=[10814], 20.00th=[10945], 00:26:14.086 | 30.00th=[10945], 40.00th=[10945], 50.00th=[10945], 60.00th=[11076], 00:26:14.086 | 70.00th=[11076], 80.00th=[11469], 90.00th=[11731], 95.00th=[11994], 00:26:14.086 | 99.00th=[12518], 99.50th=[12649], 99.90th=[12780], 99.95th=[12780], 00:26:14.086 | 99.99th=[12780] 00:26:14.086 bw ( KiB/s): min=33024, max=35328, per=33.38%, avg=34317.47, stdev=629.81, samples=19 00:26:14.086 iops : min= 258, max= 276, avg=268.11, stdev= 4.92, samples=19 00:26:14.086 lat (msec) : 20=100.00% 00:26:14.086 cpu : usr=94.10%, sys=5.07%, ctx=53, majf=0, minf=0 00:26:14.086 IO depths : 1=33.3%, 2=66.7%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:26:14.086 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:14.086 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:14.086 issued rwts: total=2679,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:26:14.086 latency : target=0, window=0, percentile=100.00%, depth=3 00:26:14.086 filename0: (groupid=0, jobs=1): err= 0: pid=100678: Thu Nov 28 11:56:42 2024 00:26:14.086 read: IOPS=267, BW=33.5MiB/s (35.1MB/s)(335MiB/10004msec) 00:26:14.086 slat (usec): min=6, max=109, avg=25.15, stdev=13.31 00:26:14.086 clat (usec): min=10255, max=12827, avg=11144.60, stdev=391.23 00:26:14.086 lat (usec): min=10267, max=12861, avg=11169.75, stdev=392.07 00:26:14.086 clat percentiles (usec): 00:26:14.086 | 1.00th=[10814], 5.00th=[10814], 10.00th=[10814], 20.00th=[10945], 00:26:14.086 | 30.00th=[10945], 40.00th=[10945], 50.00th=[10945], 60.00th=[11076], 00:26:14.086 | 70.00th=[11076], 80.00th=[11469], 90.00th=[11731], 95.00th=[11994], 00:26:14.086 | 99.00th=[12518], 99.50th=[12649], 99.90th=[12780], 99.95th=[12780], 00:26:14.086 | 99.99th=[12780] 00:26:14.086 bw ( KiB/s): min=33024, max=35328, per=33.38%, avg=34317.47, stdev=629.81, samples=19 00:26:14.086 iops : min= 258, max= 276, avg=268.11, stdev= 4.92, samples=19 00:26:14.086 lat (msec) : 20=100.00% 00:26:14.086 cpu : usr=95.05%, sys=4.41%, ctx=7, majf=0, minf=0 00:26:14.086 IO depths : 1=33.3%, 2=66.7%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:26:14.086 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:14.086 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:14.086 issued rwts: total=2679,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:26:14.086 latency : target=0, window=0, percentile=100.00%, depth=3 00:26:14.086 filename0: (groupid=0, jobs=1): err= 0: pid=100679: Thu Nov 28 11:56:42 2024 00:26:14.086 read: IOPS=267, BW=33.5MiB/s (35.1MB/s)(335MiB/10010msec) 00:26:14.086 slat (usec): min=6, max=104, avg=19.32, stdev=10.19 00:26:14.086 clat (usec): min=5731, max=12848, avg=11154.24, stdev=427.03 00:26:14.086 lat (usec): min=5741, max=12882, avg=11173.56, stdev=427.75 00:26:14.086 clat percentiles (usec): 00:26:14.086 | 1.00th=[10814], 5.00th=[10814], 10.00th=[10814], 20.00th=[10945], 00:26:14.086 | 30.00th=[10945], 40.00th=[10945], 50.00th=[10945], 60.00th=[11076], 
00:26:14.086 | 70.00th=[11207], 80.00th=[11469], 90.00th=[11731], 95.00th=[11994], 00:26:14.086 | 99.00th=[12518], 99.50th=[12649], 99.90th=[12780], 99.95th=[12780], 00:26:14.086 | 99.99th=[12911] 00:26:14.087 bw ( KiB/s): min=33792, max=35328, per=33.41%, avg=34350.47, stdev=487.49, samples=19 00:26:14.087 iops : min= 264, max= 276, avg=268.32, stdev= 3.73, samples=19 00:26:14.087 lat (msec) : 10=0.11%, 20=99.89% 00:26:14.087 cpu : usr=95.85%, sys=3.67%, ctx=13, majf=0, minf=0 00:26:14.087 IO depths : 1=33.3%, 2=66.7%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:26:14.087 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:14.087 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:14.087 issued rwts: total=2682,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:26:14.087 latency : target=0, window=0, percentile=100.00%, depth=3 00:26:14.087 00:26:14.087 Run status group 0 (all jobs): 00:26:14.087 READ: bw=100MiB/s (105MB/s), 33.5MiB/s-33.5MiB/s (35.1MB/s-35.1MB/s), io=1005MiB (1054MB), run=10004-10010msec 00:26:14.087 11:56:42 nvmf_dif.fio_dif_digest -- target/dif.sh@132 -- # destroy_subsystems 0 00:26:14.087 11:56:42 nvmf_dif.fio_dif_digest -- target/dif.sh@43 -- # local sub 00:26:14.087 11:56:42 nvmf_dif.fio_dif_digest -- target/dif.sh@45 -- # for sub in "$@" 00:26:14.087 11:56:42 nvmf_dif.fio_dif_digest -- target/dif.sh@46 -- # destroy_subsystem 0 00:26:14.087 11:56:42 nvmf_dif.fio_dif_digest -- target/dif.sh@36 -- # local sub_id=0 00:26:14.087 11:56:42 nvmf_dif.fio_dif_digest -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:26:14.087 11:56:42 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:14.087 11:56:42 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:26:14.087 11:56:42 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:14.087 11:56:42 nvmf_dif.fio_dif_digest -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:26:14.087 11:56:42 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:14.087 11:56:42 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:26:14.087 ************************************ 00:26:14.087 END TEST fio_dif_digest 00:26:14.087 ************************************ 00:26:14.087 11:56:42 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:14.087 00:26:14.087 real 0m11.009s 00:26:14.087 user 0m29.170s 00:26:14.087 sys 0m1.595s 00:26:14.087 11:56:42 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1130 -- # xtrace_disable 00:26:14.087 11:56:42 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:26:14.087 11:56:42 nvmf_dif -- target/dif.sh@146 -- # trap - SIGINT SIGTERM EXIT 00:26:14.087 11:56:42 nvmf_dif -- target/dif.sh@147 -- # nvmftestfini 00:26:14.087 11:56:42 nvmf_dif -- nvmf/common.sh@516 -- # nvmfcleanup 00:26:14.087 11:56:42 nvmf_dif -- nvmf/common.sh@121 -- # sync 00:26:14.087 11:56:42 nvmf_dif -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:26:14.087 11:56:42 nvmf_dif -- nvmf/common.sh@124 -- # set +e 00:26:14.087 11:56:42 nvmf_dif -- nvmf/common.sh@125 -- # for i in {1..20} 00:26:14.087 11:56:42 nvmf_dif -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:26:14.087 rmmod nvme_tcp 00:26:14.087 rmmod nvme_fabrics 00:26:14.087 rmmod nvme_keyring 00:26:14.087 11:56:42 nvmf_dif -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:26:14.087 11:56:42 nvmf_dif -- nvmf/common.sh@128 -- 
# set -e 00:26:14.087 11:56:42 nvmf_dif -- nvmf/common.sh@129 -- # return 0 00:26:14.087 11:56:42 nvmf_dif -- nvmf/common.sh@517 -- # '[' -n 99925 ']' 00:26:14.087 11:56:42 nvmf_dif -- nvmf/common.sh@518 -- # killprocess 99925 00:26:14.087 11:56:42 nvmf_dif -- common/autotest_common.sh@954 -- # '[' -z 99925 ']' 00:26:14.087 11:56:42 nvmf_dif -- common/autotest_common.sh@958 -- # kill -0 99925 00:26:14.087 11:56:42 nvmf_dif -- common/autotest_common.sh@959 -- # uname 00:26:14.087 11:56:42 nvmf_dif -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:26:14.087 11:56:42 nvmf_dif -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 99925 00:26:14.087 killing process with pid 99925 00:26:14.087 11:56:42 nvmf_dif -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:26:14.087 11:56:42 nvmf_dif -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:26:14.087 11:56:42 nvmf_dif -- common/autotest_common.sh@972 -- # echo 'killing process with pid 99925' 00:26:14.087 11:56:42 nvmf_dif -- common/autotest_common.sh@973 -- # kill 99925 00:26:14.087 11:56:42 nvmf_dif -- common/autotest_common.sh@978 -- # wait 99925 00:26:14.087 11:56:43 nvmf_dif -- nvmf/common.sh@520 -- # '[' iso == iso ']' 00:26:14.087 11:56:43 nvmf_dif -- nvmf/common.sh@521 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:26:14.087 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:26:14.087 Waiting for block devices as requested 00:26:14.087 0000:00:11.0 (1b36 0010): uio_pci_generic -> nvme 00:26:14.087 0000:00:10.0 (1b36 0010): uio_pci_generic -> nvme 00:26:14.087 11:56:43 nvmf_dif -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:26:14.087 11:56:43 nvmf_dif -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:26:14.087 11:56:43 nvmf_dif -- nvmf/common.sh@297 -- # iptr 00:26:14.087 11:56:43 nvmf_dif -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:26:14.087 11:56:43 nvmf_dif -- nvmf/common.sh@791 -- # iptables-save 00:26:14.087 11:56:43 nvmf_dif -- nvmf/common.sh@791 -- # iptables-restore 00:26:14.087 11:56:43 nvmf_dif -- nvmf/common.sh@298 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:26:14.087 11:56:43 nvmf_dif -- nvmf/common.sh@299 -- # nvmf_veth_fini 00:26:14.087 11:56:43 nvmf_dif -- nvmf/common.sh@233 -- # ip link set nvmf_init_br nomaster 00:26:14.087 11:56:43 nvmf_dif -- nvmf/common.sh@234 -- # ip link set nvmf_init_br2 nomaster 00:26:14.087 11:56:43 nvmf_dif -- nvmf/common.sh@235 -- # ip link set nvmf_tgt_br nomaster 00:26:14.087 11:56:43 nvmf_dif -- nvmf/common.sh@236 -- # ip link set nvmf_tgt_br2 nomaster 00:26:14.087 11:56:43 nvmf_dif -- nvmf/common.sh@237 -- # ip link set nvmf_init_br down 00:26:14.087 11:56:43 nvmf_dif -- nvmf/common.sh@238 -- # ip link set nvmf_init_br2 down 00:26:14.087 11:56:43 nvmf_dif -- nvmf/common.sh@239 -- # ip link set nvmf_tgt_br down 00:26:14.087 11:56:43 nvmf_dif -- nvmf/common.sh@240 -- # ip link set nvmf_tgt_br2 down 00:26:14.087 11:56:43 nvmf_dif -- nvmf/common.sh@241 -- # ip link delete nvmf_br type bridge 00:26:14.087 11:56:43 nvmf_dif -- nvmf/common.sh@242 -- # ip link delete nvmf_init_if 00:26:14.087 11:56:43 nvmf_dif -- nvmf/common.sh@243 -- # ip link delete nvmf_init_if2 00:26:14.087 11:56:43 nvmf_dif -- nvmf/common.sh@244 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:26:14.087 11:56:43 nvmf_dif -- nvmf/common.sh@245 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:26:14.087 11:56:43 nvmf_dif -- nvmf/common.sh@246 -- # 
remove_spdk_ns 00:26:14.087 11:56:43 nvmf_dif -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:26:14.087 11:56:43 nvmf_dif -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:26:14.087 11:56:43 nvmf_dif -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:26:14.087 11:56:43 nvmf_dif -- nvmf/common.sh@300 -- # return 0 00:26:14.087 00:26:14.087 real 0m59.732s 00:26:14.087 user 3m49.007s 00:26:14.087 sys 0m17.937s 00:26:14.087 11:56:43 nvmf_dif -- common/autotest_common.sh@1130 -- # xtrace_disable 00:26:14.087 ************************************ 00:26:14.087 11:56:43 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:26:14.087 END TEST nvmf_dif 00:26:14.087 ************************************ 00:26:14.087 11:56:43 -- spdk/autotest.sh@290 -- # run_test nvmf_abort_qd_sizes /home/vagrant/spdk_repo/spdk/test/nvmf/target/abort_qd_sizes.sh 00:26:14.087 11:56:43 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:26:14.087 11:56:43 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:26:14.087 11:56:43 -- common/autotest_common.sh@10 -- # set +x 00:26:14.087 ************************************ 00:26:14.087 START TEST nvmf_abort_qd_sizes 00:26:14.087 ************************************ 00:26:14.087 11:56:43 nvmf_abort_qd_sizes -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/abort_qd_sizes.sh 00:26:14.087 * Looking for test storage... 00:26:14.087 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:26:14.087 11:56:44 nvmf_abort_qd_sizes -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:26:14.087 11:56:44 nvmf_abort_qd_sizes -- common/autotest_common.sh@1693 -- # lcov --version 00:26:14.087 11:56:44 nvmf_abort_qd_sizes -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:26:14.087 11:56:44 nvmf_abort_qd_sizes -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:26:14.087 11:56:44 nvmf_abort_qd_sizes -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:26:14.087 11:56:44 nvmf_abort_qd_sizes -- scripts/common.sh@333 -- # local ver1 ver1_l 00:26:14.087 11:56:44 nvmf_abort_qd_sizes -- scripts/common.sh@334 -- # local ver2 ver2_l 00:26:14.087 11:56:44 nvmf_abort_qd_sizes -- scripts/common.sh@336 -- # IFS=.-: 00:26:14.087 11:56:44 nvmf_abort_qd_sizes -- scripts/common.sh@336 -- # read -ra ver1 00:26:14.087 11:56:44 nvmf_abort_qd_sizes -- scripts/common.sh@337 -- # IFS=.-: 00:26:14.087 11:56:44 nvmf_abort_qd_sizes -- scripts/common.sh@337 -- # read -ra ver2 00:26:14.087 11:56:44 nvmf_abort_qd_sizes -- scripts/common.sh@338 -- # local 'op=<' 00:26:14.087 11:56:44 nvmf_abort_qd_sizes -- scripts/common.sh@340 -- # ver1_l=2 00:26:14.087 11:56:44 nvmf_abort_qd_sizes -- scripts/common.sh@341 -- # ver2_l=1 00:26:14.087 11:56:44 nvmf_abort_qd_sizes -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:26:14.087 11:56:44 nvmf_abort_qd_sizes -- scripts/common.sh@344 -- # case "$op" in 00:26:14.087 11:56:44 nvmf_abort_qd_sizes -- scripts/common.sh@345 -- # : 1 00:26:14.087 11:56:44 nvmf_abort_qd_sizes -- scripts/common.sh@364 -- # (( v = 0 )) 00:26:14.087 11:56:44 nvmf_abort_qd_sizes -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:26:14.087 11:56:44 nvmf_abort_qd_sizes -- scripts/common.sh@365 -- # decimal 1 00:26:14.087 11:56:44 nvmf_abort_qd_sizes -- scripts/common.sh@353 -- # local d=1 00:26:14.087 11:56:44 nvmf_abort_qd_sizes -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:26:14.087 11:56:44 nvmf_abort_qd_sizes -- scripts/common.sh@355 -- # echo 1 00:26:14.087 11:56:44 nvmf_abort_qd_sizes -- scripts/common.sh@365 -- # ver1[v]=1 00:26:14.087 11:56:44 nvmf_abort_qd_sizes -- scripts/common.sh@366 -- # decimal 2 00:26:14.087 11:56:44 nvmf_abort_qd_sizes -- scripts/common.sh@353 -- # local d=2 00:26:14.087 11:56:44 nvmf_abort_qd_sizes -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:26:14.087 11:56:44 nvmf_abort_qd_sizes -- scripts/common.sh@355 -- # echo 2 00:26:14.087 11:56:44 nvmf_abort_qd_sizes -- scripts/common.sh@366 -- # ver2[v]=2 00:26:14.087 11:56:44 nvmf_abort_qd_sizes -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:26:14.087 11:56:44 nvmf_abort_qd_sizes -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:26:14.087 11:56:44 nvmf_abort_qd_sizes -- scripts/common.sh@368 -- # return 0 00:26:14.087 11:56:44 nvmf_abort_qd_sizes -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:26:14.087 11:56:44 nvmf_abort_qd_sizes -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:26:14.087 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:26:14.087 --rc genhtml_branch_coverage=1 00:26:14.087 --rc genhtml_function_coverage=1 00:26:14.087 --rc genhtml_legend=1 00:26:14.087 --rc geninfo_all_blocks=1 00:26:14.087 --rc geninfo_unexecuted_blocks=1 00:26:14.087 00:26:14.088 ' 00:26:14.088 11:56:44 nvmf_abort_qd_sizes -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:26:14.088 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:26:14.088 --rc genhtml_branch_coverage=1 00:26:14.088 --rc genhtml_function_coverage=1 00:26:14.088 --rc genhtml_legend=1 00:26:14.088 --rc geninfo_all_blocks=1 00:26:14.088 --rc geninfo_unexecuted_blocks=1 00:26:14.088 00:26:14.088 ' 00:26:14.088 11:56:44 nvmf_abort_qd_sizes -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:26:14.088 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:26:14.088 --rc genhtml_branch_coverage=1 00:26:14.088 --rc genhtml_function_coverage=1 00:26:14.088 --rc genhtml_legend=1 00:26:14.088 --rc geninfo_all_blocks=1 00:26:14.088 --rc geninfo_unexecuted_blocks=1 00:26:14.088 00:26:14.088 ' 00:26:14.088 11:56:44 nvmf_abort_qd_sizes -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:26:14.088 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:26:14.088 --rc genhtml_branch_coverage=1 00:26:14.088 --rc genhtml_function_coverage=1 00:26:14.088 --rc genhtml_legend=1 00:26:14.088 --rc geninfo_all_blocks=1 00:26:14.088 --rc geninfo_unexecuted_blocks=1 00:26:14.088 00:26:14.088 ' 00:26:14.088 11:56:44 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@14 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:26:14.088 11:56:44 nvmf_abort_qd_sizes -- nvmf/common.sh@7 -- # uname -s 00:26:14.088 11:56:44 nvmf_abort_qd_sizes -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:26:14.088 11:56:44 nvmf_abort_qd_sizes -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:26:14.088 11:56:44 nvmf_abort_qd_sizes -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:26:14.088 11:56:44 nvmf_abort_qd_sizes -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:26:14.088 11:56:44 nvmf_abort_qd_sizes -- nvmf/common.sh@12 -- # 
NVMF_IP_PREFIX=192.168.100 00:26:14.088 11:56:44 nvmf_abort_qd_sizes -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:26:14.088 11:56:44 nvmf_abort_qd_sizes -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:26:14.088 11:56:44 nvmf_abort_qd_sizes -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:26:14.088 11:56:44 nvmf_abort_qd_sizes -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:26:14.088 11:56:44 nvmf_abort_qd_sizes -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:26:14.088 11:56:44 nvmf_abort_qd_sizes -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:f820f793-c892-4aa4-a8a4-5ed3fda41d6c 00:26:14.088 11:56:44 nvmf_abort_qd_sizes -- nvmf/common.sh@18 -- # NVME_HOSTID=f820f793-c892-4aa4-a8a4-5ed3fda41d6c 00:26:14.088 11:56:44 nvmf_abort_qd_sizes -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:26:14.088 11:56:44 nvmf_abort_qd_sizes -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:26:14.088 11:56:44 nvmf_abort_qd_sizes -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:26:14.088 11:56:44 nvmf_abort_qd_sizes -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:26:14.088 11:56:44 nvmf_abort_qd_sizes -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:26:14.088 11:56:44 nvmf_abort_qd_sizes -- scripts/common.sh@15 -- # shopt -s extglob 00:26:14.088 11:56:44 nvmf_abort_qd_sizes -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:26:14.088 11:56:44 nvmf_abort_qd_sizes -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:26:14.088 11:56:44 nvmf_abort_qd_sizes -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:26:14.088 11:56:44 nvmf_abort_qd_sizes -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:14.088 11:56:44 nvmf_abort_qd_sizes -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:14.088 11:56:44 nvmf_abort_qd_sizes -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:14.088 11:56:44 nvmf_abort_qd_sizes -- paths/export.sh@5 -- # export PATH 00:26:14.088 11:56:44 nvmf_abort_qd_sizes -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:14.088 11:56:44 nvmf_abort_qd_sizes -- nvmf/common.sh@51 -- # : 0 00:26:14.088 11:56:44 nvmf_abort_qd_sizes -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:26:14.088 11:56:44 nvmf_abort_qd_sizes -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:26:14.088 11:56:44 nvmf_abort_qd_sizes -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:26:14.088 11:56:44 nvmf_abort_qd_sizes -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:26:14.088 11:56:44 nvmf_abort_qd_sizes -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:26:14.088 11:56:44 nvmf_abort_qd_sizes -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:26:14.088 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:26:14.088 11:56:44 nvmf_abort_qd_sizes -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:26:14.088 11:56:44 nvmf_abort_qd_sizes -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:26:14.088 11:56:44 nvmf_abort_qd_sizes -- nvmf/common.sh@55 -- # have_pci_nics=0 00:26:14.088 11:56:44 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@70 -- # nvmftestinit 00:26:14.088 11:56:44 nvmf_abort_qd_sizes -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:26:14.088 11:56:44 nvmf_abort_qd_sizes -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:26:14.088 11:56:44 nvmf_abort_qd_sizes -- nvmf/common.sh@476 -- # prepare_net_devs 00:26:14.088 11:56:44 nvmf_abort_qd_sizes -- nvmf/common.sh@438 -- # local -g is_hw=no 00:26:14.088 11:56:44 nvmf_abort_qd_sizes -- nvmf/common.sh@440 -- # remove_spdk_ns 00:26:14.088 11:56:44 nvmf_abort_qd_sizes -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:26:14.088 11:56:44 nvmf_abort_qd_sizes -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:26:14.088 11:56:44 nvmf_abort_qd_sizes -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:26:14.088 11:56:44 nvmf_abort_qd_sizes -- nvmf/common.sh@442 -- # [[ virt != virt ]] 00:26:14.088 11:56:44 nvmf_abort_qd_sizes -- nvmf/common.sh@444 -- # [[ no == yes ]] 00:26:14.088 11:56:44 nvmf_abort_qd_sizes -- nvmf/common.sh@451 -- # [[ virt == phy ]] 00:26:14.088 11:56:44 nvmf_abort_qd_sizes -- nvmf/common.sh@454 -- # [[ virt == phy-fallback ]] 00:26:14.088 11:56:44 nvmf_abort_qd_sizes -- nvmf/common.sh@459 -- # [[ tcp == tcp ]] 00:26:14.088 11:56:44 nvmf_abort_qd_sizes -- nvmf/common.sh@460 -- # nvmf_veth_init 00:26:14.088 11:56:44 nvmf_abort_qd_sizes -- nvmf/common.sh@145 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:26:14.088 11:56:44 nvmf_abort_qd_sizes -- nvmf/common.sh@146 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.2 00:26:14.088 11:56:44 nvmf_abort_qd_sizes -- nvmf/common.sh@147 -- # NVMF_FIRST_TARGET_IP=10.0.0.3 00:26:14.088 11:56:44 nvmf_abort_qd_sizes -- nvmf/common.sh@148 -- # NVMF_SECOND_TARGET_IP=10.0.0.4 00:26:14.088 11:56:44 nvmf_abort_qd_sizes -- nvmf/common.sh@149 -- # NVMF_INITIATOR_IP=10.0.0.1 00:26:14.088 11:56:44 nvmf_abort_qd_sizes -- nvmf/common.sh@150 -- # NVMF_BRIDGE=nvmf_br 00:26:14.088 11:56:44 nvmf_abort_qd_sizes -- nvmf/common.sh@151 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:26:14.088 11:56:44 nvmf_abort_qd_sizes -- nvmf/common.sh@152 -- # 
NVMF_INITIATOR_INTERFACE2=nvmf_init_if2 00:26:14.088 11:56:44 nvmf_abort_qd_sizes -- nvmf/common.sh@153 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:26:14.088 11:56:44 nvmf_abort_qd_sizes -- nvmf/common.sh@154 -- # NVMF_INITIATOR_BRIDGE2=nvmf_init_br2 00:26:14.088 11:56:44 nvmf_abort_qd_sizes -- nvmf/common.sh@155 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:26:14.088 11:56:44 nvmf_abort_qd_sizes -- nvmf/common.sh@156 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:26:14.088 11:56:44 nvmf_abort_qd_sizes -- nvmf/common.sh@157 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:26:14.088 11:56:44 nvmf_abort_qd_sizes -- nvmf/common.sh@158 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:26:14.088 11:56:44 nvmf_abort_qd_sizes -- nvmf/common.sh@159 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:26:14.088 11:56:44 nvmf_abort_qd_sizes -- nvmf/common.sh@160 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:26:14.088 11:56:44 nvmf_abort_qd_sizes -- nvmf/common.sh@162 -- # ip link set nvmf_init_br nomaster 00:26:14.088 Cannot find device "nvmf_init_br" 00:26:14.088 11:56:44 nvmf_abort_qd_sizes -- nvmf/common.sh@162 -- # true 00:26:14.088 11:56:44 nvmf_abort_qd_sizes -- nvmf/common.sh@163 -- # ip link set nvmf_init_br2 nomaster 00:26:14.088 Cannot find device "nvmf_init_br2" 00:26:14.088 11:56:44 nvmf_abort_qd_sizes -- nvmf/common.sh@163 -- # true 00:26:14.088 11:56:44 nvmf_abort_qd_sizes -- nvmf/common.sh@164 -- # ip link set nvmf_tgt_br nomaster 00:26:14.088 Cannot find device "nvmf_tgt_br" 00:26:14.088 11:56:44 nvmf_abort_qd_sizes -- nvmf/common.sh@164 -- # true 00:26:14.088 11:56:44 nvmf_abort_qd_sizes -- nvmf/common.sh@165 -- # ip link set nvmf_tgt_br2 nomaster 00:26:14.348 Cannot find device "nvmf_tgt_br2" 00:26:14.348 11:56:44 nvmf_abort_qd_sizes -- nvmf/common.sh@165 -- # true 00:26:14.348 11:56:44 nvmf_abort_qd_sizes -- nvmf/common.sh@166 -- # ip link set nvmf_init_br down 00:26:14.348 Cannot find device "nvmf_init_br" 00:26:14.348 11:56:44 nvmf_abort_qd_sizes -- nvmf/common.sh@166 -- # true 00:26:14.348 11:56:44 nvmf_abort_qd_sizes -- nvmf/common.sh@167 -- # ip link set nvmf_init_br2 down 00:26:14.348 Cannot find device "nvmf_init_br2" 00:26:14.348 11:56:44 nvmf_abort_qd_sizes -- nvmf/common.sh@167 -- # true 00:26:14.348 11:56:44 nvmf_abort_qd_sizes -- nvmf/common.sh@168 -- # ip link set nvmf_tgt_br down 00:26:14.348 Cannot find device "nvmf_tgt_br" 00:26:14.348 11:56:44 nvmf_abort_qd_sizes -- nvmf/common.sh@168 -- # true 00:26:14.348 11:56:44 nvmf_abort_qd_sizes -- nvmf/common.sh@169 -- # ip link set nvmf_tgt_br2 down 00:26:14.348 Cannot find device "nvmf_tgt_br2" 00:26:14.348 11:56:44 nvmf_abort_qd_sizes -- nvmf/common.sh@169 -- # true 00:26:14.348 11:56:44 nvmf_abort_qd_sizes -- nvmf/common.sh@170 -- # ip link delete nvmf_br type bridge 00:26:14.348 Cannot find device "nvmf_br" 00:26:14.348 11:56:44 nvmf_abort_qd_sizes -- nvmf/common.sh@170 -- # true 00:26:14.348 11:56:44 nvmf_abort_qd_sizes -- nvmf/common.sh@171 -- # ip link delete nvmf_init_if 00:26:14.348 Cannot find device "nvmf_init_if" 00:26:14.348 11:56:44 nvmf_abort_qd_sizes -- nvmf/common.sh@171 -- # true 00:26:14.348 11:56:44 nvmf_abort_qd_sizes -- nvmf/common.sh@172 -- # ip link delete nvmf_init_if2 00:26:14.348 Cannot find device "nvmf_init_if2" 00:26:14.348 11:56:44 nvmf_abort_qd_sizes -- nvmf/common.sh@172 -- # true 00:26:14.348 11:56:44 nvmf_abort_qd_sizes -- nvmf/common.sh@173 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:26:14.348 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 
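The "Cannot find device" and "Cannot open network namespace" messages above are expected: nvmf_veth_init first tears down whatever a previous run left behind, and each teardown command is followed by true so a missing device does not fail the test. The commands that follow rebuild the test topology: a target network namespace, four veth pairs, a bridge joining the peer ends, and SPDK-tagged iptables rules. A rough standalone sketch of the same layout (interface names and addresses taken from the log; the exact options used in nvmf/common.sh may differ) is:

  ip netns add nvmf_tgt_ns_spdk
  # two initiator-side veth pairs and two target-side veth pairs
  ip link add nvmf_init_if  type veth peer name nvmf_init_br
  ip link add nvmf_init_if2 type veth peer name nvmf_init_br2
  ip link add nvmf_tgt_if   type veth peer name nvmf_tgt_br
  ip link add nvmf_tgt_if2  type veth peer name nvmf_tgt_br2
  # the target ends live inside the namespace
  ip link set nvmf_tgt_if  netns nvmf_tgt_ns_spdk
  ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk
  # 10.0.0.1/.2 on the initiator side, 10.0.0.3/.4 inside the namespace
  ip addr add 10.0.0.1/24 dev nvmf_init_if
  ip addr add 10.0.0.2/24 dev nvmf_init_if2
  ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if
  ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2
  for dev in nvmf_init_if nvmf_init_if2 nvmf_init_br nvmf_init_br2 nvmf_tgt_br nvmf_tgt_br2; do ip link set "$dev" up; done
  ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
  ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up
  ip netns exec nvmf_tgt_ns_spdk ip link set lo up
  # a bridge ties the peer ends together so initiator and target can reach each other
  ip link add nvmf_br type bridge
  ip link set nvmf_br up
  for dev in nvmf_init_br nvmf_init_br2 nvmf_tgt_br nvmf_tgt_br2; do ip link set "$dev" master nvmf_br; done
  # allow NVMe/TCP traffic in; the SPDK_NVMF comment lets the cleanup remove only these rules later
  iptables -I INPUT 1 -i nvmf_init_if  -p tcp --dport 4420 -j ACCEPT -m comment --comment SPDK_NVMF
  iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT -m comment --comment SPDK_NVMF
  iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment SPDK_NVMF
  # sanity pings, matching the four checks in the log
  ping -c 1 10.0.0.3 && ping -c 1 10.0.0.4
  ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1
  ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.2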
00:26:14.348 11:56:44 nvmf_abort_qd_sizes -- nvmf/common.sh@173 -- # true 00:26:14.348 11:56:44 nvmf_abort_qd_sizes -- nvmf/common.sh@174 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:26:14.348 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:26:14.348 11:56:44 nvmf_abort_qd_sizes -- nvmf/common.sh@174 -- # true 00:26:14.348 11:56:44 nvmf_abort_qd_sizes -- nvmf/common.sh@177 -- # ip netns add nvmf_tgt_ns_spdk 00:26:14.348 11:56:44 nvmf_abort_qd_sizes -- nvmf/common.sh@180 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:26:14.348 11:56:44 nvmf_abort_qd_sizes -- nvmf/common.sh@181 -- # ip link add nvmf_init_if2 type veth peer name nvmf_init_br2 00:26:14.348 11:56:44 nvmf_abort_qd_sizes -- nvmf/common.sh@182 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:26:14.348 11:56:44 nvmf_abort_qd_sizes -- nvmf/common.sh@183 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:26:14.348 11:56:44 nvmf_abort_qd_sizes -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:26:14.348 11:56:44 nvmf_abort_qd_sizes -- nvmf/common.sh@187 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:26:14.348 11:56:44 nvmf_abort_qd_sizes -- nvmf/common.sh@190 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:26:14.348 11:56:44 nvmf_abort_qd_sizes -- nvmf/common.sh@191 -- # ip addr add 10.0.0.2/24 dev nvmf_init_if2 00:26:14.348 11:56:44 nvmf_abort_qd_sizes -- nvmf/common.sh@192 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if 00:26:14.348 11:56:44 nvmf_abort_qd_sizes -- nvmf/common.sh@193 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2 00:26:14.348 11:56:44 nvmf_abort_qd_sizes -- nvmf/common.sh@196 -- # ip link set nvmf_init_if up 00:26:14.348 11:56:44 nvmf_abort_qd_sizes -- nvmf/common.sh@197 -- # ip link set nvmf_init_if2 up 00:26:14.348 11:56:44 nvmf_abort_qd_sizes -- nvmf/common.sh@198 -- # ip link set nvmf_init_br up 00:26:14.348 11:56:44 nvmf_abort_qd_sizes -- nvmf/common.sh@199 -- # ip link set nvmf_init_br2 up 00:26:14.348 11:56:44 nvmf_abort_qd_sizes -- nvmf/common.sh@200 -- # ip link set nvmf_tgt_br up 00:26:14.348 11:56:44 nvmf_abort_qd_sizes -- nvmf/common.sh@201 -- # ip link set nvmf_tgt_br2 up 00:26:14.348 11:56:44 nvmf_abort_qd_sizes -- nvmf/common.sh@202 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:26:14.348 11:56:44 nvmf_abort_qd_sizes -- nvmf/common.sh@203 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:26:14.348 11:56:44 nvmf_abort_qd_sizes -- nvmf/common.sh@204 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:26:14.348 11:56:44 nvmf_abort_qd_sizes -- nvmf/common.sh@207 -- # ip link add nvmf_br type bridge 00:26:14.348 11:56:44 nvmf_abort_qd_sizes -- nvmf/common.sh@208 -- # ip link set nvmf_br up 00:26:14.348 11:56:44 nvmf_abort_qd_sizes -- nvmf/common.sh@211 -- # ip link set nvmf_init_br master nvmf_br 00:26:14.348 11:56:44 nvmf_abort_qd_sizes -- nvmf/common.sh@212 -- # ip link set nvmf_init_br2 master nvmf_br 00:26:14.348 11:56:44 nvmf_abort_qd_sizes -- nvmf/common.sh@213 -- # ip link set nvmf_tgt_br master nvmf_br 00:26:14.608 11:56:44 nvmf_abort_qd_sizes -- nvmf/common.sh@214 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:26:14.608 11:56:44 nvmf_abort_qd_sizes -- nvmf/common.sh@217 -- # ipts -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:26:14.608 11:56:44 nvmf_abort_qd_sizes -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j 
ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT' 00:26:14.608 11:56:44 nvmf_abort_qd_sizes -- nvmf/common.sh@218 -- # ipts -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT 00:26:14.608 11:56:44 nvmf_abort_qd_sizes -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT' 00:26:14.608 11:56:44 nvmf_abort_qd_sizes -- nvmf/common.sh@219 -- # ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:26:14.608 11:56:44 nvmf_abort_qd_sizes -- nvmf/common.sh@790 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT' 00:26:14.608 11:56:44 nvmf_abort_qd_sizes -- nvmf/common.sh@222 -- # ping -c 1 10.0.0.3 00:26:14.608 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:26:14.608 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.095 ms 00:26:14.608 00:26:14.608 --- 10.0.0.3 ping statistics --- 00:26:14.608 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:26:14.608 rtt min/avg/max/mdev = 0.095/0.095/0.095/0.000 ms 00:26:14.608 11:56:44 nvmf_abort_qd_sizes -- nvmf/common.sh@223 -- # ping -c 1 10.0.0.4 00:26:14.608 PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data. 00:26:14.608 64 bytes from 10.0.0.4: icmp_seq=1 ttl=64 time=0.057 ms 00:26:14.608 00:26:14.608 --- 10.0.0.4 ping statistics --- 00:26:14.608 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:26:14.608 rtt min/avg/max/mdev = 0.057/0.057/0.057/0.000 ms 00:26:14.608 11:56:44 nvmf_abort_qd_sizes -- nvmf/common.sh@224 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:26:14.608 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:26:14.608 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.031 ms 00:26:14.608 00:26:14.608 --- 10.0.0.1 ping statistics --- 00:26:14.608 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:26:14.608 rtt min/avg/max/mdev = 0.031/0.031/0.031/0.000 ms 00:26:14.608 11:56:44 nvmf_abort_qd_sizes -- nvmf/common.sh@225 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.2 00:26:14.608 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:26:14.608 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.048 ms 00:26:14.608 00:26:14.608 --- 10.0.0.2 ping statistics --- 00:26:14.608 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:26:14.608 rtt min/avg/max/mdev = 0.048/0.048/0.048/0.000 ms 00:26:14.608 11:56:44 nvmf_abort_qd_sizes -- nvmf/common.sh@227 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:26:14.608 11:56:44 nvmf_abort_qd_sizes -- nvmf/common.sh@461 -- # return 0 00:26:14.608 11:56:44 nvmf_abort_qd_sizes -- nvmf/common.sh@478 -- # '[' iso == iso ']' 00:26:14.608 11:56:44 nvmf_abort_qd_sizes -- nvmf/common.sh@479 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:26:15.176 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:26:15.176 0000:00:10.0 (1b36 0010): nvme -> uio_pci_generic 00:26:15.435 0000:00:11.0 (1b36 0010): nvme -> uio_pci_generic 00:26:15.435 11:56:45 nvmf_abort_qd_sizes -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:26:15.435 11:56:45 nvmf_abort_qd_sizes -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:26:15.435 11:56:45 nvmf_abort_qd_sizes -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:26:15.435 11:56:45 nvmf_abort_qd_sizes -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:26:15.435 11:56:45 nvmf_abort_qd_sizes -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:26:15.435 11:56:45 nvmf_abort_qd_sizes -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:26:15.435 11:56:45 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@71 -- # nvmfappstart -m 0xf 00:26:15.435 11:56:45 nvmf_abort_qd_sizes -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:26:15.435 11:56:45 nvmf_abort_qd_sizes -- common/autotest_common.sh@726 -- # xtrace_disable 00:26:15.435 11:56:45 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 00:26:15.435 11:56:45 nvmf_abort_qd_sizes -- nvmf/common.sh@509 -- # nvmfpid=101328 00:26:15.435 11:56:45 nvmf_abort_qd_sizes -- nvmf/common.sh@508 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xf 00:26:15.435 11:56:45 nvmf_abort_qd_sizes -- nvmf/common.sh@510 -- # waitforlisten 101328 00:26:15.435 11:56:45 nvmf_abort_qd_sizes -- common/autotest_common.sh@835 -- # '[' -z 101328 ']' 00:26:15.435 11:56:45 nvmf_abort_qd_sizes -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:26:15.435 11:56:45 nvmf_abort_qd_sizes -- common/autotest_common.sh@840 -- # local max_retries=100 00:26:15.435 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:26:15.435 11:56:45 nvmf_abort_qd_sizes -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:26:15.435 11:56:45 nvmf_abort_qd_sizes -- common/autotest_common.sh@844 -- # xtrace_disable 00:26:15.435 11:56:45 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 00:26:15.435 [2024-11-28 11:56:45.511166] Starting SPDK v25.01-pre git sha1 35cd3e84d / DPDK 24.11.0-rc4 initialization... 00:26:15.435 [2024-11-28 11:56:45.511267] [ DPDK EAL parameters: nvmf -c 0xf --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:26:15.694 [2024-11-28 11:56:45.639279] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.11.0-rc4 is used. There is no support for it in SPDK. 
Enabled only for validation. 00:26:15.694 [2024-11-28 11:56:45.669374] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:26:15.694 [2024-11-28 11:56:45.712523] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:26:15.694 [2024-11-28 11:56:45.712603] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:26:15.694 [2024-11-28 11:56:45.712618] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:26:15.694 [2024-11-28 11:56:45.712629] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:26:15.694 [2024-11-28 11:56:45.712639] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:26:15.694 [2024-11-28 11:56:45.713997] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:26:15.694 [2024-11-28 11:56:45.714142] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:26:15.694 [2024-11-28 11:56:45.714264] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:26:15.694 [2024-11-28 11:56:45.714264] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:26:15.694 [2024-11-28 11:56:45.779270] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:26:15.954 11:56:45 nvmf_abort_qd_sizes -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:26:15.954 11:56:45 nvmf_abort_qd_sizes -- common/autotest_common.sh@868 -- # return 0 00:26:15.954 11:56:45 nvmf_abort_qd_sizes -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:26:15.954 11:56:45 nvmf_abort_qd_sizes -- common/autotest_common.sh@732 -- # xtrace_disable 00:26:15.954 11:56:45 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 00:26:15.954 11:56:45 nvmf_abort_qd_sizes -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:26:15.954 11:56:45 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@73 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini || :; clean_kernel_target' SIGINT SIGTERM EXIT 00:26:15.954 11:56:45 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@75 -- # mapfile -t nvmes 00:26:15.954 11:56:45 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@75 -- # nvme_in_userspace 00:26:15.954 11:56:45 nvmf_abort_qd_sizes -- scripts/common.sh@312 -- # local bdf bdfs 00:26:15.954 11:56:45 nvmf_abort_qd_sizes -- scripts/common.sh@313 -- # local nvmes 00:26:15.954 11:56:45 nvmf_abort_qd_sizes -- scripts/common.sh@315 -- # [[ -n '' ]] 00:26:15.954 11:56:45 nvmf_abort_qd_sizes -- scripts/common.sh@318 -- # nvmes=($(iter_pci_class_code 01 08 02)) 00:26:15.954 11:56:45 nvmf_abort_qd_sizes -- scripts/common.sh@318 -- # iter_pci_class_code 01 08 02 00:26:15.954 11:56:45 nvmf_abort_qd_sizes -- scripts/common.sh@298 -- # local bdf= 00:26:15.954 11:56:45 nvmf_abort_qd_sizes -- scripts/common.sh@300 -- # iter_all_pci_class_code 01 08 02 00:26:15.954 11:56:45 nvmf_abort_qd_sizes -- scripts/common.sh@233 -- # local class 00:26:15.954 11:56:45 nvmf_abort_qd_sizes -- scripts/common.sh@234 -- # local subclass 00:26:15.954 11:56:45 nvmf_abort_qd_sizes -- scripts/common.sh@235 -- # local progif 00:26:15.954 11:56:45 nvmf_abort_qd_sizes -- scripts/common.sh@236 -- # printf %02x 1 00:26:15.954 11:56:45 nvmf_abort_qd_sizes -- scripts/common.sh@236 -- # class=01 00:26:15.954 11:56:45 nvmf_abort_qd_sizes -- scripts/common.sh@237 -- # printf %02x 8 00:26:15.954 11:56:45 nvmf_abort_qd_sizes -- 
scripts/common.sh@237 -- # subclass=08 00:26:15.954 11:56:45 nvmf_abort_qd_sizes -- scripts/common.sh@238 -- # printf %02x 2 00:26:15.954 11:56:45 nvmf_abort_qd_sizes -- scripts/common.sh@238 -- # progif=02 00:26:15.954 11:56:45 nvmf_abort_qd_sizes -- scripts/common.sh@240 -- # hash lspci 00:26:15.954 11:56:45 nvmf_abort_qd_sizes -- scripts/common.sh@241 -- # '[' 02 '!=' 00 ']' 00:26:15.954 11:56:45 nvmf_abort_qd_sizes -- scripts/common.sh@242 -- # lspci -mm -n -D 00:26:15.954 11:56:45 nvmf_abort_qd_sizes -- scripts/common.sh@243 -- # grep -i -- -p02 00:26:15.954 11:56:45 nvmf_abort_qd_sizes -- scripts/common.sh@244 -- # awk -v 'cc="0108"' -F ' ' '{if (cc ~ $2) print $1}' 00:26:15.954 11:56:45 nvmf_abort_qd_sizes -- scripts/common.sh@245 -- # tr -d '"' 00:26:15.954 11:56:45 nvmf_abort_qd_sizes -- scripts/common.sh@300 -- # for bdf in $(iter_all_pci_class_code "$@") 00:26:15.954 11:56:45 nvmf_abort_qd_sizes -- scripts/common.sh@301 -- # pci_can_use 0000:00:10.0 00:26:15.954 11:56:45 nvmf_abort_qd_sizes -- scripts/common.sh@18 -- # local i 00:26:15.955 11:56:45 nvmf_abort_qd_sizes -- scripts/common.sh@21 -- # [[ =~ 0000:00:10.0 ]] 00:26:15.955 11:56:45 nvmf_abort_qd_sizes -- scripts/common.sh@25 -- # [[ -z '' ]] 00:26:15.955 11:56:45 nvmf_abort_qd_sizes -- scripts/common.sh@27 -- # return 0 00:26:15.955 11:56:45 nvmf_abort_qd_sizes -- scripts/common.sh@302 -- # echo 0000:00:10.0 00:26:15.955 11:56:45 nvmf_abort_qd_sizes -- scripts/common.sh@300 -- # for bdf in $(iter_all_pci_class_code "$@") 00:26:15.955 11:56:45 nvmf_abort_qd_sizes -- scripts/common.sh@301 -- # pci_can_use 0000:00:11.0 00:26:15.955 11:56:45 nvmf_abort_qd_sizes -- scripts/common.sh@18 -- # local i 00:26:15.955 11:56:45 nvmf_abort_qd_sizes -- scripts/common.sh@21 -- # [[ =~ 0000:00:11.0 ]] 00:26:15.955 11:56:45 nvmf_abort_qd_sizes -- scripts/common.sh@25 -- # [[ -z '' ]] 00:26:15.955 11:56:45 nvmf_abort_qd_sizes -- scripts/common.sh@27 -- # return 0 00:26:15.955 11:56:45 nvmf_abort_qd_sizes -- scripts/common.sh@302 -- # echo 0000:00:11.0 00:26:15.955 11:56:45 nvmf_abort_qd_sizes -- scripts/common.sh@321 -- # for bdf in "${nvmes[@]}" 00:26:15.955 11:56:45 nvmf_abort_qd_sizes -- scripts/common.sh@322 -- # [[ -e /sys/bus/pci/drivers/nvme/0000:00:10.0 ]] 00:26:15.955 11:56:45 nvmf_abort_qd_sizes -- scripts/common.sh@323 -- # uname -s 00:26:15.955 11:56:45 nvmf_abort_qd_sizes -- scripts/common.sh@323 -- # [[ Linux == FreeBSD ]] 00:26:15.955 11:56:45 nvmf_abort_qd_sizes -- scripts/common.sh@326 -- # bdfs+=("$bdf") 00:26:15.955 11:56:45 nvmf_abort_qd_sizes -- scripts/common.sh@321 -- # for bdf in "${nvmes[@]}" 00:26:15.955 11:56:45 nvmf_abort_qd_sizes -- scripts/common.sh@322 -- # [[ -e /sys/bus/pci/drivers/nvme/0000:00:11.0 ]] 00:26:15.955 11:56:45 nvmf_abort_qd_sizes -- scripts/common.sh@323 -- # uname -s 00:26:15.955 11:56:45 nvmf_abort_qd_sizes -- scripts/common.sh@323 -- # [[ Linux == FreeBSD ]] 00:26:15.955 11:56:45 nvmf_abort_qd_sizes -- scripts/common.sh@326 -- # bdfs+=("$bdf") 00:26:15.955 11:56:45 nvmf_abort_qd_sizes -- scripts/common.sh@328 -- # (( 2 )) 00:26:15.955 11:56:45 nvmf_abort_qd_sizes -- scripts/common.sh@329 -- # printf '%s\n' 0000:00:10.0 0000:00:11.0 00:26:15.955 11:56:45 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@76 -- # (( 2 > 0 )) 00:26:15.955 11:56:45 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@78 -- # nvme=0000:00:10.0 00:26:15.955 11:56:45 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@80 -- # run_test spdk_target_abort spdk_target 00:26:15.955 11:56:45 nvmf_abort_qd_sizes -- 
common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:26:15.955 11:56:45 nvmf_abort_qd_sizes -- common/autotest_common.sh@1111 -- # xtrace_disable 00:26:15.955 11:56:45 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 00:26:15.955 ************************************ 00:26:15.955 START TEST spdk_target_abort 00:26:15.955 ************************************ 00:26:15.955 11:56:45 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@1129 -- # spdk_target 00:26:15.955 11:56:45 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@43 -- # local name=spdk_target 00:26:15.955 11:56:45 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@45 -- # rpc_cmd bdev_nvme_attach_controller -t pcie -a 0000:00:10.0 -b spdk_target 00:26:15.955 11:56:45 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:15.955 11:56:45 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:26:15.955 spdk_targetn1 00:26:15.955 11:56:46 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:15.955 11:56:46 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@47 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:26:15.955 11:56:46 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:15.955 11:56:46 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:26:15.955 [2024-11-28 11:56:46.013374] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:26:15.955 11:56:46 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:15.955 11:56:46 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@48 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:testnqn -a -s SPDKISFASTANDAWESOME 00:26:15.955 11:56:46 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:15.955 11:56:46 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:26:15.955 11:56:46 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:15.955 11:56:46 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@49 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:testnqn spdk_targetn1 00:26:15.955 11:56:46 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:15.955 11:56:46 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:26:15.955 11:56:46 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:15.955 11:56:46 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@50 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:testnqn -t tcp -a 10.0.0.3 -s 4420 00:26:15.955 11:56:46 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:15.955 11:56:46 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:26:15.955 [2024-11-28 11:56:46.049790] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:26:15.955 11:56:46 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:15.955 11:56:46 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@52 -- # rabort tcp IPv4 10.0.0.3 4420 nqn.2016-06.io.spdk:testnqn 00:26:15.955 11:56:46 
nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@17 -- # local trtype=tcp 00:26:15.955 11:56:46 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@18 -- # local adrfam=IPv4 00:26:15.955 11:56:46 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@19 -- # local traddr=10.0.0.3 00:26:15.955 11:56:46 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@20 -- # local trsvcid=4420 00:26:15.955 11:56:46 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@21 -- # local subnqn=nqn.2016-06.io.spdk:testnqn 00:26:15.955 11:56:46 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@23 -- # local qds qd 00:26:15.955 11:56:46 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@24 -- # local target r 00:26:15.955 11:56:46 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@26 -- # qds=(4 24 64) 00:26:15.955 11:56:46 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:26:15.955 11:56:46 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@29 -- # target=trtype:tcp 00:26:15.955 11:56:46 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:26:15.955 11:56:46 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4' 00:26:15.955 11:56:46 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:26:15.955 11:56:46 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.3' 00:26:15.955 11:56:46 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:26:15.955 11:56:46 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.3 trsvcid:4420' 00:26:15.955 11:56:46 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:26:15.955 11:56:46 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.3 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:26:15.955 11:56:46 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:26:15.955 11:56:46 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@34 -- # /home/vagrant/spdk_repo/spdk/build/examples/abort -q 4 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.3 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:26:19.247 Initializing NVMe Controllers 00:26:19.247 Attached to NVMe over Fabrics controller at 10.0.0.3:4420: nqn.2016-06.io.spdk:testnqn 00:26:19.247 Associating TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 with lcore 0 00:26:19.247 Initialization complete. Launching workers. 
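The spdk_target_abort case above reduces to: attach the local NVMe device at 0000:00:10.0 as a bdev, export it over NVMe/TCP on 10.0.0.3:4420, then drive it with the abort example at queue depths 4, 24 and 64. A rough manual equivalent is sketched below; rpc_cmd in the log is the framework wrapper around scripts/rpc.py, and the transport options shown here (-u 8192) only approximate what the test passes, so details may differ from nvmf/common.sh:

  ip netns exec nvmf_tgt_ns_spdk build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xf &
  # once the RPC socket is listening:
  scripts/rpc.py bdev_nvme_attach_controller -t pcie -a 0000:00:10.0 -b spdk_target   # exposes bdev spdk_targetn1
  scripts/rpc.py nvmf_create_transport -t tcp -u 8192
  scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:testnqn -a -s SPDKISFASTANDAWESOME
  scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:testnqn spdk_targetn1
  scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:testnqn -t tcp -a 10.0.0.3 -s 4420
  for qd in 4 24 64; do
      build/examples/abort -q "$qd" -w rw -M 50 -o 4096 \
          -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.3 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn'
  done

In the per-run summaries that follow, "success" and "unsuccessful" roughly count abort commands that did or did not catch their target I/O before it completed, while "failed" counts aborts that could not be submitted or completed at all, so a large "unsuccessful" number is not an error by itself.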
00:26:19.247 NS: TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 I/O completed: 9075, failed: 0 00:26:19.247 CTRLR: TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:testnqn) abort submitted 1042, failed to submit 8033 00:26:19.247 success 702, unsuccessful 340, failed 0 00:26:19.247 11:56:49 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:26:19.247 11:56:49 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@34 -- # /home/vagrant/spdk_repo/spdk/build/examples/abort -q 24 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.3 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:26:23.445 Initializing NVMe Controllers 00:26:23.445 Attached to NVMe over Fabrics controller at 10.0.0.3:4420: nqn.2016-06.io.spdk:testnqn 00:26:23.445 Associating TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 with lcore 0 00:26:23.445 Initialization complete. Launching workers. 00:26:23.445 NS: TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 I/O completed: 9001, failed: 0 00:26:23.445 CTRLR: TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:testnqn) abort submitted 1145, failed to submit 7856 00:26:23.445 success 408, unsuccessful 737, failed 0 00:26:23.445 11:56:52 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:26:23.445 11:56:52 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@34 -- # /home/vagrant/spdk_repo/spdk/build/examples/abort -q 64 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.3 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:26:25.984 Initializing NVMe Controllers 00:26:25.984 Attached to NVMe over Fabrics controller at 10.0.0.3:4420: nqn.2016-06.io.spdk:testnqn 00:26:25.984 Associating TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 with lcore 0 00:26:25.984 Initialization complete. Launching workers. 
00:26:25.984 NS: TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 I/O completed: 30903, failed: 0 00:26:25.984 CTRLR: TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:testnqn) abort submitted 2285, failed to submit 28618 00:26:25.984 success 411, unsuccessful 1874, failed 0 00:26:25.984 11:56:55 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@54 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:testnqn 00:26:25.984 11:56:55 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:25.984 11:56:55 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:26:25.984 11:56:55 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:25.984 11:56:55 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@55 -- # rpc_cmd bdev_nvme_detach_controller spdk_target 00:26:25.984 11:56:55 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:25.984 11:56:55 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:26:26.252 11:56:56 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:26.252 11:56:56 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@61 -- # killprocess 101328 00:26:26.252 11:56:56 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@954 -- # '[' -z 101328 ']' 00:26:26.252 11:56:56 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@958 -- # kill -0 101328 00:26:26.252 11:56:56 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@959 -- # uname 00:26:26.252 11:56:56 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:26:26.252 11:56:56 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 101328 00:26:26.252 killing process with pid 101328 00:26:26.252 11:56:56 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:26:26.252 11:56:56 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:26:26.252 11:56:56 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@972 -- # echo 'killing process with pid 101328' 00:26:26.252 11:56:56 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@973 -- # kill 101328 00:26:26.252 11:56:56 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@978 -- # wait 101328 00:26:26.551 00:26:26.551 real 0m10.526s 00:26:26.551 user 0m40.626s 00:26:26.551 sys 0m1.957s 00:26:26.551 11:56:56 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@1130 -- # xtrace_disable 00:26:26.551 11:56:56 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:26:26.551 ************************************ 00:26:26.551 END TEST spdk_target_abort 00:26:26.551 ************************************ 00:26:26.551 11:56:56 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@81 -- # run_test kernel_target_abort kernel_target 00:26:26.551 11:56:56 nvmf_abort_qd_sizes -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:26:26.551 11:56:56 nvmf_abort_qd_sizes -- common/autotest_common.sh@1111 -- # xtrace_disable 00:26:26.551 11:56:56 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 00:26:26.551 ************************************ 00:26:26.551 START TEST kernel_target_abort 00:26:26.551 
************************************ 00:26:26.551 11:56:56 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1129 -- # kernel_target 00:26:26.551 11:56:56 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@65 -- # get_main_ns_ip 00:26:26.551 11:56:56 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@769 -- # local ip 00:26:26.551 11:56:56 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@770 -- # ip_candidates=() 00:26:26.551 11:56:56 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@770 -- # local -A ip_candidates 00:26:26.551 11:56:56 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:26.551 11:56:56 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:26.551 11:56:56 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:26:26.551 11:56:56 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:26:26.551 11:56:56 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:26:26.551 11:56:56 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:26:26.551 11:56:56 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:26:26.551 11:56:56 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@65 -- # configure_kernel_target nqn.2016-06.io.spdk:testnqn 10.0.0.1 00:26:26.551 11:56:56 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@660 -- # local kernel_name=nqn.2016-06.io.spdk:testnqn kernel_target_ip=10.0.0.1 00:26:26.551 11:56:56 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@662 -- # nvmet=/sys/kernel/config/nvmet 00:26:26.551 11:56:56 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@663 -- # kernel_subsystem=/sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 00:26:26.551 11:56:56 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@664 -- # kernel_namespace=/sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 00:26:26.551 11:56:56 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@665 -- # kernel_port=/sys/kernel/config/nvmet/ports/1 00:26:26.551 11:56:56 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@667 -- # local block nvme 00:26:26.551 11:56:56 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@669 -- # [[ ! 
-e /sys/module/nvmet ]] 00:26:26.551 11:56:56 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@670 -- # modprobe nvmet 00:26:26.551 11:56:56 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@673 -- # [[ -e /sys/kernel/config/nvmet ]] 00:26:26.551 11:56:56 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@675 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:26:26.826 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:26:26.826 Waiting for block devices as requested 00:26:27.085 0000:00:11.0 (1b36 0010): uio_pci_generic -> nvme 00:26:27.085 0000:00:10.0 (1b36 0010): uio_pci_generic -> nvme 00:26:27.085 11:56:57 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@678 -- # for block in /sys/block/nvme* 00:26:27.085 11:56:57 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@679 -- # [[ -e /sys/block/nvme0n1 ]] 00:26:27.085 11:56:57 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@680 -- # is_block_zoned nvme0n1 00:26:27.085 11:56:57 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1650 -- # local device=nvme0n1 00:26:27.085 11:56:57 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:26:27.085 11:56:57 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:26:27.085 11:56:57 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@681 -- # block_in_use nvme0n1 00:26:27.085 11:56:57 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@381 -- # local block=nvme0n1 pt 00:26:27.085 11:56:57 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@390 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py nvme0n1 00:26:27.345 No valid GPT data, bailing 00:26:27.345 11:56:57 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme0n1 00:26:27.345 11:56:57 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@394 -- # pt= 00:26:27.345 11:56:57 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@395 -- # return 1 00:26:27.345 11:56:57 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@681 -- # nvme=/dev/nvme0n1 00:26:27.345 11:56:57 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@678 -- # for block in /sys/block/nvme* 00:26:27.345 11:56:57 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@679 -- # [[ -e /sys/block/nvme0n2 ]] 00:26:27.345 11:56:57 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@680 -- # is_block_zoned nvme0n2 00:26:27.345 11:56:57 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1650 -- # local device=nvme0n2 00:26:27.345 11:56:57 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme0n2/queue/zoned ]] 00:26:27.345 11:56:57 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:26:27.345 11:56:57 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@681 -- # block_in_use nvme0n2 00:26:27.345 11:56:57 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@381 -- # local block=nvme0n2 pt 00:26:27.345 11:56:57 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@390 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py nvme0n2 00:26:27.345 No valid GPT data, bailing 00:26:27.345 11:56:57 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme0n2 
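Before building the kernel target, the harness walks /sys/block/nvme* to find a block device that is safe to claim: zoned namespaces are skipped, spdk-gpt.py and blkid check for existing partition data ("No valid GPT data, bailing" is the wanted outcome here), and the last unclaimed device becomes the backing store for the kernel target namespace. A simplified stand-in for that selection (the real block_in_use helper also consults spdk-gpt.py and device holders) could look like:

  nvme_dev=
  for blk in /sys/block/nvme*; do
      dev=/dev/${blk##*/}
      zoned=$(cat "$blk/queue/zoned" 2>/dev/null || echo none)
      [ "$zoned" = none ] || continue                          # skip zoned namespaces
      [ -z "$(blkid -s PTTYPE -o value "$dev")" ] || continue  # skip anything carrying a partition table
      nvme_dev=$dev                                            # last free device wins, as in the log
  done
  echo "using ${nvme_dev:-none}"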
00:26:27.345 11:56:57 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@394 -- # pt= 00:26:27.345 11:56:57 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@395 -- # return 1 00:26:27.345 11:56:57 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@681 -- # nvme=/dev/nvme0n2 00:26:27.345 11:56:57 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@678 -- # for block in /sys/block/nvme* 00:26:27.345 11:56:57 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@679 -- # [[ -e /sys/block/nvme0n3 ]] 00:26:27.345 11:56:57 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@680 -- # is_block_zoned nvme0n3 00:26:27.345 11:56:57 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1650 -- # local device=nvme0n3 00:26:27.345 11:56:57 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme0n3/queue/zoned ]] 00:26:27.345 11:56:57 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:26:27.345 11:56:57 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@681 -- # block_in_use nvme0n3 00:26:27.345 11:56:57 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@381 -- # local block=nvme0n3 pt 00:26:27.345 11:56:57 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@390 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py nvme0n3 00:26:27.345 No valid GPT data, bailing 00:26:27.345 11:56:57 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme0n3 00:26:27.345 11:56:57 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@394 -- # pt= 00:26:27.345 11:56:57 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@395 -- # return 1 00:26:27.345 11:56:57 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@681 -- # nvme=/dev/nvme0n3 00:26:27.345 11:56:57 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@678 -- # for block in /sys/block/nvme* 00:26:27.345 11:56:57 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@679 -- # [[ -e /sys/block/nvme1n1 ]] 00:26:27.345 11:56:57 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@680 -- # is_block_zoned nvme1n1 00:26:27.345 11:56:57 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1650 -- # local device=nvme1n1 00:26:27.345 11:56:57 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme1n1/queue/zoned ]] 00:26:27.345 11:56:57 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:26:27.345 11:56:57 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@681 -- # block_in_use nvme1n1 00:26:27.345 11:56:57 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@381 -- # local block=nvme1n1 pt 00:26:27.345 11:56:57 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@390 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py nvme1n1 00:26:27.345 No valid GPT data, bailing 00:26:27.345 11:56:57 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme1n1 00:26:27.346 11:56:57 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@394 -- # pt= 00:26:27.346 11:56:57 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@395 -- # return 1 00:26:27.346 11:56:57 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@681 -- # nvme=/dev/nvme1n1 00:26:27.346 11:56:57 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@684 -- # [[ 
-b /dev/nvme1n1 ]] 00:26:27.346 11:56:57 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@686 -- # mkdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 00:26:27.346 11:56:57 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@687 -- # mkdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 00:26:27.346 11:56:57 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@688 -- # mkdir /sys/kernel/config/nvmet/ports/1 00:26:27.346 11:56:57 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@693 -- # echo SPDK-nqn.2016-06.io.spdk:testnqn 00:26:27.346 11:56:57 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@695 -- # echo 1 00:26:27.346 11:56:57 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@696 -- # echo /dev/nvme1n1 00:26:27.346 11:56:57 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@697 -- # echo 1 00:26:27.346 11:56:57 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@699 -- # echo 10.0.0.1 00:26:27.346 11:56:57 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@700 -- # echo tcp 00:26:27.346 11:56:57 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@701 -- # echo 4420 00:26:27.346 11:56:57 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@702 -- # echo ipv4 00:26:27.346 11:56:57 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@705 -- # ln -s /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn /sys/kernel/config/nvmet/ports/1/subsystems/ 00:26:27.605 11:56:57 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@708 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:f820f793-c892-4aa4-a8a4-5ed3fda41d6c --hostid=f820f793-c892-4aa4-a8a4-5ed3fda41d6c -a 10.0.0.1 -t tcp -s 4420 00:26:27.605 00:26:27.605 Discovery Log Number of Records 2, Generation counter 2 00:26:27.605 =====Discovery Log Entry 0====== 00:26:27.605 trtype: tcp 00:26:27.605 adrfam: ipv4 00:26:27.606 subtype: current discovery subsystem 00:26:27.606 treq: not specified, sq flow control disable supported 00:26:27.606 portid: 1 00:26:27.606 trsvcid: 4420 00:26:27.606 subnqn: nqn.2014-08.org.nvmexpress.discovery 00:26:27.606 traddr: 10.0.0.1 00:26:27.606 eflags: none 00:26:27.606 sectype: none 00:26:27.606 =====Discovery Log Entry 1====== 00:26:27.606 trtype: tcp 00:26:27.606 adrfam: ipv4 00:26:27.606 subtype: nvme subsystem 00:26:27.606 treq: not specified, sq flow control disable supported 00:26:27.606 portid: 1 00:26:27.606 trsvcid: 4420 00:26:27.606 subnqn: nqn.2016-06.io.spdk:testnqn 00:26:27.606 traddr: 10.0.0.1 00:26:27.606 eflags: none 00:26:27.606 sectype: none 00:26:27.606 11:56:57 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@66 -- # rabort tcp IPv4 10.0.0.1 4420 nqn.2016-06.io.spdk:testnqn 00:26:27.606 11:56:57 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@17 -- # local trtype=tcp 00:26:27.606 11:56:57 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@18 -- # local adrfam=IPv4 00:26:27.606 11:56:57 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@19 -- # local traddr=10.0.0.1 00:26:27.606 11:56:57 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@20 -- # local trsvcid=4420 00:26:27.606 11:56:57 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@21 -- # local subnqn=nqn.2016-06.io.spdk:testnqn 00:26:27.606 11:56:57 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@23 -- # local qds qd 00:26:27.606 11:56:57 
nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@24 -- # local target r 00:26:27.606 11:56:57 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@26 -- # qds=(4 24 64) 00:26:27.606 11:56:57 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:26:27.606 11:56:57 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@29 -- # target=trtype:tcp 00:26:27.606 11:56:57 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:26:27.606 11:56:57 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4' 00:26:27.606 11:56:57 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:26:27.606 11:56:57 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.1' 00:26:27.606 11:56:57 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:26:27.606 11:56:57 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420' 00:26:27.606 11:56:57 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:26:27.606 11:56:57 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:26:27.606 11:56:57 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:26:27.606 11:56:57 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@34 -- # /home/vagrant/spdk_repo/spdk/build/examples/abort -q 4 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:26:30.898 Initializing NVMe Controllers 00:26:30.898 Attached to NVMe over Fabrics controller at 10.0.0.1:4420: nqn.2016-06.io.spdk:testnqn 00:26:30.898 Associating TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 with lcore 0 00:26:30.898 Initialization complete. Launching workers. 00:26:30.898 NS: TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 I/O completed: 36450, failed: 0 00:26:30.898 CTRLR: TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) abort submitted 36450, failed to submit 0 00:26:30.898 success 0, unsuccessful 36450, failed 0 00:26:30.898 11:57:00 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:26:30.898 11:57:00 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@34 -- # /home/vagrant/spdk_repo/spdk/build/examples/abort -q 24 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:26:34.190 Initializing NVMe Controllers 00:26:34.190 Attached to NVMe over Fabrics controller at 10.0.0.1:4420: nqn.2016-06.io.spdk:testnqn 00:26:34.190 Associating TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 with lcore 0 00:26:34.190 Initialization complete. Launching workers. 
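The kernel_target_abort case does not use nvmf_tgt at all: the mkdir/echo/ln steps above build a kernel NVMe/TCP target through configfs, listening on the initiator-side address 10.0.0.1, and the same abort workload is then run against it at queue depths 4, 24 and 64. Roughly, the configuration amounts to the following (paths taken from the log; the harness also sets a model string and passes hostnqn/hostid options to nvme discover that are omitted here):

  modprobe nvmet                                   # nvmet_tcp also ends up loaded; cleanup removes both
  nvmet=/sys/kernel/config/nvmet
  subsys=$nvmet/subsystems/nqn.2016-06.io.spdk:testnqn
  mkdir -p "$subsys/namespaces/1" "$nvmet/ports/1"
  echo 1 > "$subsys/attr_allow_any_host"
  echo /dev/nvme1n1 > "$subsys/namespaces/1/device_path"   # backing device picked by the blkid scan above
  echo 1 > "$subsys/namespaces/1/enable"
  echo 10.0.0.1 > "$nvmet/ports/1/addr_traddr"
  echo tcp      > "$nvmet/ports/1/addr_trtype"
  echo 4420     > "$nvmet/ports/1/addr_trsvcid"
  echo ipv4     > "$nvmet/ports/1/addr_adrfam"
  ln -s "$subsys" "$nvmet/ports/1/subsystems/"
  nvme discover -t tcp -a 10.0.0.1 -s 4420         # should list the discovery subsystem plus testnqn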
00:26:34.190 NS: TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 I/O completed: 79024, failed: 0 00:26:34.190 CTRLR: TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) abort submitted 34228, failed to submit 44796 00:26:34.190 success 0, unsuccessful 34228, failed 0 00:26:34.190 11:57:03 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:26:34.190 11:57:03 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@34 -- # /home/vagrant/spdk_repo/spdk/build/examples/abort -q 64 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:26:37.495 Initializing NVMe Controllers 00:26:37.495 Attached to NVMe over Fabrics controller at 10.0.0.1:4420: nqn.2016-06.io.spdk:testnqn 00:26:37.495 Associating TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 with lcore 0 00:26:37.495 Initialization complete. Launching workers. 00:26:37.495 NS: TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 I/O completed: 89954, failed: 0 00:26:37.495 CTRLR: TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) abort submitted 22470, failed to submit 67484 00:26:37.495 success 0, unsuccessful 22470, failed 0 00:26:37.495 11:57:07 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@67 -- # clean_kernel_target 00:26:37.495 11:57:07 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@712 -- # [[ -e /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn ]] 00:26:37.495 11:57:07 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@714 -- # echo 0 00:26:37.495 11:57:07 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@716 -- # rm -f /sys/kernel/config/nvmet/ports/1/subsystems/nqn.2016-06.io.spdk:testnqn 00:26:37.495 11:57:07 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@717 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 00:26:37.495 11:57:07 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@718 -- # rmdir /sys/kernel/config/nvmet/ports/1 00:26:37.495 11:57:07 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@719 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 00:26:37.495 11:57:07 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@721 -- # modules=(/sys/module/nvmet/holders/*) 00:26:37.495 11:57:07 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@723 -- # modprobe -r nvmet_tcp nvmet 00:26:37.495 11:57:07 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@726 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:26:37.754 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:26:39.132 0000:00:10.0 (1b36 0010): nvme -> uio_pci_generic 00:26:39.132 0000:00:11.0 (1b36 0010): nvme -> uio_pci_generic 00:26:39.132 ************************************ 00:26:39.132 END TEST kernel_target_abort 00:26:39.132 ************************************ 00:26:39.132 00:26:39.132 real 0m12.508s 00:26:39.132 user 0m6.184s 00:26:39.132 sys 0m3.659s 00:26:39.132 11:57:09 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1130 -- # xtrace_disable 00:26:39.132 11:57:09 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@10 -- # set +x 00:26:39.132 11:57:09 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@83 -- # trap - SIGINT SIGTERM EXIT 00:26:39.132 11:57:09 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@84 -- # nvmftestfini 00:26:39.132 
11:57:09 nvmf_abort_qd_sizes -- nvmf/common.sh@516 -- # nvmfcleanup 00:26:39.132 11:57:09 nvmf_abort_qd_sizes -- nvmf/common.sh@121 -- # sync 00:26:39.133 11:57:09 nvmf_abort_qd_sizes -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:26:39.133 11:57:09 nvmf_abort_qd_sizes -- nvmf/common.sh@124 -- # set +e 00:26:39.133 11:57:09 nvmf_abort_qd_sizes -- nvmf/common.sh@125 -- # for i in {1..20} 00:26:39.133 11:57:09 nvmf_abort_qd_sizes -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:26:39.133 rmmod nvme_tcp 00:26:39.133 rmmod nvme_fabrics 00:26:39.133 rmmod nvme_keyring 00:26:39.133 11:57:09 nvmf_abort_qd_sizes -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:26:39.133 Process with pid 101328 is not found 00:26:39.133 11:57:09 nvmf_abort_qd_sizes -- nvmf/common.sh@128 -- # set -e 00:26:39.133 11:57:09 nvmf_abort_qd_sizes -- nvmf/common.sh@129 -- # return 0 00:26:39.133 11:57:09 nvmf_abort_qd_sizes -- nvmf/common.sh@517 -- # '[' -n 101328 ']' 00:26:39.133 11:57:09 nvmf_abort_qd_sizes -- nvmf/common.sh@518 -- # killprocess 101328 00:26:39.133 11:57:09 nvmf_abort_qd_sizes -- common/autotest_common.sh@954 -- # '[' -z 101328 ']' 00:26:39.133 11:57:09 nvmf_abort_qd_sizes -- common/autotest_common.sh@958 -- # kill -0 101328 00:26:39.133 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 958: kill: (101328) - No such process 00:26:39.133 11:57:09 nvmf_abort_qd_sizes -- common/autotest_common.sh@981 -- # echo 'Process with pid 101328 is not found' 00:26:39.133 11:57:09 nvmf_abort_qd_sizes -- nvmf/common.sh@520 -- # '[' iso == iso ']' 00:26:39.133 11:57:09 nvmf_abort_qd_sizes -- nvmf/common.sh@521 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:26:39.701 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:26:39.701 Waiting for block devices as requested 00:26:39.701 0000:00:11.0 (1b36 0010): uio_pci_generic -> nvme 00:26:39.701 0000:00:10.0 (1b36 0010): uio_pci_generic -> nvme 00:26:39.701 11:57:09 nvmf_abort_qd_sizes -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:26:39.701 11:57:09 nvmf_abort_qd_sizes -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:26:39.701 11:57:09 nvmf_abort_qd_sizes -- nvmf/common.sh@297 -- # iptr 00:26:39.701 11:57:09 nvmf_abort_qd_sizes -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:26:39.701 11:57:09 nvmf_abort_qd_sizes -- nvmf/common.sh@791 -- # iptables-save 00:26:39.701 11:57:09 nvmf_abort_qd_sizes -- nvmf/common.sh@791 -- # iptables-restore 00:26:39.701 11:57:09 nvmf_abort_qd_sizes -- nvmf/common.sh@298 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:26:39.701 11:57:09 nvmf_abort_qd_sizes -- nvmf/common.sh@299 -- # nvmf_veth_fini 00:26:39.701 11:57:09 nvmf_abort_qd_sizes -- nvmf/common.sh@233 -- # ip link set nvmf_init_br nomaster 00:26:39.701 11:57:09 nvmf_abort_qd_sizes -- nvmf/common.sh@234 -- # ip link set nvmf_init_br2 nomaster 00:26:39.960 11:57:09 nvmf_abort_qd_sizes -- nvmf/common.sh@235 -- # ip link set nvmf_tgt_br nomaster 00:26:39.960 11:57:09 nvmf_abort_qd_sizes -- nvmf/common.sh@236 -- # ip link set nvmf_tgt_br2 nomaster 00:26:39.960 11:57:09 nvmf_abort_qd_sizes -- nvmf/common.sh@237 -- # ip link set nvmf_init_br down 00:26:39.960 11:57:09 nvmf_abort_qd_sizes -- nvmf/common.sh@238 -- # ip link set nvmf_init_br2 down 00:26:39.960 11:57:09 nvmf_abort_qd_sizes -- nvmf/common.sh@239 -- # ip link set nvmf_tgt_br down 00:26:39.960 11:57:09 nvmf_abort_qd_sizes -- nvmf/common.sh@240 -- # ip link set nvmf_tgt_br2 down 00:26:39.960 11:57:09 
nvmf_abort_qd_sizes -- nvmf/common.sh@241 -- # ip link delete nvmf_br type bridge 00:26:39.960 11:57:09 nvmf_abort_qd_sizes -- nvmf/common.sh@242 -- # ip link delete nvmf_init_if 00:26:39.960 11:57:09 nvmf_abort_qd_sizes -- nvmf/common.sh@243 -- # ip link delete nvmf_init_if2 00:26:39.960 11:57:09 nvmf_abort_qd_sizes -- nvmf/common.sh@244 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:26:39.960 11:57:09 nvmf_abort_qd_sizes -- nvmf/common.sh@245 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:26:39.960 11:57:10 nvmf_abort_qd_sizes -- nvmf/common.sh@246 -- # remove_spdk_ns 00:26:39.960 11:57:10 nvmf_abort_qd_sizes -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:26:39.960 11:57:10 nvmf_abort_qd_sizes -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:26:39.961 11:57:10 nvmf_abort_qd_sizes -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:26:39.961 11:57:10 nvmf_abort_qd_sizes -- nvmf/common.sh@300 -- # return 0 00:26:39.961 00:26:39.961 real 0m26.126s 00:26:39.961 user 0m48.011s 00:26:39.961 sys 0m7.118s 00:26:39.961 11:57:10 nvmf_abort_qd_sizes -- common/autotest_common.sh@1130 -- # xtrace_disable 00:26:39.961 ************************************ 00:26:39.961 END TEST nvmf_abort_qd_sizes 00:26:39.961 ************************************ 00:26:39.961 11:57:10 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 00:26:40.220 11:57:10 -- spdk/autotest.sh@292 -- # run_test keyring_file /home/vagrant/spdk_repo/spdk/test/keyring/file.sh 00:26:40.220 11:57:10 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:26:40.220 11:57:10 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:26:40.220 11:57:10 -- common/autotest_common.sh@10 -- # set +x 00:26:40.220 ************************************ 00:26:40.220 START TEST keyring_file 00:26:40.220 ************************************ 00:26:40.220 11:57:10 keyring_file -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/keyring/file.sh 00:26:40.220 * Looking for test storage... 
00:26:40.220 * Found test storage at /home/vagrant/spdk_repo/spdk/test/keyring 00:26:40.220 11:57:10 keyring_file -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:26:40.220 11:57:10 keyring_file -- common/autotest_common.sh@1693 -- # lcov --version 00:26:40.220 11:57:10 keyring_file -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:26:40.220 11:57:10 keyring_file -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:26:40.220 11:57:10 keyring_file -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:26:40.220 11:57:10 keyring_file -- scripts/common.sh@333 -- # local ver1 ver1_l 00:26:40.220 11:57:10 keyring_file -- scripts/common.sh@334 -- # local ver2 ver2_l 00:26:40.220 11:57:10 keyring_file -- scripts/common.sh@336 -- # IFS=.-: 00:26:40.220 11:57:10 keyring_file -- scripts/common.sh@336 -- # read -ra ver1 00:26:40.220 11:57:10 keyring_file -- scripts/common.sh@337 -- # IFS=.-: 00:26:40.220 11:57:10 keyring_file -- scripts/common.sh@337 -- # read -ra ver2 00:26:40.220 11:57:10 keyring_file -- scripts/common.sh@338 -- # local 'op=<' 00:26:40.220 11:57:10 keyring_file -- scripts/common.sh@340 -- # ver1_l=2 00:26:40.220 11:57:10 keyring_file -- scripts/common.sh@341 -- # ver2_l=1 00:26:40.220 11:57:10 keyring_file -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:26:40.220 11:57:10 keyring_file -- scripts/common.sh@344 -- # case "$op" in 00:26:40.220 11:57:10 keyring_file -- scripts/common.sh@345 -- # : 1 00:26:40.220 11:57:10 keyring_file -- scripts/common.sh@364 -- # (( v = 0 )) 00:26:40.220 11:57:10 keyring_file -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:26:40.220 11:57:10 keyring_file -- scripts/common.sh@365 -- # decimal 1 00:26:40.220 11:57:10 keyring_file -- scripts/common.sh@353 -- # local d=1 00:26:40.220 11:57:10 keyring_file -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:26:40.220 11:57:10 keyring_file -- scripts/common.sh@355 -- # echo 1 00:26:40.220 11:57:10 keyring_file -- scripts/common.sh@365 -- # ver1[v]=1 00:26:40.220 11:57:10 keyring_file -- scripts/common.sh@366 -- # decimal 2 00:26:40.220 11:57:10 keyring_file -- scripts/common.sh@353 -- # local d=2 00:26:40.220 11:57:10 keyring_file -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:26:40.220 11:57:10 keyring_file -- scripts/common.sh@355 -- # echo 2 00:26:40.220 11:57:10 keyring_file -- scripts/common.sh@366 -- # ver2[v]=2 00:26:40.220 11:57:10 keyring_file -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:26:40.220 11:57:10 keyring_file -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:26:40.220 11:57:10 keyring_file -- scripts/common.sh@368 -- # return 0 00:26:40.220 11:57:10 keyring_file -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:26:40.220 11:57:10 keyring_file -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:26:40.220 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:26:40.220 --rc genhtml_branch_coverage=1 00:26:40.220 --rc genhtml_function_coverage=1 00:26:40.220 --rc genhtml_legend=1 00:26:40.220 --rc geninfo_all_blocks=1 00:26:40.220 --rc geninfo_unexecuted_blocks=1 00:26:40.220 00:26:40.220 ' 00:26:40.220 11:57:10 keyring_file -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:26:40.220 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:26:40.220 --rc genhtml_branch_coverage=1 00:26:40.220 --rc genhtml_function_coverage=1 00:26:40.220 --rc genhtml_legend=1 00:26:40.220 --rc geninfo_all_blocks=1 00:26:40.220 --rc 
geninfo_unexecuted_blocks=1 00:26:40.220 00:26:40.220 ' 00:26:40.220 11:57:10 keyring_file -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:26:40.220 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:26:40.220 --rc genhtml_branch_coverage=1 00:26:40.220 --rc genhtml_function_coverage=1 00:26:40.220 --rc genhtml_legend=1 00:26:40.220 --rc geninfo_all_blocks=1 00:26:40.220 --rc geninfo_unexecuted_blocks=1 00:26:40.220 00:26:40.220 ' 00:26:40.220 11:57:10 keyring_file -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:26:40.220 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:26:40.220 --rc genhtml_branch_coverage=1 00:26:40.220 --rc genhtml_function_coverage=1 00:26:40.220 --rc genhtml_legend=1 00:26:40.220 --rc geninfo_all_blocks=1 00:26:40.220 --rc geninfo_unexecuted_blocks=1 00:26:40.220 00:26:40.220 ' 00:26:40.220 11:57:10 keyring_file -- keyring/file.sh@11 -- # source /home/vagrant/spdk_repo/spdk/test/keyring/common.sh 00:26:40.220 11:57:10 keyring_file -- keyring/common.sh@4 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:26:40.220 11:57:10 keyring_file -- nvmf/common.sh@7 -- # uname -s 00:26:40.220 11:57:10 keyring_file -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:26:40.220 11:57:10 keyring_file -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:26:40.220 11:57:10 keyring_file -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:26:40.220 11:57:10 keyring_file -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:26:40.220 11:57:10 keyring_file -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:26:40.220 11:57:10 keyring_file -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:26:40.220 11:57:10 keyring_file -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:26:40.220 11:57:10 keyring_file -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:26:40.220 11:57:10 keyring_file -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:26:40.220 11:57:10 keyring_file -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:26:40.220 11:57:10 keyring_file -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:f820f793-c892-4aa4-a8a4-5ed3fda41d6c 00:26:40.220 11:57:10 keyring_file -- nvmf/common.sh@18 -- # NVME_HOSTID=f820f793-c892-4aa4-a8a4-5ed3fda41d6c 00:26:40.220 11:57:10 keyring_file -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:26:40.220 11:57:10 keyring_file -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:26:40.220 11:57:10 keyring_file -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:26:40.220 11:57:10 keyring_file -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:26:40.220 11:57:10 keyring_file -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:26:40.220 11:57:10 keyring_file -- scripts/common.sh@15 -- # shopt -s extglob 00:26:40.220 11:57:10 keyring_file -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:26:40.220 11:57:10 keyring_file -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:26:40.220 11:57:10 keyring_file -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:26:40.220 11:57:10 keyring_file -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:40.220 11:57:10 keyring_file -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:40.220 11:57:10 keyring_file -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:40.220 11:57:10 keyring_file -- paths/export.sh@5 -- # export PATH 00:26:40.221 11:57:10 keyring_file -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:40.221 11:57:10 keyring_file -- nvmf/common.sh@51 -- # : 0 00:26:40.221 11:57:10 keyring_file -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:26:40.221 11:57:10 keyring_file -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:26:40.221 11:57:10 keyring_file -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:26:40.221 11:57:10 keyring_file -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:26:40.221 11:57:10 keyring_file -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:26:40.221 11:57:10 keyring_file -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:26:40.221 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:26:40.221 11:57:10 keyring_file -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:26:40.221 11:57:10 keyring_file -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:26:40.221 11:57:10 keyring_file -- nvmf/common.sh@55 -- # have_pci_nics=0 00:26:40.221 11:57:10 keyring_file -- keyring/common.sh@6 -- # bperfsock=/var/tmp/bperf.sock 00:26:40.480 11:57:10 keyring_file -- keyring/file.sh@13 -- # subnqn=nqn.2016-06.io.spdk:cnode0 00:26:40.480 11:57:10 keyring_file -- keyring/file.sh@14 -- # hostnqn=nqn.2016-06.io.spdk:host0 00:26:40.480 11:57:10 keyring_file -- keyring/file.sh@15 -- # key0=00112233445566778899aabbccddeeff 00:26:40.480 11:57:10 keyring_file -- keyring/file.sh@16 -- # key1=112233445566778899aabbccddeeff00 00:26:40.480 11:57:10 keyring_file -- keyring/file.sh@24 -- # trap cleanup EXIT 00:26:40.480 11:57:10 keyring_file -- keyring/file.sh@26 -- # prep_key key0 00112233445566778899aabbccddeeff 0 00:26:40.480 11:57:10 keyring_file -- keyring/common.sh@15 -- # local name key digest path 00:26:40.480 11:57:10 
keyring_file -- keyring/common.sh@17 -- # name=key0 00:26:40.480 11:57:10 keyring_file -- keyring/common.sh@17 -- # key=00112233445566778899aabbccddeeff 00:26:40.480 11:57:10 keyring_file -- keyring/common.sh@17 -- # digest=0 00:26:40.480 11:57:10 keyring_file -- keyring/common.sh@18 -- # mktemp 00:26:40.480 11:57:10 keyring_file -- keyring/common.sh@18 -- # path=/tmp/tmp.UjxKqgZsHK 00:26:40.480 11:57:10 keyring_file -- keyring/common.sh@20 -- # format_interchange_psk 00112233445566778899aabbccddeeff 0 00:26:40.480 11:57:10 keyring_file -- nvmf/common.sh@743 -- # format_key NVMeTLSkey-1 00112233445566778899aabbccddeeff 0 00:26:40.480 11:57:10 keyring_file -- nvmf/common.sh@730 -- # local prefix key digest 00:26:40.480 11:57:10 keyring_file -- nvmf/common.sh@732 -- # prefix=NVMeTLSkey-1 00:26:40.480 11:57:10 keyring_file -- nvmf/common.sh@732 -- # key=00112233445566778899aabbccddeeff 00:26:40.480 11:57:10 keyring_file -- nvmf/common.sh@732 -- # digest=0 00:26:40.480 11:57:10 keyring_file -- nvmf/common.sh@733 -- # python - 00:26:40.480 11:57:10 keyring_file -- keyring/common.sh@21 -- # chmod 0600 /tmp/tmp.UjxKqgZsHK 00:26:40.480 11:57:10 keyring_file -- keyring/common.sh@23 -- # echo /tmp/tmp.UjxKqgZsHK 00:26:40.480 11:57:10 keyring_file -- keyring/file.sh@26 -- # key0path=/tmp/tmp.UjxKqgZsHK 00:26:40.480 11:57:10 keyring_file -- keyring/file.sh@27 -- # prep_key key1 112233445566778899aabbccddeeff00 0 00:26:40.480 11:57:10 keyring_file -- keyring/common.sh@15 -- # local name key digest path 00:26:40.480 11:57:10 keyring_file -- keyring/common.sh@17 -- # name=key1 00:26:40.480 11:57:10 keyring_file -- keyring/common.sh@17 -- # key=112233445566778899aabbccddeeff00 00:26:40.480 11:57:10 keyring_file -- keyring/common.sh@17 -- # digest=0 00:26:40.480 11:57:10 keyring_file -- keyring/common.sh@18 -- # mktemp 00:26:40.480 11:57:10 keyring_file -- keyring/common.sh@18 -- # path=/tmp/tmp.gXfjc7vMgj 00:26:40.480 11:57:10 keyring_file -- keyring/common.sh@20 -- # format_interchange_psk 112233445566778899aabbccddeeff00 0 00:26:40.480 11:57:10 keyring_file -- nvmf/common.sh@743 -- # format_key NVMeTLSkey-1 112233445566778899aabbccddeeff00 0 00:26:40.480 11:57:10 keyring_file -- nvmf/common.sh@730 -- # local prefix key digest 00:26:40.480 11:57:10 keyring_file -- nvmf/common.sh@732 -- # prefix=NVMeTLSkey-1 00:26:40.480 11:57:10 keyring_file -- nvmf/common.sh@732 -- # key=112233445566778899aabbccddeeff00 00:26:40.480 11:57:10 keyring_file -- nvmf/common.sh@732 -- # digest=0 00:26:40.480 11:57:10 keyring_file -- nvmf/common.sh@733 -- # python - 00:26:40.480 11:57:10 keyring_file -- keyring/common.sh@21 -- # chmod 0600 /tmp/tmp.gXfjc7vMgj 00:26:40.480 11:57:10 keyring_file -- keyring/common.sh@23 -- # echo /tmp/tmp.gXfjc7vMgj 00:26:40.480 11:57:10 keyring_file -- keyring/file.sh@27 -- # key1path=/tmp/tmp.gXfjc7vMgj 00:26:40.480 11:57:10 keyring_file -- keyring/file.sh@30 -- # tgtpid=102230 00:26:40.480 11:57:10 keyring_file -- keyring/file.sh@29 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:26:40.480 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
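Editor's note: the two prep_key calls above wrap each raw hex key in the NVMe TLS PSK interchange format before storing it with 0600 permissions. A standalone sketch of that transformation follows; it is an approximation of format_interchange_psk/format_key (whose bodies are not shown in this trace), with the digest byte and CRC handling following the TP 8011 interchange layout rather than copied from the helper.

key=00112233445566778899aabbccddeeff   # same hex value as key0 above
digest=0                               # 0 here means the configured PSK is used as-is
path=$(mktemp)
python3 - "$key" "$digest" > "$path" <<'EOF'
import base64, sys, zlib
raw = bytes.fromhex(sys.argv[1])
crc = zlib.crc32(raw).to_bytes(4, "little")          # CRC-32 of the key bytes, little-endian
print("NVMeTLSkey-1:%02x:%s:" % (int(sys.argv[2]), base64.b64encode(raw + crc).decode()))
EOF
chmod 0600 "$path"                     # keyring_file_add_key rejects looser modes (see the 0660 failure later in this run)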
00:26:40.480 11:57:10 keyring_file -- keyring/file.sh@32 -- # waitforlisten 102230 00:26:40.480 11:57:10 keyring_file -- common/autotest_common.sh@835 -- # '[' -z 102230 ']' 00:26:40.480 11:57:10 keyring_file -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:26:40.480 11:57:10 keyring_file -- common/autotest_common.sh@840 -- # local max_retries=100 00:26:40.480 11:57:10 keyring_file -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:26:40.480 11:57:10 keyring_file -- common/autotest_common.sh@844 -- # xtrace_disable 00:26:40.480 11:57:10 keyring_file -- common/autotest_common.sh@10 -- # set +x 00:26:40.480 [2024-11-28 11:57:10.545788] Starting SPDK v25.01-pre git sha1 35cd3e84d / DPDK 24.11.0-rc4 initialization... 00:26:40.480 [2024-11-28 11:57:10.545899] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid102230 ] 00:26:40.739 [2024-11-28 11:57:10.673002] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.11.0-rc4 is used. There is no support for it in SPDK. Enabled only for validation. 00:26:40.739 [2024-11-28 11:57:10.698652] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:26:40.740 [2024-11-28 11:57:10.740137] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:26:40.740 [2024-11-28 11:57:10.824778] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:26:40.998 11:57:11 keyring_file -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:26:40.998 11:57:11 keyring_file -- common/autotest_common.sh@868 -- # return 0 00:26:40.998 11:57:11 keyring_file -- keyring/file.sh@33 -- # rpc_cmd 00:26:40.998 11:57:11 keyring_file -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:40.998 11:57:11 keyring_file -- common/autotest_common.sh@10 -- # set +x 00:26:40.998 [2024-11-28 11:57:11.064601] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:26:40.998 null0 00:26:40.998 [2024-11-28 11:57:11.096576] tcp.c:1031:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:26:40.998 [2024-11-28 11:57:11.096786] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4420 *** 00:26:40.998 11:57:11 keyring_file -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:40.998 11:57:11 keyring_file -- keyring/file.sh@44 -- # NOT rpc_cmd nvmf_subsystem_add_listener -t tcp -a 127.0.0.1 -s 4420 nqn.2016-06.io.spdk:cnode0 00:26:40.998 11:57:11 keyring_file -- common/autotest_common.sh@652 -- # local es=0 00:26:40.998 11:57:11 keyring_file -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd nvmf_subsystem_add_listener -t tcp -a 127.0.0.1 -s 4420 nqn.2016-06.io.spdk:cnode0 00:26:40.998 11:57:11 keyring_file -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:26:40.998 11:57:11 keyring_file -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:26:40.998 11:57:11 keyring_file -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:26:41.256 11:57:11 keyring_file -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:26:41.256 11:57:11 keyring_file -- common/autotest_common.sh@655 -- # rpc_cmd nvmf_subsystem_add_listener -t tcp -a 127.0.0.1 -s 4420 nqn.2016-06.io.spdk:cnode0 00:26:41.256 11:57:11 keyring_file -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:26:41.256 11:57:11 keyring_file -- common/autotest_common.sh@10 -- # set +x 00:26:41.256 [2024-11-28 11:57:11.128564] nvmf_rpc.c: 762:nvmf_rpc_listen_paused: *ERROR*: Listener already exists 00:26:41.256 request: 00:26:41.256 { 00:26:41.256 "nqn": "nqn.2016-06.io.spdk:cnode0", 00:26:41.256 "secure_channel": false, 00:26:41.256 "listen_address": { 00:26:41.256 "trtype": "tcp", 00:26:41.256 "traddr": "127.0.0.1", 00:26:41.256 "trsvcid": "4420" 00:26:41.256 }, 00:26:41.256 "method": "nvmf_subsystem_add_listener", 00:26:41.256 "req_id": 1 00:26:41.256 } 00:26:41.256 Got JSON-RPC error response 00:26:41.256 response: 00:26:41.256 { 00:26:41.256 "code": -32602, 00:26:41.256 "message": "Invalid parameters" 00:26:41.256 } 00:26:41.256 11:57:11 keyring_file -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:26:41.256 11:57:11 keyring_file -- common/autotest_common.sh@655 -- # es=1 00:26:41.256 11:57:11 keyring_file -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:26:41.256 11:57:11 keyring_file -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:26:41.256 11:57:11 keyring_file -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:26:41.256 11:57:11 keyring_file -- keyring/file.sh@47 -- # bperfpid=102241 00:26:41.256 11:57:11 keyring_file -- keyring/file.sh@46 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -q 128 -o 4k -w randrw -M 50 -t 1 -m 2 -r /var/tmp/bperf.sock -z 00:26:41.256 11:57:11 keyring_file -- keyring/file.sh@49 -- # waitforlisten 102241 /var/tmp/bperf.sock 00:26:41.256 11:57:11 keyring_file -- common/autotest_common.sh@835 -- # '[' -z 102241 ']' 00:26:41.256 11:57:11 keyring_file -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bperf.sock 00:26:41.256 11:57:11 keyring_file -- common/autotest_common.sh@840 -- # local max_retries=100 00:26:41.256 11:57:11 keyring_file -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:26:41.256 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:26:41.256 11:57:11 keyring_file -- common/autotest_common.sh@844 -- # xtrace_disable 00:26:41.256 11:57:11 keyring_file -- common/autotest_common.sh@10 -- # set +x 00:26:41.256 [2024-11-28 11:57:11.202120] Starting SPDK v25.01-pre git sha1 35cd3e84d / DPDK 24.11.0-rc4 initialization... 00:26:41.256 [2024-11-28 11:57:11.202472] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid102241 ] 00:26:41.256 [2024-11-28 11:57:11.328376] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.11.0-rc4 is used. There is no support for it in SPDK. Enabled only for validation. 
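Editor's note: the bdevperf instance launched above runs with -z, so it idles until it is configured and kicked over its own RPC socket (/var/tmp/bperf.sock). Condensed into a sketch, the flow the test follows is the one visible in this trace:

# 1. start bdevperf in wait-for-RPC mode on a private socket
./build/examples/bdevperf -q 128 -o 4k -w randrw -M 50 -t 1 -m 2 -r /var/tmp/bperf.sock -z &
# 2. configure it over that socket: register the PSK file, then attach the NVMe/TCP controller with --psk
./scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key0 /tmp/tmp.UjxKqgZsHK
./scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp \
    -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0
# 3. run the workload that was declared on the bdevperf command line
./examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests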
00:26:41.256 [2024-11-28 11:57:11.360164] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:26:41.514 [2024-11-28 11:57:11.399741] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:26:41.514 [2024-11-28 11:57:11.457677] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:26:42.080 11:57:12 keyring_file -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:26:42.080 11:57:12 keyring_file -- common/autotest_common.sh@868 -- # return 0 00:26:42.080 11:57:12 keyring_file -- keyring/file.sh@50 -- # bperf_cmd keyring_file_add_key key0 /tmp/tmp.UjxKqgZsHK 00:26:42.080 11:57:12 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key0 /tmp/tmp.UjxKqgZsHK 00:26:42.339 11:57:12 keyring_file -- keyring/file.sh@51 -- # bperf_cmd keyring_file_add_key key1 /tmp/tmp.gXfjc7vMgj 00:26:42.339 11:57:12 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key1 /tmp/tmp.gXfjc7vMgj 00:26:42.597 11:57:12 keyring_file -- keyring/file.sh@52 -- # get_key key0 00:26:42.597 11:57:12 keyring_file -- keyring/file.sh@52 -- # jq -r .path 00:26:42.597 11:57:12 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:26:42.597 11:57:12 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:26:42.597 11:57:12 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:26:42.855 11:57:12 keyring_file -- keyring/file.sh@52 -- # [[ /tmp/tmp.UjxKqgZsHK == \/\t\m\p\/\t\m\p\.\U\j\x\K\q\g\Z\s\H\K ]] 00:26:42.855 11:57:12 keyring_file -- keyring/file.sh@53 -- # get_key key1 00:26:42.855 11:57:12 keyring_file -- keyring/file.sh@53 -- # jq -r .path 00:26:42.855 11:57:12 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:26:42.855 11:57:12 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:26:42.855 11:57:12 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key1")' 00:26:43.112 11:57:13 keyring_file -- keyring/file.sh@53 -- # [[ /tmp/tmp.gXfjc7vMgj == \/\t\m\p\/\t\m\p\.\g\X\f\j\c\7\v\M\g\j ]] 00:26:43.112 11:57:13 keyring_file -- keyring/file.sh@54 -- # get_refcnt key0 00:26:43.112 11:57:13 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:26:43.112 11:57:13 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:26:43.112 11:57:13 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:26:43.112 11:57:13 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:26:43.112 11:57:13 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:26:43.370 11:57:13 keyring_file -- keyring/file.sh@54 -- # (( 1 == 1 )) 00:26:43.370 11:57:13 keyring_file -- keyring/file.sh@55 -- # get_refcnt key1 00:26:43.370 11:57:13 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:26:43.370 11:57:13 keyring_file -- keyring/common.sh@12 -- # get_key key1 00:26:43.370 11:57:13 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:26:43.370 11:57:13 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:26:43.370 11:57:13 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key1")' 
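Editor's note: the key checks above all go through small wrappers from keyring/common.sh whose expansions are visible in the trace. A hedged reconstruction (the actual function bodies are not part of this log, so treat this as a sketch):

bperf_cmd()  { /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock "$@"; }
get_key()    { bperf_cmd keyring_get_keys | jq ".[] | select(.name == \"$1\")"; }
get_refcnt() { get_key "$1" | jq -r .refcnt; }
# After bdev_nvme_attach_controller --psk key0 succeeds below, get_refcnt key0 reports 2:
# the attached controller holds a reference on the key in addition to the keyring entry itself,
# and the count drops back to 1 once the controller is detached.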
00:26:43.628 11:57:13 keyring_file -- keyring/file.sh@55 -- # (( 1 == 1 )) 00:26:43.628 11:57:13 keyring_file -- keyring/file.sh@58 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:26:43.628 11:57:13 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:26:43.886 [2024-11-28 11:57:13.773384] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:26:43.886 nvme0n1 00:26:43.886 11:57:13 keyring_file -- keyring/file.sh@60 -- # get_refcnt key0 00:26:43.886 11:57:13 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:26:43.886 11:57:13 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:26:43.886 11:57:13 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:26:43.886 11:57:13 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:26:43.886 11:57:13 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:26:44.143 11:57:14 keyring_file -- keyring/file.sh@60 -- # (( 2 == 2 )) 00:26:44.143 11:57:14 keyring_file -- keyring/file.sh@61 -- # get_refcnt key1 00:26:44.143 11:57:14 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:26:44.143 11:57:14 keyring_file -- keyring/common.sh@12 -- # get_key key1 00:26:44.143 11:57:14 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:26:44.143 11:57:14 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:26:44.143 11:57:14 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key1")' 00:26:44.401 11:57:14 keyring_file -- keyring/file.sh@61 -- # (( 1 == 1 )) 00:26:44.401 11:57:14 keyring_file -- keyring/file.sh@63 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:26:44.401 Running I/O for 1 seconds... 
00:26:45.776 13430.00 IOPS, 52.46 MiB/s 00:26:45.776 Latency(us) 00:26:45.776 [2024-11-28T11:57:15.902Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:26:45.776 Job: nvme0n1 (Core Mask 0x2, workload: randrw, percentage: 50, depth: 128, IO size: 4096) 00:26:45.776 nvme0n1 : 1.01 13481.35 52.66 0.00 0.00 9472.34 3425.75 14775.39 00:26:45.776 [2024-11-28T11:57:15.902Z] =================================================================================================================== 00:26:45.776 [2024-11-28T11:57:15.902Z] Total : 13481.35 52.66 0.00 0.00 9472.34 3425.75 14775.39 00:26:45.776 { 00:26:45.776 "results": [ 00:26:45.776 { 00:26:45.776 "job": "nvme0n1", 00:26:45.776 "core_mask": "0x2", 00:26:45.776 "workload": "randrw", 00:26:45.776 "percentage": 50, 00:26:45.776 "status": "finished", 00:26:45.776 "queue_depth": 128, 00:26:45.776 "io_size": 4096, 00:26:45.776 "runtime": 1.005686, 00:26:45.776 "iops": 13481.345071921058, 00:26:45.776 "mibps": 52.66150418719163, 00:26:45.776 "io_failed": 0, 00:26:45.776 "io_timeout": 0, 00:26:45.776 "avg_latency_us": 9472.338774557793, 00:26:45.776 "min_latency_us": 3425.7454545454543, 00:26:45.776 "max_latency_us": 14775.389090909091 00:26:45.776 } 00:26:45.776 ], 00:26:45.776 "core_count": 1 00:26:45.776 } 00:26:45.776 11:57:15 keyring_file -- keyring/file.sh@65 -- # bperf_cmd bdev_nvme_detach_controller nvme0 00:26:45.776 11:57:15 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_detach_controller nvme0 00:26:45.776 11:57:15 keyring_file -- keyring/file.sh@66 -- # get_refcnt key0 00:26:45.776 11:57:15 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:26:45.776 11:57:15 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:26:45.776 11:57:15 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:26:45.776 11:57:15 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:26:45.776 11:57:15 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:26:46.037 11:57:16 keyring_file -- keyring/file.sh@66 -- # (( 1 == 1 )) 00:26:46.037 11:57:16 keyring_file -- keyring/file.sh@67 -- # get_refcnt key1 00:26:46.037 11:57:16 keyring_file -- keyring/common.sh@12 -- # get_key key1 00:26:46.037 11:57:16 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:26:46.037 11:57:16 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:26:46.037 11:57:16 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key1")' 00:26:46.037 11:57:16 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:26:46.296 11:57:16 keyring_file -- keyring/file.sh@67 -- # (( 1 == 1 )) 00:26:46.296 11:57:16 keyring_file -- keyring/file.sh@70 -- # NOT bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key1 00:26:46.296 11:57:16 keyring_file -- common/autotest_common.sh@652 -- # local es=0 00:26:46.296 11:57:16 keyring_file -- common/autotest_common.sh@654 -- # valid_exec_arg bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key1 00:26:46.296 11:57:16 keyring_file -- common/autotest_common.sh@640 -- # local arg=bperf_cmd 00:26:46.296 11:57:16 keyring_file -- 
common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:26:46.296 11:57:16 keyring_file -- common/autotest_common.sh@644 -- # type -t bperf_cmd 00:26:46.296 11:57:16 keyring_file -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:26:46.296 11:57:16 keyring_file -- common/autotest_common.sh@655 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key1 00:26:46.296 11:57:16 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key1 00:26:46.555 [2024-11-28 11:57:16.470977] /home/vagrant/spdk_repo/spdk/include/spdk_internal/nvme_tcp.h: 421:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 107: Transport endpoint is not connected 00:26:46.555 [2024-11-28 11:57:16.471884] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x617c50 (107): Transport endpoint is not connected 00:26:46.555 [2024-11-28 11:57:16.472872] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x617c50 (9): Bad file descriptor 00:26:46.555 [2024-11-28 11:57:16.473869] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 0] Ctrlr is in error state 00:26:46.555 [2024-11-28 11:57:16.473885] nvme.c: 709:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 127.0.0.1 00:26:46.555 [2024-11-28 11:57:16.473894] nvme.c: 895:nvme_dummy_attach_fail_cb: *ERROR*: Failed to attach nvme ctrlr: trtype=TCP adrfam=IPv4 traddr=127.0.0.1 trsvcid=4420 subnqn=nqn.2016-06.io.spdk:cnode0, Operation not permitted 00:26:46.555 [2024-11-28 11:57:16.473904] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 0] in failed state. 
00:26:46.555 request: 00:26:46.555 { 00:26:46.555 "name": "nvme0", 00:26:46.555 "trtype": "tcp", 00:26:46.555 "traddr": "127.0.0.1", 00:26:46.555 "adrfam": "ipv4", 00:26:46.555 "trsvcid": "4420", 00:26:46.555 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:26:46.555 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:26:46.555 "prchk_reftag": false, 00:26:46.555 "prchk_guard": false, 00:26:46.555 "hdgst": false, 00:26:46.555 "ddgst": false, 00:26:46.555 "psk": "key1", 00:26:46.555 "allow_unrecognized_csi": false, 00:26:46.555 "method": "bdev_nvme_attach_controller", 00:26:46.555 "req_id": 1 00:26:46.555 } 00:26:46.555 Got JSON-RPC error response 00:26:46.555 response: 00:26:46.555 { 00:26:46.555 "code": -5, 00:26:46.555 "message": "Input/output error" 00:26:46.555 } 00:26:46.555 11:57:16 keyring_file -- common/autotest_common.sh@655 -- # es=1 00:26:46.555 11:57:16 keyring_file -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:26:46.555 11:57:16 keyring_file -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:26:46.555 11:57:16 keyring_file -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:26:46.555 11:57:16 keyring_file -- keyring/file.sh@72 -- # get_refcnt key0 00:26:46.555 11:57:16 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:26:46.555 11:57:16 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:26:46.555 11:57:16 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:26:46.555 11:57:16 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:26:46.555 11:57:16 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:26:46.814 11:57:16 keyring_file -- keyring/file.sh@72 -- # (( 1 == 1 )) 00:26:46.814 11:57:16 keyring_file -- keyring/file.sh@73 -- # get_refcnt key1 00:26:46.814 11:57:16 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:26:46.814 11:57:16 keyring_file -- keyring/common.sh@12 -- # get_key key1 00:26:46.814 11:57:16 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:26:46.814 11:57:16 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:26:46.814 11:57:16 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key1")' 00:26:47.073 11:57:17 keyring_file -- keyring/file.sh@73 -- # (( 1 == 1 )) 00:26:47.073 11:57:17 keyring_file -- keyring/file.sh@76 -- # bperf_cmd keyring_file_remove_key key0 00:26:47.073 11:57:17 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_remove_key key0 00:26:47.331 11:57:17 keyring_file -- keyring/file.sh@77 -- # bperf_cmd keyring_file_remove_key key1 00:26:47.331 11:57:17 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_remove_key key1 00:26:47.590 11:57:17 keyring_file -- keyring/file.sh@78 -- # bperf_cmd keyring_get_keys 00:26:47.590 11:57:17 keyring_file -- keyring/file.sh@78 -- # jq length 00:26:47.590 11:57:17 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:26:47.849 11:57:17 keyring_file -- keyring/file.sh@78 -- # (( 0 == 0 )) 00:26:47.849 11:57:17 keyring_file -- keyring/file.sh@81 -- # chmod 0660 /tmp/tmp.UjxKqgZsHK 00:26:47.849 11:57:17 keyring_file -- keyring/file.sh@82 -- # NOT bperf_cmd keyring_file_add_key key0 /tmp/tmp.UjxKqgZsHK 00:26:47.849 11:57:17 keyring_file -- 
common/autotest_common.sh@652 -- # local es=0 00:26:47.849 11:57:17 keyring_file -- common/autotest_common.sh@654 -- # valid_exec_arg bperf_cmd keyring_file_add_key key0 /tmp/tmp.UjxKqgZsHK 00:26:47.849 11:57:17 keyring_file -- common/autotest_common.sh@640 -- # local arg=bperf_cmd 00:26:47.849 11:57:17 keyring_file -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:26:47.849 11:57:17 keyring_file -- common/autotest_common.sh@644 -- # type -t bperf_cmd 00:26:47.849 11:57:17 keyring_file -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:26:47.849 11:57:17 keyring_file -- common/autotest_common.sh@655 -- # bperf_cmd keyring_file_add_key key0 /tmp/tmp.UjxKqgZsHK 00:26:47.849 11:57:17 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key0 /tmp/tmp.UjxKqgZsHK 00:26:48.108 [2024-11-28 11:57:17.984364] keyring.c: 36:keyring_file_check_path: *ERROR*: Invalid permissions for key file '/tmp/tmp.UjxKqgZsHK': 0100660 00:26:48.108 [2024-11-28 11:57:17.984581] keyring.c: 126:spdk_keyring_add_key: *ERROR*: Failed to add key 'key0' to the keyring 00:26:48.108 request: 00:26:48.108 { 00:26:48.108 "name": "key0", 00:26:48.108 "path": "/tmp/tmp.UjxKqgZsHK", 00:26:48.108 "method": "keyring_file_add_key", 00:26:48.108 "req_id": 1 00:26:48.108 } 00:26:48.108 Got JSON-RPC error response 00:26:48.108 response: 00:26:48.108 { 00:26:48.108 "code": -1, 00:26:48.108 "message": "Operation not permitted" 00:26:48.108 } 00:26:48.108 11:57:18 keyring_file -- common/autotest_common.sh@655 -- # es=1 00:26:48.108 11:57:18 keyring_file -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:26:48.108 11:57:18 keyring_file -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:26:48.108 11:57:18 keyring_file -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:26:48.108 11:57:18 keyring_file -- keyring/file.sh@85 -- # chmod 0600 /tmp/tmp.UjxKqgZsHK 00:26:48.108 11:57:18 keyring_file -- keyring/file.sh@86 -- # bperf_cmd keyring_file_add_key key0 /tmp/tmp.UjxKqgZsHK 00:26:48.108 11:57:18 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key0 /tmp/tmp.UjxKqgZsHK 00:26:48.368 11:57:18 keyring_file -- keyring/file.sh@87 -- # rm -f /tmp/tmp.UjxKqgZsHK 00:26:48.368 11:57:18 keyring_file -- keyring/file.sh@89 -- # get_refcnt key0 00:26:48.368 11:57:18 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:26:48.368 11:57:18 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:26:48.368 11:57:18 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:26:48.368 11:57:18 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:26:48.368 11:57:18 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:26:48.627 11:57:18 keyring_file -- keyring/file.sh@89 -- # (( 1 == 1 )) 00:26:48.627 11:57:18 keyring_file -- keyring/file.sh@91 -- # NOT bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:26:48.627 11:57:18 keyring_file -- common/autotest_common.sh@652 -- # local es=0 00:26:48.627 11:57:18 keyring_file -- common/autotest_common.sh@654 -- # valid_exec_arg bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:26:48.627 11:57:18 
keyring_file -- common/autotest_common.sh@640 -- # local arg=bperf_cmd 00:26:48.627 11:57:18 keyring_file -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:26:48.627 11:57:18 keyring_file -- common/autotest_common.sh@644 -- # type -t bperf_cmd 00:26:48.627 11:57:18 keyring_file -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:26:48.627 11:57:18 keyring_file -- common/autotest_common.sh@655 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:26:48.627 11:57:18 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:26:48.627 [2024-11-28 11:57:18.729727] keyring.c: 31:keyring_file_check_path: *ERROR*: Could not stat key file '/tmp/tmp.UjxKqgZsHK': No such file or directory 00:26:48.627 [2024-11-28 11:57:18.729767] nvme_tcp.c:2498:nvme_tcp_generate_tls_credentials: *ERROR*: Failed to obtain key 'key0': No such file or directory 00:26:48.627 [2024-11-28 11:57:18.729786] nvme.c: 682:nvme_ctrlr_probe: *ERROR*: Failed to construct NVMe controller for SSD: 127.0.0.1 00:26:48.627 [2024-11-28 11:57:18.729795] nvme.c: 895:nvme_dummy_attach_fail_cb: *ERROR*: Failed to attach nvme ctrlr: trtype=TCP adrfam=IPv4 traddr=127.0.0.1 trsvcid=4420 subnqn=nqn.2016-06.io.spdk:cnode0, No such device 00:26:48.627 [2024-11-28 11:57:18.729803] nvme.c: 842:nvme_probe_internal: *ERROR*: NVMe ctrlr scan failed 00:26:48.628 [2024-11-28 11:57:18.729810] bdev_nvme.c:6769:spdk_bdev_nvme_create: *ERROR*: No controller was found with provided trid (traddr: 127.0.0.1) 00:26:48.628 request: 00:26:48.628 { 00:26:48.628 "name": "nvme0", 00:26:48.628 "trtype": "tcp", 00:26:48.628 "traddr": "127.0.0.1", 00:26:48.628 "adrfam": "ipv4", 00:26:48.628 "trsvcid": "4420", 00:26:48.628 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:26:48.628 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:26:48.628 "prchk_reftag": false, 00:26:48.628 "prchk_guard": false, 00:26:48.628 "hdgst": false, 00:26:48.628 "ddgst": false, 00:26:48.628 "psk": "key0", 00:26:48.628 "allow_unrecognized_csi": false, 00:26:48.628 "method": "bdev_nvme_attach_controller", 00:26:48.628 "req_id": 1 00:26:48.628 } 00:26:48.628 Got JSON-RPC error response 00:26:48.628 response: 00:26:48.628 { 00:26:48.628 "code": -19, 00:26:48.628 "message": "No such device" 00:26:48.628 } 00:26:48.628 11:57:18 keyring_file -- common/autotest_common.sh@655 -- # es=1 00:26:48.628 11:57:18 keyring_file -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:26:48.628 11:57:18 keyring_file -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:26:48.628 11:57:18 keyring_file -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:26:48.628 11:57:18 keyring_file -- keyring/file.sh@93 -- # bperf_cmd keyring_file_remove_key key0 00:26:48.628 11:57:18 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_remove_key key0 00:26:48.887 11:57:18 keyring_file -- keyring/file.sh@96 -- # prep_key key0 00112233445566778899aabbccddeeff 0 00:26:48.887 11:57:18 keyring_file -- keyring/common.sh@15 -- # local name key digest path 00:26:48.887 11:57:18 keyring_file -- keyring/common.sh@17 -- # name=key0 00:26:48.887 11:57:18 keyring_file -- keyring/common.sh@17 -- # key=00112233445566778899aabbccddeeff 00:26:48.887 
11:57:18 keyring_file -- keyring/common.sh@17 -- # digest=0 00:26:48.887 11:57:18 keyring_file -- keyring/common.sh@18 -- # mktemp 00:26:48.887 11:57:18 keyring_file -- keyring/common.sh@18 -- # path=/tmp/tmp.YS1jM6zIjw 00:26:48.887 11:57:18 keyring_file -- keyring/common.sh@20 -- # format_interchange_psk 00112233445566778899aabbccddeeff 0 00:26:48.887 11:57:18 keyring_file -- nvmf/common.sh@743 -- # format_key NVMeTLSkey-1 00112233445566778899aabbccddeeff 0 00:26:48.887 11:57:18 keyring_file -- nvmf/common.sh@730 -- # local prefix key digest 00:26:48.887 11:57:18 keyring_file -- nvmf/common.sh@732 -- # prefix=NVMeTLSkey-1 00:26:48.887 11:57:18 keyring_file -- nvmf/common.sh@732 -- # key=00112233445566778899aabbccddeeff 00:26:48.887 11:57:18 keyring_file -- nvmf/common.sh@732 -- # digest=0 00:26:48.887 11:57:18 keyring_file -- nvmf/common.sh@733 -- # python - 00:26:49.147 11:57:19 keyring_file -- keyring/common.sh@21 -- # chmod 0600 /tmp/tmp.YS1jM6zIjw 00:26:49.147 11:57:19 keyring_file -- keyring/common.sh@23 -- # echo /tmp/tmp.YS1jM6zIjw 00:26:49.147 11:57:19 keyring_file -- keyring/file.sh@96 -- # key0path=/tmp/tmp.YS1jM6zIjw 00:26:49.147 11:57:19 keyring_file -- keyring/file.sh@97 -- # bperf_cmd keyring_file_add_key key0 /tmp/tmp.YS1jM6zIjw 00:26:49.147 11:57:19 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key0 /tmp/tmp.YS1jM6zIjw 00:26:49.406 11:57:19 keyring_file -- keyring/file.sh@98 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:26:49.406 11:57:19 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:26:49.665 nvme0n1 00:26:49.665 11:57:19 keyring_file -- keyring/file.sh@100 -- # get_refcnt key0 00:26:49.665 11:57:19 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:26:49.665 11:57:19 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:26:49.665 11:57:19 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:26:49.665 11:57:19 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:26:49.665 11:57:19 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:26:49.925 11:57:19 keyring_file -- keyring/file.sh@100 -- # (( 2 == 2 )) 00:26:49.925 11:57:19 keyring_file -- keyring/file.sh@101 -- # bperf_cmd keyring_file_remove_key key0 00:26:49.925 11:57:19 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_remove_key key0 00:26:50.184 11:57:20 keyring_file -- keyring/file.sh@102 -- # jq -r .removed 00:26:50.184 11:57:20 keyring_file -- keyring/file.sh@102 -- # get_key key0 00:26:50.184 11:57:20 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:26:50.184 11:57:20 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:26:50.184 11:57:20 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:26:50.444 11:57:20 keyring_file -- keyring/file.sh@102 -- # [[ true == \t\r\u\e ]] 00:26:50.444 11:57:20 keyring_file -- keyring/file.sh@103 -- # get_refcnt key0 00:26:50.444 11:57:20 keyring_file -- 
keyring/common.sh@12 -- # get_key key0 00:26:50.444 11:57:20 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:26:50.444 11:57:20 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:26:50.444 11:57:20 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:26:50.444 11:57:20 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:26:50.703 11:57:20 keyring_file -- keyring/file.sh@103 -- # (( 1 == 1 )) 00:26:50.703 11:57:20 keyring_file -- keyring/file.sh@104 -- # bperf_cmd bdev_nvme_detach_controller nvme0 00:26:50.703 11:57:20 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_detach_controller nvme0 00:26:50.962 11:57:20 keyring_file -- keyring/file.sh@105 -- # bperf_cmd keyring_get_keys 00:26:50.962 11:57:20 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:26:50.962 11:57:20 keyring_file -- keyring/file.sh@105 -- # jq length 00:26:51.221 11:57:21 keyring_file -- keyring/file.sh@105 -- # (( 0 == 0 )) 00:26:51.221 11:57:21 keyring_file -- keyring/file.sh@108 -- # bperf_cmd keyring_file_add_key key0 /tmp/tmp.YS1jM6zIjw 00:26:51.221 11:57:21 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key0 /tmp/tmp.YS1jM6zIjw 00:26:51.480 11:57:21 keyring_file -- keyring/file.sh@109 -- # bperf_cmd keyring_file_add_key key1 /tmp/tmp.gXfjc7vMgj 00:26:51.480 11:57:21 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key1 /tmp/tmp.gXfjc7vMgj 00:26:51.480 11:57:21 keyring_file -- keyring/file.sh@110 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:26:51.480 11:57:21 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:26:52.049 nvme0n1 00:26:52.049 11:57:21 keyring_file -- keyring/file.sh@113 -- # bperf_cmd save_config 00:26:52.049 11:57:21 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock save_config 00:26:52.310 11:57:22 keyring_file -- keyring/file.sh@113 -- # config='{ 00:26:52.310 "subsystems": [ 00:26:52.310 { 00:26:52.310 "subsystem": "keyring", 00:26:52.310 "config": [ 00:26:52.310 { 00:26:52.310 "method": "keyring_file_add_key", 00:26:52.310 "params": { 00:26:52.310 "name": "key0", 00:26:52.310 "path": "/tmp/tmp.YS1jM6zIjw" 00:26:52.310 } 00:26:52.310 }, 00:26:52.310 { 00:26:52.310 "method": "keyring_file_add_key", 00:26:52.310 "params": { 00:26:52.310 "name": "key1", 00:26:52.310 "path": "/tmp/tmp.gXfjc7vMgj" 00:26:52.310 } 00:26:52.310 } 00:26:52.310 ] 00:26:52.310 }, 00:26:52.310 { 00:26:52.310 "subsystem": "iobuf", 00:26:52.310 "config": [ 00:26:52.310 { 00:26:52.310 "method": "iobuf_set_options", 00:26:52.310 "params": { 00:26:52.310 "small_pool_count": 8192, 00:26:52.310 "large_pool_count": 1024, 00:26:52.310 "small_bufsize": 8192, 00:26:52.310 "large_bufsize": 135168, 00:26:52.310 "enable_numa": false 00:26:52.310 } 00:26:52.310 } 00:26:52.310 ] 00:26:52.310 }, 00:26:52.310 { 00:26:52.310 "subsystem": 
"sock", 00:26:52.310 "config": [ 00:26:52.310 { 00:26:52.310 "method": "sock_set_default_impl", 00:26:52.310 "params": { 00:26:52.310 "impl_name": "uring" 00:26:52.310 } 00:26:52.310 }, 00:26:52.310 { 00:26:52.310 "method": "sock_impl_set_options", 00:26:52.310 "params": { 00:26:52.310 "impl_name": "ssl", 00:26:52.310 "recv_buf_size": 4096, 00:26:52.310 "send_buf_size": 4096, 00:26:52.310 "enable_recv_pipe": true, 00:26:52.310 "enable_quickack": false, 00:26:52.310 "enable_placement_id": 0, 00:26:52.310 "enable_zerocopy_send_server": true, 00:26:52.310 "enable_zerocopy_send_client": false, 00:26:52.310 "zerocopy_threshold": 0, 00:26:52.310 "tls_version": 0, 00:26:52.310 "enable_ktls": false 00:26:52.310 } 00:26:52.310 }, 00:26:52.310 { 00:26:52.310 "method": "sock_impl_set_options", 00:26:52.310 "params": { 00:26:52.310 "impl_name": "posix", 00:26:52.310 "recv_buf_size": 2097152, 00:26:52.310 "send_buf_size": 2097152, 00:26:52.310 "enable_recv_pipe": true, 00:26:52.310 "enable_quickack": false, 00:26:52.310 "enable_placement_id": 0, 00:26:52.310 "enable_zerocopy_send_server": true, 00:26:52.310 "enable_zerocopy_send_client": false, 00:26:52.310 "zerocopy_threshold": 0, 00:26:52.310 "tls_version": 0, 00:26:52.310 "enable_ktls": false 00:26:52.310 } 00:26:52.310 }, 00:26:52.310 { 00:26:52.310 "method": "sock_impl_set_options", 00:26:52.310 "params": { 00:26:52.310 "impl_name": "uring", 00:26:52.310 "recv_buf_size": 2097152, 00:26:52.310 "send_buf_size": 2097152, 00:26:52.310 "enable_recv_pipe": true, 00:26:52.310 "enable_quickack": false, 00:26:52.310 "enable_placement_id": 0, 00:26:52.310 "enable_zerocopy_send_server": false, 00:26:52.310 "enable_zerocopy_send_client": false, 00:26:52.310 "zerocopy_threshold": 0, 00:26:52.310 "tls_version": 0, 00:26:52.310 "enable_ktls": false 00:26:52.310 } 00:26:52.310 } 00:26:52.310 ] 00:26:52.310 }, 00:26:52.310 { 00:26:52.310 "subsystem": "vmd", 00:26:52.310 "config": [] 00:26:52.310 }, 00:26:52.310 { 00:26:52.310 "subsystem": "accel", 00:26:52.310 "config": [ 00:26:52.310 { 00:26:52.310 "method": "accel_set_options", 00:26:52.310 "params": { 00:26:52.310 "small_cache_size": 128, 00:26:52.310 "large_cache_size": 16, 00:26:52.310 "task_count": 2048, 00:26:52.310 "sequence_count": 2048, 00:26:52.310 "buf_count": 2048 00:26:52.310 } 00:26:52.310 } 00:26:52.310 ] 00:26:52.310 }, 00:26:52.310 { 00:26:52.310 "subsystem": "bdev", 00:26:52.310 "config": [ 00:26:52.310 { 00:26:52.310 "method": "bdev_set_options", 00:26:52.310 "params": { 00:26:52.310 "bdev_io_pool_size": 65535, 00:26:52.310 "bdev_io_cache_size": 256, 00:26:52.310 "bdev_auto_examine": true, 00:26:52.310 "iobuf_small_cache_size": 128, 00:26:52.310 "iobuf_large_cache_size": 16 00:26:52.310 } 00:26:52.310 }, 00:26:52.310 { 00:26:52.310 "method": "bdev_raid_set_options", 00:26:52.310 "params": { 00:26:52.310 "process_window_size_kb": 1024, 00:26:52.310 "process_max_bandwidth_mb_sec": 0 00:26:52.310 } 00:26:52.310 }, 00:26:52.310 { 00:26:52.310 "method": "bdev_iscsi_set_options", 00:26:52.310 "params": { 00:26:52.310 "timeout_sec": 30 00:26:52.310 } 00:26:52.310 }, 00:26:52.310 { 00:26:52.310 "method": "bdev_nvme_set_options", 00:26:52.310 "params": { 00:26:52.310 "action_on_timeout": "none", 00:26:52.310 "timeout_us": 0, 00:26:52.310 "timeout_admin_us": 0, 00:26:52.310 "keep_alive_timeout_ms": 10000, 00:26:52.310 "arbitration_burst": 0, 00:26:52.310 "low_priority_weight": 0, 00:26:52.310 "medium_priority_weight": 0, 00:26:52.310 "high_priority_weight": 0, 00:26:52.311 "nvme_adminq_poll_period_us": 
10000, 00:26:52.311 "nvme_ioq_poll_period_us": 0, 00:26:52.311 "io_queue_requests": 512, 00:26:52.311 "delay_cmd_submit": true, 00:26:52.311 "transport_retry_count": 4, 00:26:52.311 "bdev_retry_count": 3, 00:26:52.311 "transport_ack_timeout": 0, 00:26:52.311 "ctrlr_loss_timeout_sec": 0, 00:26:52.311 "reconnect_delay_sec": 0, 00:26:52.311 "fast_io_fail_timeout_sec": 0, 00:26:52.311 "disable_auto_failback": false, 00:26:52.311 "generate_uuids": false, 00:26:52.311 "transport_tos": 0, 00:26:52.311 "nvme_error_stat": false, 00:26:52.311 "rdma_srq_size": 0, 00:26:52.311 "io_path_stat": false, 00:26:52.311 "allow_accel_sequence": false, 00:26:52.311 "rdma_max_cq_size": 0, 00:26:52.311 "rdma_cm_event_timeout_ms": 0, 00:26:52.311 "dhchap_digests": [ 00:26:52.311 "sha256", 00:26:52.311 "sha384", 00:26:52.311 "sha512" 00:26:52.311 ], 00:26:52.311 "dhchap_dhgroups": [ 00:26:52.311 "null", 00:26:52.311 "ffdhe2048", 00:26:52.311 "ffdhe3072", 00:26:52.311 "ffdhe4096", 00:26:52.311 "ffdhe6144", 00:26:52.311 "ffdhe8192" 00:26:52.311 ] 00:26:52.311 } 00:26:52.311 }, 00:26:52.311 { 00:26:52.311 "method": "bdev_nvme_attach_controller", 00:26:52.311 "params": { 00:26:52.311 "name": "nvme0", 00:26:52.311 "trtype": "TCP", 00:26:52.311 "adrfam": "IPv4", 00:26:52.311 "traddr": "127.0.0.1", 00:26:52.311 "trsvcid": "4420", 00:26:52.311 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:26:52.311 "prchk_reftag": false, 00:26:52.311 "prchk_guard": false, 00:26:52.311 "ctrlr_loss_timeout_sec": 0, 00:26:52.311 "reconnect_delay_sec": 0, 00:26:52.311 "fast_io_fail_timeout_sec": 0, 00:26:52.311 "psk": "key0", 00:26:52.311 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:26:52.311 "hdgst": false, 00:26:52.311 "ddgst": false, 00:26:52.311 "multipath": "multipath" 00:26:52.311 } 00:26:52.311 }, 00:26:52.311 { 00:26:52.311 "method": "bdev_nvme_set_hotplug", 00:26:52.311 "params": { 00:26:52.311 "period_us": 100000, 00:26:52.311 "enable": false 00:26:52.311 } 00:26:52.311 }, 00:26:52.311 { 00:26:52.311 "method": "bdev_wait_for_examine" 00:26:52.311 } 00:26:52.311 ] 00:26:52.311 }, 00:26:52.311 { 00:26:52.311 "subsystem": "nbd", 00:26:52.311 "config": [] 00:26:52.311 } 00:26:52.311 ] 00:26:52.311 }' 00:26:52.311 11:57:22 keyring_file -- keyring/file.sh@115 -- # killprocess 102241 00:26:52.311 11:57:22 keyring_file -- common/autotest_common.sh@954 -- # '[' -z 102241 ']' 00:26:52.311 11:57:22 keyring_file -- common/autotest_common.sh@958 -- # kill -0 102241 00:26:52.311 11:57:22 keyring_file -- common/autotest_common.sh@959 -- # uname 00:26:52.311 11:57:22 keyring_file -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:26:52.311 11:57:22 keyring_file -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 102241 00:26:52.311 killing process with pid 102241 00:26:52.311 Received shutdown signal, test time was about 1.000000 seconds 00:26:52.311 00:26:52.311 Latency(us) 00:26:52.311 [2024-11-28T11:57:22.437Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:26:52.311 [2024-11-28T11:57:22.437Z] =================================================================================================================== 00:26:52.311 [2024-11-28T11:57:22.437Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:26:52.311 11:57:22 keyring_file -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:26:52.311 11:57:22 keyring_file -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:26:52.311 11:57:22 keyring_file -- common/autotest_common.sh@972 -- # echo 'killing process with pid 102241' 00:26:52.311 
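At this point the keyring_file test has captured the live bdevperf configuration with save_config; the next step feeds the same JSON back into a fresh bdevperf through /dev/fd/63. A minimal sketch of that save-and-replay pattern, assuming the rpc.py path and RPC socket shown in this log, and using process substitution in place of the script's fd plumbing:

# capture the running configuration (keyring, sock, accel, bdev and nvme subsystems)
config=$(/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock save_config)
# start a new bdevperf with the same workload options and hand it the saved JSON as its config
/home/vagrant/spdk_repo/spdk/build/examples/bdevperf -q 128 -o 4k -w randrw -M 50 -t 1 -m 2 \
    -r /var/tmp/bperf.sock -z -c <(echo "$config")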
11:57:22 keyring_file -- common/autotest_common.sh@973 -- # kill 102241 00:26:52.311 11:57:22 keyring_file -- common/autotest_common.sh@978 -- # wait 102241 00:26:52.311 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:26:52.311 11:57:22 keyring_file -- keyring/file.sh@118 -- # bperfpid=102486 00:26:52.311 11:57:22 keyring_file -- keyring/file.sh@120 -- # waitforlisten 102486 /var/tmp/bperf.sock 00:26:52.311 11:57:22 keyring_file -- keyring/file.sh@116 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -q 128 -o 4k -w randrw -M 50 -t 1 -m 2 -r /var/tmp/bperf.sock -z -c /dev/fd/63 00:26:52.311 11:57:22 keyring_file -- common/autotest_common.sh@835 -- # '[' -z 102486 ']' 00:26:52.311 11:57:22 keyring_file -- keyring/file.sh@116 -- # echo '{ 00:26:52.311 "subsystems": [ 00:26:52.311 { 00:26:52.311 "subsystem": "keyring", 00:26:52.311 "config": [ 00:26:52.311 { 00:26:52.311 "method": "keyring_file_add_key", 00:26:52.311 "params": { 00:26:52.311 "name": "key0", 00:26:52.311 "path": "/tmp/tmp.YS1jM6zIjw" 00:26:52.311 } 00:26:52.311 }, 00:26:52.311 { 00:26:52.311 "method": "keyring_file_add_key", 00:26:52.311 "params": { 00:26:52.311 "name": "key1", 00:26:52.311 "path": "/tmp/tmp.gXfjc7vMgj" 00:26:52.311 } 00:26:52.311 } 00:26:52.311 ] 00:26:52.311 }, 00:26:52.311 { 00:26:52.311 "subsystem": "iobuf", 00:26:52.311 "config": [ 00:26:52.311 { 00:26:52.311 "method": "iobuf_set_options", 00:26:52.311 "params": { 00:26:52.311 "small_pool_count": 8192, 00:26:52.311 "large_pool_count": 1024, 00:26:52.311 "small_bufsize": 8192, 00:26:52.311 "large_bufsize": 135168, 00:26:52.311 "enable_numa": false 00:26:52.311 } 00:26:52.311 } 00:26:52.311 ] 00:26:52.311 }, 00:26:52.311 { 00:26:52.311 "subsystem": "sock", 00:26:52.311 "config": [ 00:26:52.311 { 00:26:52.311 "method": "sock_set_default_impl", 00:26:52.311 "params": { 00:26:52.311 "impl_name": "uring" 00:26:52.311 } 00:26:52.311 }, 00:26:52.311 { 00:26:52.311 "method": "sock_impl_set_options", 00:26:52.311 "params": { 00:26:52.311 "impl_name": "ssl", 00:26:52.311 "recv_buf_size": 4096, 00:26:52.311 "send_buf_size": 4096, 00:26:52.311 "enable_recv_pipe": true, 00:26:52.311 "enable_quickack": false, 00:26:52.311 "enable_placement_id": 0, 00:26:52.311 "enable_zerocopy_send_server": true, 00:26:52.311 "enable_zerocopy_send_client": false, 00:26:52.311 "zerocopy_threshold": 0, 00:26:52.311 "tls_version": 0, 00:26:52.311 "enable_ktls": false 00:26:52.311 } 00:26:52.311 }, 00:26:52.311 { 00:26:52.311 "method": "sock_impl_set_options", 00:26:52.311 "params": { 00:26:52.311 "impl_name": "posix", 00:26:52.311 "recv_buf_size": 2097152, 00:26:52.311 "send_buf_size": 2097152, 00:26:52.311 "enable_recv_pipe": true, 00:26:52.311 "enable_quickack": false, 00:26:52.311 "enable_placement_id": 0, 00:26:52.311 "enable_zerocopy_send_server": true, 00:26:52.311 "enable_zerocopy_send_client": false, 00:26:52.311 "zerocopy_threshold": 0, 00:26:52.311 "tls_version": 0, 00:26:52.311 "enable_ktls": false 00:26:52.311 } 00:26:52.311 }, 00:26:52.311 { 00:26:52.311 "method": "sock_impl_set_options", 00:26:52.311 "params": { 00:26:52.311 "impl_name": "uring", 00:26:52.311 "recv_buf_size": 2097152, 00:26:52.311 "send_buf_size": 2097152, 00:26:52.311 "enable_recv_pipe": true, 00:26:52.311 "enable_quickack": false, 00:26:52.311 "enable_placement_id": 0, 00:26:52.311 "enable_zerocopy_send_server": false, 00:26:52.311 "enable_zerocopy_send_client": false, 00:26:52.311 "zerocopy_threshold": 0, 00:26:52.311 "tls_version": 0, 00:26:52.311 
"enable_ktls": false 00:26:52.311 } 00:26:52.311 } 00:26:52.311 ] 00:26:52.311 }, 00:26:52.311 { 00:26:52.311 "subsystem": "vmd", 00:26:52.311 "config": [] 00:26:52.311 }, 00:26:52.311 { 00:26:52.311 "subsystem": "accel", 00:26:52.311 "config": [ 00:26:52.311 { 00:26:52.311 "method": "accel_set_options", 00:26:52.311 "params": { 00:26:52.311 "small_cache_size": 128, 00:26:52.311 "large_cache_size": 16, 00:26:52.311 "task_count": 2048, 00:26:52.311 "sequence_count": 2048, 00:26:52.311 "buf_count": 2048 00:26:52.311 } 00:26:52.311 } 00:26:52.311 ] 00:26:52.311 }, 00:26:52.311 { 00:26:52.311 "subsystem": "bdev", 00:26:52.311 "config": [ 00:26:52.311 { 00:26:52.311 "method": "bdev_set_options", 00:26:52.311 "params": { 00:26:52.311 "bdev_io_pool_size": 65535, 00:26:52.311 "bdev_io_cache_size": 256, 00:26:52.311 "bdev_auto_examine": true, 00:26:52.311 "iobuf_small_cache_size": 128, 00:26:52.311 "iobuf_large_cache_size": 16 00:26:52.311 } 00:26:52.311 }, 00:26:52.311 { 00:26:52.311 "method": "bdev_raid_set_options", 00:26:52.311 "params": { 00:26:52.312 "process_window_size_kb": 1024, 00:26:52.312 "process_max_bandwidth_mb_sec": 0 00:26:52.312 } 00:26:52.312 }, 00:26:52.312 { 00:26:52.312 "method": "bdev_iscsi_set_options", 00:26:52.312 "params": { 00:26:52.312 "timeout_sec": 30 00:26:52.312 } 00:26:52.312 }, 00:26:52.312 { 00:26:52.312 "method": "bdev_nvme_set_options", 00:26:52.312 "params": { 00:26:52.312 "action_on_timeout": "none", 00:26:52.312 "timeout_us": 0, 00:26:52.312 "timeout_admin_us": 0, 00:26:52.312 "keep_alive_timeout_ms": 10000, 00:26:52.312 "arbitration_burst": 0, 00:26:52.312 "low_priority_weight": 0, 00:26:52.312 "medium_priority_weight": 0, 00:26:52.312 "high_priority_weight": 0, 00:26:52.312 "nvme_adminq_poll_period_us": 10000, 00:26:52.312 "nvme_ioq_poll_period_us": 0, 00:26:52.312 "io_queue_requests": 512, 00:26:52.312 "delay_cmd_submit": true, 00:26:52.312 "transport_retry_count": 4, 00:26:52.312 "bdev_retry_count": 3, 00:26:52.312 "transport_ack_timeout": 0, 00:26:52.312 "ctrlr_loss_timeout_sec": 0, 00:26:52.312 "reconnect_delay_sec": 0, 00:26:52.312 "fast_io_fail_timeout_sec": 0, 00:26:52.312 "disable_auto_failback": false, 00:26:52.312 "generate_uuids": false, 00:26:52.312 "transport_tos": 0, 00:26:52.312 "nvme_error_stat": false, 00:26:52.312 "rdma_srq_size": 0, 00:26:52.312 "io_path_stat": false, 00:26:52.312 "allow_accel_sequence": false, 00:26:52.312 "rdma_max_cq_size": 0, 00:26:52.312 "rdma_cm_event_timeout_ms": 0, 00:26:52.312 "dhchap_digests": [ 00:26:52.312 "sha256", 00:26:52.312 "sha384", 00:26:52.312 "sha512" 00:26:52.312 ], 00:26:52.312 "dhchap_dhgroups": [ 00:26:52.312 "null", 00:26:52.312 "ffdhe2048", 00:26:52.312 "ffdhe3072", 00:26:52.312 "ffdhe4096", 00:26:52.312 "ffdhe6144", 00:26:52.312 "ffdhe8192" 00:26:52.312 ] 00:26:52.312 } 00:26:52.312 }, 00:26:52.312 { 00:26:52.312 "method": "bdev_nvme_attach_controller", 00:26:52.312 "params": { 00:26:52.312 "name": "nvme0", 00:26:52.312 "trtype": "TCP", 00:26:52.312 "adrfam": "IPv4", 00:26:52.312 "traddr": "127.0.0.1", 00:26:52.312 "trsvcid": "4420", 00:26:52.312 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:26:52.312 "prchk_reftag": false, 00:26:52.312 "prchk_guard": false, 00:26:52.312 "ctrlr_loss_timeout_sec": 0, 00:26:52.312 "reconnect_delay_sec": 0, 00:26:52.312 "fast_io_fail_timeout_sec": 0, 00:26:52.312 "psk": "key0", 00:26:52.312 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:26:52.312 "hdgst": false, 00:26:52.312 "ddgst": false, 00:26:52.312 "multipath": "multipath" 00:26:52.312 } 00:26:52.312 }, 
00:26:52.312 { 00:26:52.312 "method": "bdev_nvme_set_hotplug", 00:26:52.312 "params": { 00:26:52.312 "period_us": 100000, 00:26:52.312 "enable": false 00:26:52.312 } 00:26:52.312 }, 00:26:52.312 { 00:26:52.312 "method": "bdev_wait_for_examine" 00:26:52.312 } 00:26:52.312 ] 00:26:52.312 }, 00:26:52.312 { 00:26:52.312 "subsystem": "nbd", 00:26:52.312 "config": [] 00:26:52.312 } 00:26:52.312 ] 00:26:52.312 }' 00:26:52.312 11:57:22 keyring_file -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bperf.sock 00:26:52.312 11:57:22 keyring_file -- common/autotest_common.sh@840 -- # local max_retries=100 00:26:52.312 11:57:22 keyring_file -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:26:52.312 11:57:22 keyring_file -- common/autotest_common.sh@844 -- # xtrace_disable 00:26:52.312 11:57:22 keyring_file -- common/autotest_common.sh@10 -- # set +x 00:26:52.571 [2024-11-28 11:57:22.444102] Starting SPDK v25.01-pre git sha1 35cd3e84d / DPDK 24.11.0-rc4 initialization... 00:26:52.571 [2024-11-28 11:57:22.444324] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid102486 ] 00:26:52.571 [2024-11-28 11:57:22.561351] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.11.0-rc4 is used. There is no support for it in SPDK. Enabled only for validation. 00:26:52.571 [2024-11-28 11:57:22.584018] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:26:52.571 [2024-11-28 11:57:22.616568] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:26:52.831 [2024-11-28 11:57:22.749389] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:26:52.831 [2024-11-28 11:57:22.802599] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:26:53.400 11:57:23 keyring_file -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:26:53.400 11:57:23 keyring_file -- common/autotest_common.sh@868 -- # return 0 00:26:53.400 11:57:23 keyring_file -- keyring/file.sh@121 -- # bperf_cmd keyring_get_keys 00:26:53.400 11:57:23 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:26:53.400 11:57:23 keyring_file -- keyring/file.sh@121 -- # jq length 00:26:53.660 11:57:23 keyring_file -- keyring/file.sh@121 -- # (( 2 == 2 )) 00:26:53.660 11:57:23 keyring_file -- keyring/file.sh@122 -- # get_refcnt key0 00:26:53.660 11:57:23 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:26:53.660 11:57:23 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:26:53.660 11:57:23 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:26:53.660 11:57:23 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:26:53.660 11:57:23 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:26:53.919 11:57:23 keyring_file -- keyring/file.sh@122 -- # (( 2 == 2 )) 00:26:53.919 11:57:23 keyring_file -- keyring/file.sh@123 -- # get_refcnt key1 00:26:53.919 11:57:23 keyring_file -- keyring/common.sh@12 -- # get_key key1 00:26:53.919 11:57:23 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:26:53.919 11:57:23 keyring_file -- keyring/common.sh@10 -- # 
bperf_cmd keyring_get_keys 00:26:53.919 11:57:23 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:26:53.919 11:57:23 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key1")' 00:26:54.234 11:57:24 keyring_file -- keyring/file.sh@123 -- # (( 1 == 1 )) 00:26:54.234 11:57:24 keyring_file -- keyring/file.sh@124 -- # bperf_cmd bdev_nvme_get_controllers 00:26:54.234 11:57:24 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_get_controllers 00:26:54.234 11:57:24 keyring_file -- keyring/file.sh@124 -- # jq -r '.[].name' 00:26:54.504 11:57:24 keyring_file -- keyring/file.sh@124 -- # [[ nvme0 == nvme0 ]] 00:26:54.504 11:57:24 keyring_file -- keyring/file.sh@1 -- # cleanup 00:26:54.505 11:57:24 keyring_file -- keyring/file.sh@19 -- # rm -f /tmp/tmp.YS1jM6zIjw /tmp/tmp.gXfjc7vMgj 00:26:54.505 11:57:24 keyring_file -- keyring/file.sh@20 -- # killprocess 102486 00:26:54.505 11:57:24 keyring_file -- common/autotest_common.sh@954 -- # '[' -z 102486 ']' 00:26:54.505 11:57:24 keyring_file -- common/autotest_common.sh@958 -- # kill -0 102486 00:26:54.505 11:57:24 keyring_file -- common/autotest_common.sh@959 -- # uname 00:26:54.505 11:57:24 keyring_file -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:26:54.505 11:57:24 keyring_file -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 102486 00:26:54.505 11:57:24 keyring_file -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:26:54.505 killing process with pid 102486 00:26:54.505 Received shutdown signal, test time was about 1.000000 seconds 00:26:54.505 00:26:54.505 Latency(us) 00:26:54.505 [2024-11-28T11:57:24.631Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:26:54.505 [2024-11-28T11:57:24.631Z] =================================================================================================================== 00:26:54.505 [2024-11-28T11:57:24.631Z] Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:26:54.505 11:57:24 keyring_file -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:26:54.505 11:57:24 keyring_file -- common/autotest_common.sh@972 -- # echo 'killing process with pid 102486' 00:26:54.505 11:57:24 keyring_file -- common/autotest_common.sh@973 -- # kill 102486 00:26:54.505 11:57:24 keyring_file -- common/autotest_common.sh@978 -- # wait 102486 00:26:54.505 11:57:24 keyring_file -- keyring/file.sh@21 -- # killprocess 102230 00:26:54.505 11:57:24 keyring_file -- common/autotest_common.sh@954 -- # '[' -z 102230 ']' 00:26:54.505 11:57:24 keyring_file -- common/autotest_common.sh@958 -- # kill -0 102230 00:26:54.505 11:57:24 keyring_file -- common/autotest_common.sh@959 -- # uname 00:26:54.505 11:57:24 keyring_file -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:26:54.505 11:57:24 keyring_file -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 102230 00:26:54.764 11:57:24 keyring_file -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:26:54.764 11:57:24 keyring_file -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:26:54.764 killing process with pid 102230 00:26:54.764 11:57:24 keyring_file -- common/autotest_common.sh@972 -- # echo 'killing process with pid 102230' 00:26:54.764 11:57:24 keyring_file -- common/autotest_common.sh@973 -- # kill 102230 00:26:54.764 11:57:24 keyring_file -- common/autotest_common.sh@978 -- # wait 102230 
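The keyring_file run that just finished reduces to a short RPC sequence: register file-backed PSKs, attach an NVMe/TCP controller that references one by name, and watch the key's refcnt move with attach and detach. A minimal sketch using the rpc.py path and socket from this log; the /tmp key path is a placeholder for the temp file the test generated:

rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
# register a file-backed TLS PSK under the name key0
$rpc -s /var/tmp/bperf.sock keyring_file_add_key key0 /tmp/psk0.txt
# attach an NVMe/TCP controller that references the key by name
$rpc -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 \
    -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0
# the attach holds a reference: refcnt reads 2 in the checks above and falls back to 1 after detach
$rpc -s /var/tmp/bperf.sock keyring_get_keys | jq -r '.[] | select(.name == "key0") | .refcnt'
$rpc -s /var/tmp/bperf.sock bdev_nvme_detach_controller nvme0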
00:26:55.023 ************************************ 00:26:55.023 END TEST keyring_file 00:26:55.023 ************************************ 00:26:55.023 00:26:55.023 real 0m14.992s 00:26:55.023 user 0m37.211s 00:26:55.023 sys 0m3.026s 00:26:55.023 11:57:25 keyring_file -- common/autotest_common.sh@1130 -- # xtrace_disable 00:26:55.023 11:57:25 keyring_file -- common/autotest_common.sh@10 -- # set +x 00:26:55.281 11:57:25 -- spdk/autotest.sh@293 -- # [[ y == y ]] 00:26:55.281 11:57:25 -- spdk/autotest.sh@294 -- # run_test keyring_linux /home/vagrant/spdk_repo/spdk/scripts/keyctl-session-wrapper /home/vagrant/spdk_repo/spdk/test/keyring/linux.sh 00:26:55.281 11:57:25 -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:26:55.281 11:57:25 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:26:55.281 11:57:25 -- common/autotest_common.sh@10 -- # set +x 00:26:55.281 ************************************ 00:26:55.281 START TEST keyring_linux 00:26:55.281 ************************************ 00:26:55.281 11:57:25 keyring_linux -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/scripts/keyctl-session-wrapper /home/vagrant/spdk_repo/spdk/test/keyring/linux.sh 00:26:55.281 Joined session keyring: 826455315 00:26:55.281 * Looking for test storage... 00:26:55.281 * Found test storage at /home/vagrant/spdk_repo/spdk/test/keyring 00:26:55.281 11:57:25 keyring_linux -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:26:55.281 11:57:25 keyring_linux -- common/autotest_common.sh@1693 -- # lcov --version 00:26:55.281 11:57:25 keyring_linux -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:26:55.281 11:57:25 keyring_linux -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:26:55.281 11:57:25 keyring_linux -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:26:55.281 11:57:25 keyring_linux -- scripts/common.sh@333 -- # local ver1 ver1_l 00:26:55.281 11:57:25 keyring_linux -- scripts/common.sh@334 -- # local ver2 ver2_l 00:26:55.281 11:57:25 keyring_linux -- scripts/common.sh@336 -- # IFS=.-: 00:26:55.281 11:57:25 keyring_linux -- scripts/common.sh@336 -- # read -ra ver1 00:26:55.281 11:57:25 keyring_linux -- scripts/common.sh@337 -- # IFS=.-: 00:26:55.281 11:57:25 keyring_linux -- scripts/common.sh@337 -- # read -ra ver2 00:26:55.281 11:57:25 keyring_linux -- scripts/common.sh@338 -- # local 'op=<' 00:26:55.281 11:57:25 keyring_linux -- scripts/common.sh@340 -- # ver1_l=2 00:26:55.281 11:57:25 keyring_linux -- scripts/common.sh@341 -- # ver2_l=1 00:26:55.281 11:57:25 keyring_linux -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:26:55.281 11:57:25 keyring_linux -- scripts/common.sh@344 -- # case "$op" in 00:26:55.281 11:57:25 keyring_linux -- scripts/common.sh@345 -- # : 1 00:26:55.281 11:57:25 keyring_linux -- scripts/common.sh@364 -- # (( v = 0 )) 00:26:55.281 11:57:25 keyring_linux -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:26:55.281 11:57:25 keyring_linux -- scripts/common.sh@365 -- # decimal 1 00:26:55.281 11:57:25 keyring_linux -- scripts/common.sh@353 -- # local d=1 00:26:55.281 11:57:25 keyring_linux -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:26:55.281 11:57:25 keyring_linux -- scripts/common.sh@355 -- # echo 1 00:26:55.281 11:57:25 keyring_linux -- scripts/common.sh@365 -- # ver1[v]=1 00:26:55.281 11:57:25 keyring_linux -- scripts/common.sh@366 -- # decimal 2 00:26:55.281 11:57:25 keyring_linux -- scripts/common.sh@353 -- # local d=2 00:26:55.281 11:57:25 keyring_linux -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:26:55.281 11:57:25 keyring_linux -- scripts/common.sh@355 -- # echo 2 00:26:55.281 11:57:25 keyring_linux -- scripts/common.sh@366 -- # ver2[v]=2 00:26:55.281 11:57:25 keyring_linux -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:26:55.281 11:57:25 keyring_linux -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:26:55.281 11:57:25 keyring_linux -- scripts/common.sh@368 -- # return 0 00:26:55.281 11:57:25 keyring_linux -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:26:55.281 11:57:25 keyring_linux -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:26:55.281 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:26:55.281 --rc genhtml_branch_coverage=1 00:26:55.281 --rc genhtml_function_coverage=1 00:26:55.281 --rc genhtml_legend=1 00:26:55.281 --rc geninfo_all_blocks=1 00:26:55.281 --rc geninfo_unexecuted_blocks=1 00:26:55.281 00:26:55.281 ' 00:26:55.281 11:57:25 keyring_linux -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:26:55.281 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:26:55.281 --rc genhtml_branch_coverage=1 00:26:55.281 --rc genhtml_function_coverage=1 00:26:55.281 --rc genhtml_legend=1 00:26:55.281 --rc geninfo_all_blocks=1 00:26:55.281 --rc geninfo_unexecuted_blocks=1 00:26:55.281 00:26:55.281 ' 00:26:55.281 11:57:25 keyring_linux -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:26:55.281 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:26:55.281 --rc genhtml_branch_coverage=1 00:26:55.281 --rc genhtml_function_coverage=1 00:26:55.281 --rc genhtml_legend=1 00:26:55.281 --rc geninfo_all_blocks=1 00:26:55.281 --rc geninfo_unexecuted_blocks=1 00:26:55.281 00:26:55.281 ' 00:26:55.281 11:57:25 keyring_linux -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:26:55.281 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:26:55.281 --rc genhtml_branch_coverage=1 00:26:55.281 --rc genhtml_function_coverage=1 00:26:55.281 --rc genhtml_legend=1 00:26:55.281 --rc geninfo_all_blocks=1 00:26:55.281 --rc geninfo_unexecuted_blocks=1 00:26:55.281 00:26:55.281 ' 00:26:55.281 11:57:25 keyring_linux -- keyring/linux.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/keyring/common.sh 00:26:55.281 11:57:25 keyring_linux -- keyring/common.sh@4 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:26:55.281 11:57:25 keyring_linux -- nvmf/common.sh@7 -- # uname -s 00:26:55.281 11:57:25 keyring_linux -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:26:55.281 11:57:25 keyring_linux -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:26:55.281 11:57:25 keyring_linux -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:26:55.281 11:57:25 keyring_linux -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:26:55.281 11:57:25 keyring_linux -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:26:55.281 11:57:25 
keyring_linux -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:26:55.281 11:57:25 keyring_linux -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:26:55.281 11:57:25 keyring_linux -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:26:55.281 11:57:25 keyring_linux -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:26:55.281 11:57:25 keyring_linux -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:26:55.281 11:57:25 keyring_linux -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:f820f793-c892-4aa4-a8a4-5ed3fda41d6c 00:26:55.281 11:57:25 keyring_linux -- nvmf/common.sh@18 -- # NVME_HOSTID=f820f793-c892-4aa4-a8a4-5ed3fda41d6c 00:26:55.281 11:57:25 keyring_linux -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:26:55.281 11:57:25 keyring_linux -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:26:55.281 11:57:25 keyring_linux -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:26:55.281 11:57:25 keyring_linux -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:26:55.281 11:57:25 keyring_linux -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:26:55.281 11:57:25 keyring_linux -- scripts/common.sh@15 -- # shopt -s extglob 00:26:55.281 11:57:25 keyring_linux -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:26:55.281 11:57:25 keyring_linux -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:26:55.281 11:57:25 keyring_linux -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:26:55.281 11:57:25 keyring_linux -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:55.281 11:57:25 keyring_linux -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:55.282 11:57:25 keyring_linux -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:55.282 11:57:25 keyring_linux -- paths/export.sh@5 -- # export PATH 00:26:55.282 11:57:25 keyring_linux -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:55.282 11:57:25 keyring_linux -- nvmf/common.sh@51 -- # : 0 
00:26:55.282 11:57:25 keyring_linux -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:26:55.282 11:57:25 keyring_linux -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:26:55.282 11:57:25 keyring_linux -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:26:55.282 11:57:25 keyring_linux -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:26:55.282 11:57:25 keyring_linux -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:26:55.282 11:57:25 keyring_linux -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:26:55.282 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:26:55.282 11:57:25 keyring_linux -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:26:55.282 11:57:25 keyring_linux -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:26:55.282 11:57:25 keyring_linux -- nvmf/common.sh@55 -- # have_pci_nics=0 00:26:55.282 11:57:25 keyring_linux -- keyring/common.sh@6 -- # bperfsock=/var/tmp/bperf.sock 00:26:55.282 11:57:25 keyring_linux -- keyring/linux.sh@11 -- # subnqn=nqn.2016-06.io.spdk:cnode0 00:26:55.282 11:57:25 keyring_linux -- keyring/linux.sh@12 -- # hostnqn=nqn.2016-06.io.spdk:host0 00:26:55.282 11:57:25 keyring_linux -- keyring/linux.sh@13 -- # key0=00112233445566778899aabbccddeeff 00:26:55.282 11:57:25 keyring_linux -- keyring/linux.sh@14 -- # key1=112233445566778899aabbccddeeff00 00:26:55.282 11:57:25 keyring_linux -- keyring/linux.sh@45 -- # trap cleanup EXIT 00:26:55.282 11:57:25 keyring_linux -- keyring/linux.sh@47 -- # prep_key key0 00112233445566778899aabbccddeeff 0 /tmp/:spdk-test:key0 00:26:55.282 11:57:25 keyring_linux -- keyring/common.sh@15 -- # local name key digest path 00:26:55.282 11:57:25 keyring_linux -- keyring/common.sh@17 -- # name=key0 00:26:55.282 11:57:25 keyring_linux -- keyring/common.sh@17 -- # key=00112233445566778899aabbccddeeff 00:26:55.282 11:57:25 keyring_linux -- keyring/common.sh@17 -- # digest=0 00:26:55.282 11:57:25 keyring_linux -- keyring/common.sh@18 -- # path=/tmp/:spdk-test:key0 00:26:55.282 11:57:25 keyring_linux -- keyring/common.sh@20 -- # format_interchange_psk 00112233445566778899aabbccddeeff 0 00:26:55.282 11:57:25 keyring_linux -- nvmf/common.sh@743 -- # format_key NVMeTLSkey-1 00112233445566778899aabbccddeeff 0 00:26:55.282 11:57:25 keyring_linux -- nvmf/common.sh@730 -- # local prefix key digest 00:26:55.282 11:57:25 keyring_linux -- nvmf/common.sh@732 -- # prefix=NVMeTLSkey-1 00:26:55.282 11:57:25 keyring_linux -- nvmf/common.sh@732 -- # key=00112233445566778899aabbccddeeff 00:26:55.282 11:57:25 keyring_linux -- nvmf/common.sh@732 -- # digest=0 00:26:55.282 11:57:25 keyring_linux -- nvmf/common.sh@733 -- # python - 00:26:55.541 11:57:25 keyring_linux -- keyring/common.sh@21 -- # chmod 0600 /tmp/:spdk-test:key0 00:26:55.541 /tmp/:spdk-test:key0 00:26:55.541 11:57:25 keyring_linux -- keyring/common.sh@23 -- # echo /tmp/:spdk-test:key0 00:26:55.541 11:57:25 keyring_linux -- keyring/linux.sh@48 -- # prep_key key1 112233445566778899aabbccddeeff00 0 /tmp/:spdk-test:key1 00:26:55.541 11:57:25 keyring_linux -- keyring/common.sh@15 -- # local name key digest path 00:26:55.541 11:57:25 keyring_linux -- keyring/common.sh@17 -- # name=key1 00:26:55.541 11:57:25 keyring_linux -- keyring/common.sh@17 -- # key=112233445566778899aabbccddeeff00 00:26:55.541 11:57:25 keyring_linux -- keyring/common.sh@17 -- # digest=0 00:26:55.541 11:57:25 keyring_linux -- keyring/common.sh@18 -- # path=/tmp/:spdk-test:key1 00:26:55.541 11:57:25 keyring_linux -- keyring/common.sh@20 -- # format_interchange_psk 
112233445566778899aabbccddeeff00 0 00:26:55.541 11:57:25 keyring_linux -- nvmf/common.sh@743 -- # format_key NVMeTLSkey-1 112233445566778899aabbccddeeff00 0 00:26:55.541 11:57:25 keyring_linux -- nvmf/common.sh@730 -- # local prefix key digest 00:26:55.541 11:57:25 keyring_linux -- nvmf/common.sh@732 -- # prefix=NVMeTLSkey-1 00:26:55.541 11:57:25 keyring_linux -- nvmf/common.sh@732 -- # key=112233445566778899aabbccddeeff00 00:26:55.541 11:57:25 keyring_linux -- nvmf/common.sh@732 -- # digest=0 00:26:55.541 11:57:25 keyring_linux -- nvmf/common.sh@733 -- # python - 00:26:55.541 11:57:25 keyring_linux -- keyring/common.sh@21 -- # chmod 0600 /tmp/:spdk-test:key1 00:26:55.541 /tmp/:spdk-test:key1 00:26:55.541 11:57:25 keyring_linux -- keyring/common.sh@23 -- # echo /tmp/:spdk-test:key1 00:26:55.541 11:57:25 keyring_linux -- keyring/linux.sh@51 -- # tgtpid=102613 00:26:55.541 11:57:25 keyring_linux -- keyring/linux.sh@50 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:26:55.541 11:57:25 keyring_linux -- keyring/linux.sh@53 -- # waitforlisten 102613 00:26:55.541 11:57:25 keyring_linux -- common/autotest_common.sh@835 -- # '[' -z 102613 ']' 00:26:55.541 11:57:25 keyring_linux -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:26:55.541 11:57:25 keyring_linux -- common/autotest_common.sh@840 -- # local max_retries=100 00:26:55.541 11:57:25 keyring_linux -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:26:55.541 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:26:55.541 11:57:25 keyring_linux -- common/autotest_common.sh@844 -- # xtrace_disable 00:26:55.541 11:57:25 keyring_linux -- common/autotest_common.sh@10 -- # set +x 00:26:55.541 [2024-11-28 11:57:25.554504] Starting SPDK v25.01-pre git sha1 35cd3e84d / DPDK 24.11.0-rc4 initialization... 00:26:55.541 [2024-11-28 11:57:25.554784] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid102613 ] 00:26:55.800 [2024-11-28 11:57:25.681218] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.11.0-rc4 is used. There is no support for it in SPDK. Enabled only for validation. 
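The prep_key calls traced above amount to rendering the key material in the NVMe TLS interchange format and writing it to a file only the owner can read. A minimal sketch using the sample interchange string from this run (a test value from this log, not a real secret):

# write the interchange-format PSK (NVMeTLSkey-1:00:<base64 payload>:) to a 0600 file
psk='NVMeTLSkey-1:00:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ:'
path=/tmp/:spdk-test:key0
printf '%s\n' "$psk" > "$path"
chmod 0600 "$path"
echo "$path"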
00:26:55.800 [2024-11-28 11:57:25.706657] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:26:55.800 [2024-11-28 11:57:25.747825] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:26:55.800 [2024-11-28 11:57:25.832520] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:26:56.060 11:57:26 keyring_linux -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:26:56.060 11:57:26 keyring_linux -- common/autotest_common.sh@868 -- # return 0 00:26:56.060 11:57:26 keyring_linux -- keyring/linux.sh@54 -- # rpc_cmd 00:26:56.060 11:57:26 keyring_linux -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:56.060 11:57:26 keyring_linux -- common/autotest_common.sh@10 -- # set +x 00:26:56.060 [2024-11-28 11:57:26.066541] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:26:56.060 null0 00:26:56.060 [2024-11-28 11:57:26.098532] tcp.c:1031:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:26:56.060 [2024-11-28 11:57:26.098732] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4420 *** 00:26:56.060 11:57:26 keyring_linux -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:56.060 11:57:26 keyring_linux -- keyring/linux.sh@66 -- # keyctl add user :spdk-test:key0 NVMeTLSkey-1:00:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ: @s 00:26:56.060 357381777 00:26:56.060 11:57:26 keyring_linux -- keyring/linux.sh@67 -- # keyctl add user :spdk-test:key1 NVMeTLSkey-1:00:MTEyMjMzNDQ1NTY2Nzc4ODk5YWFiYmNjZGRlZWZmMDA6CPcs: @s 00:26:56.060 439060034 00:26:56.060 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:26:56.060 11:57:26 keyring_linux -- keyring/linux.sh@70 -- # bperfpid=102618 00:26:56.060 11:57:26 keyring_linux -- keyring/linux.sh@68 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -q 128 -o 4k -w randread -t 1 -m 2 -r /var/tmp/bperf.sock -z --wait-for-rpc 00:26:56.060 11:57:26 keyring_linux -- keyring/linux.sh@72 -- # waitforlisten 102618 /var/tmp/bperf.sock 00:26:56.060 11:57:26 keyring_linux -- common/autotest_common.sh@835 -- # '[' -z 102618 ']' 00:26:56.060 11:57:26 keyring_linux -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bperf.sock 00:26:56.060 11:57:26 keyring_linux -- common/autotest_common.sh@840 -- # local max_retries=100 00:26:56.060 11:57:26 keyring_linux -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:26:56.060 11:57:26 keyring_linux -- common/autotest_common.sh@844 -- # xtrace_disable 00:26:56.060 11:57:26 keyring_linux -- common/autotest_common.sh@10 -- # set +x 00:26:56.319 [2024-11-28 11:57:26.184963] Starting SPDK v25.01-pre git sha1 35cd3e84d / DPDK 24.11.0-rc4 initialization... 00:26:56.319 [2024-11-28 11:57:26.185056] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid102618 ] 00:26:56.319 [2024-11-28 11:57:26.310268] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.11.0-rc4 is used. There is no support for it in SPDK. Enabled only for validation. 
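Because this bdevperf instance was launched with --wait-for-rpc, the test enables the kernel-keyring backend before finishing framework initialization, and only then attaches the controller using a ':spdk-test:' key name instead of a file path. A minimal sketch of that ordering, with the rpc.py path and socket from this log:

rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
# enable the Linux keyring plugin while the app is still waiting for RPCs
$rpc -s /var/tmp/bperf.sock keyring_linux_set_options --enable
# complete subsystem initialization, then attach using the session-keyring key
$rpc -s /var/tmp/bperf.sock framework_start_init
$rpc -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 \
    -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk :spdk-test:key0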
00:26:56.319 [2024-11-28 11:57:26.339435] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:26:56.319 [2024-11-28 11:57:26.380764] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:26:56.319 11:57:26 keyring_linux -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:26:56.319 11:57:26 keyring_linux -- common/autotest_common.sh@868 -- # return 0 00:26:56.320 11:57:26 keyring_linux -- keyring/linux.sh@73 -- # bperf_cmd keyring_linux_set_options --enable 00:26:56.320 11:57:26 keyring_linux -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_linux_set_options --enable 00:26:56.578 11:57:26 keyring_linux -- keyring/linux.sh@74 -- # bperf_cmd framework_start_init 00:26:56.578 11:57:26 keyring_linux -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock framework_start_init 00:26:56.838 [2024-11-28 11:57:26.924543] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:26:57.097 11:57:26 keyring_linux -- keyring/linux.sh@75 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk :spdk-test:key0 00:26:57.097 11:57:26 keyring_linux -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk :spdk-test:key0 00:26:57.356 [2024-11-28 11:57:27.241070] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:26:57.356 nvme0n1 00:26:57.356 11:57:27 keyring_linux -- keyring/linux.sh@77 -- # check_keys 1 :spdk-test:key0 00:26:57.356 11:57:27 keyring_linux -- keyring/linux.sh@19 -- # local count=1 name=:spdk-test:key0 00:26:57.356 11:57:27 keyring_linux -- keyring/linux.sh@20 -- # local sn 00:26:57.356 11:57:27 keyring_linux -- keyring/linux.sh@22 -- # bperf_cmd keyring_get_keys 00:26:57.356 11:57:27 keyring_linux -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:26:57.356 11:57:27 keyring_linux -- keyring/linux.sh@22 -- # jq length 00:26:57.615 11:57:27 keyring_linux -- keyring/linux.sh@22 -- # (( 1 == count )) 00:26:57.615 11:57:27 keyring_linux -- keyring/linux.sh@23 -- # (( count == 0 )) 00:26:57.615 11:57:27 keyring_linux -- keyring/linux.sh@25 -- # get_key :spdk-test:key0 00:26:57.615 11:57:27 keyring_linux -- keyring/linux.sh@25 -- # jq -r .sn 00:26:57.615 11:57:27 keyring_linux -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:26:57.616 11:57:27 keyring_linux -- keyring/common.sh@10 -- # jq '.[] | select(.name == ":spdk-test:key0")' 00:26:57.616 11:57:27 keyring_linux -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:26:57.875 11:57:27 keyring_linux -- keyring/linux.sh@25 -- # sn=357381777 00:26:57.875 11:57:27 keyring_linux -- keyring/linux.sh@26 -- # get_keysn :spdk-test:key0 00:26:57.875 11:57:27 keyring_linux -- keyring/linux.sh@16 -- # keyctl search @s user :spdk-test:key0 00:26:57.875 11:57:27 keyring_linux -- keyring/linux.sh@26 -- # [[ 357381777 == \3\5\7\3\8\1\7\7\7 ]] 00:26:57.875 11:57:27 keyring_linux -- keyring/linux.sh@27 -- # keyctl print 357381777 00:26:57.875 11:57:27 keyring_linux -- keyring/linux.sh@27 -- # [[ NVMeTLSkey-1:00:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ: 
== \N\V\M\e\T\L\S\k\e\y\-\1\:\0\0\:\M\D\A\x\M\T\I\y\M\z\M\0\N\D\U\1\N\j\Y\3\N\z\g\4\O\T\l\h\Y\W\J\i\Y\2\N\k\Z\G\V\l\Z\m\Z\w\J\E\i\Q\: ]] 00:26:57.875 11:57:27 keyring_linux -- keyring/linux.sh@79 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:26:58.134 Running I/O for 1 seconds... 00:26:59.072 14209.00 IOPS, 55.50 MiB/s 00:26:59.072 Latency(us) 00:26:59.072 [2024-11-28T11:57:29.198Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:26:59.072 Job: nvme0n1 (Core Mask 0x2, workload: randread, depth: 128, IO size: 4096) 00:26:59.072 nvme0n1 : 1.01 14216.44 55.53 0.00 0.00 8960.23 5898.24 15073.28 00:26:59.072 [2024-11-28T11:57:29.198Z] =================================================================================================================== 00:26:59.072 [2024-11-28T11:57:29.198Z] Total : 14216.44 55.53 0.00 0.00 8960.23 5898.24 15073.28 00:26:59.072 { 00:26:59.072 "results": [ 00:26:59.072 { 00:26:59.072 "job": "nvme0n1", 00:26:59.072 "core_mask": "0x2", 00:26:59.072 "workload": "randread", 00:26:59.073 "status": "finished", 00:26:59.073 "queue_depth": 128, 00:26:59.073 "io_size": 4096, 00:26:59.073 "runtime": 1.008551, 00:26:59.073 "iops": 14216.435262074005, 00:26:59.073 "mibps": 55.53295024247658, 00:26:59.073 "io_failed": 0, 00:26:59.073 "io_timeout": 0, 00:26:59.073 "avg_latency_us": 8960.22633206102, 00:26:59.073 "min_latency_us": 5898.24, 00:26:59.073 "max_latency_us": 15073.28 00:26:59.073 } 00:26:59.073 ], 00:26:59.073 "core_count": 1 00:26:59.073 } 00:26:59.073 11:57:29 keyring_linux -- keyring/linux.sh@80 -- # bperf_cmd bdev_nvme_detach_controller nvme0 00:26:59.073 11:57:29 keyring_linux -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_detach_controller nvme0 00:26:59.332 11:57:29 keyring_linux -- keyring/linux.sh@81 -- # check_keys 0 00:26:59.332 11:57:29 keyring_linux -- keyring/linux.sh@19 -- # local count=0 name= 00:26:59.332 11:57:29 keyring_linux -- keyring/linux.sh@20 -- # local sn 00:26:59.332 11:57:29 keyring_linux -- keyring/linux.sh@22 -- # bperf_cmd keyring_get_keys 00:26:59.332 11:57:29 keyring_linux -- keyring/linux.sh@22 -- # jq length 00:26:59.332 11:57:29 keyring_linux -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:26:59.591 11:57:29 keyring_linux -- keyring/linux.sh@22 -- # (( 0 == count )) 00:26:59.591 11:57:29 keyring_linux -- keyring/linux.sh@23 -- # (( count == 0 )) 00:26:59.591 11:57:29 keyring_linux -- keyring/linux.sh@23 -- # return 00:26:59.591 11:57:29 keyring_linux -- keyring/linux.sh@84 -- # NOT bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk :spdk-test:key1 00:26:59.591 11:57:29 keyring_linux -- common/autotest_common.sh@652 -- # local es=0 00:26:59.591 11:57:29 keyring_linux -- common/autotest_common.sh@654 -- # valid_exec_arg bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk :spdk-test:key1 00:26:59.591 11:57:29 keyring_linux -- common/autotest_common.sh@640 -- # local arg=bperf_cmd 00:26:59.591 11:57:29 keyring_linux -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:26:59.591 11:57:29 keyring_linux -- common/autotest_common.sh@644 -- # type -t bperf_cmd 00:26:59.591 11:57:29 keyring_linux -- common/autotest_common.sh@644 
-- # case "$(type -t "$arg")" in 00:26:59.591 11:57:29 keyring_linux -- common/autotest_common.sh@655 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk :spdk-test:key1 00:26:59.591 11:57:29 keyring_linux -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk :spdk-test:key1 00:26:59.851 [2024-11-28 11:57:29.841855] /home/vagrant/spdk_repo/spdk/include/spdk_internal/nvme_tcp.h: 421:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 107: Transport endpoint is not connected 00:26:59.851 [2024-11-28 11:57:29.842603] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2355a00 (107): Transport endpoint is not connected 00:26:59.851 [2024-11-28 11:57:29.843541] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2355a00 (9): Bad file descriptor 00:26:59.851 [2024-11-28 11:57:29.844537] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 0] Ctrlr is in error state 00:26:59.851 [2024-11-28 11:57:29.844646] nvme.c: 709:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 127.0.0.1 00:26:59.851 [2024-11-28 11:57:29.844709] nvme.c: 895:nvme_dummy_attach_fail_cb: *ERROR*: Failed to attach nvme ctrlr: trtype=TCP adrfam=IPv4 traddr=127.0.0.1 trsvcid=4420 subnqn=nqn.2016-06.io.spdk:cnode0, Operation not permitted 00:26:59.851 [2024-11-28 11:57:29.844772] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 0] in failed state. 00:26:59.851 request: 00:26:59.851 { 00:26:59.851 "name": "nvme0", 00:26:59.851 "trtype": "tcp", 00:26:59.851 "traddr": "127.0.0.1", 00:26:59.851 "adrfam": "ipv4", 00:26:59.851 "trsvcid": "4420", 00:26:59.851 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:26:59.851 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:26:59.851 "prchk_reftag": false, 00:26:59.851 "prchk_guard": false, 00:26:59.851 "hdgst": false, 00:26:59.851 "ddgst": false, 00:26:59.851 "psk": ":spdk-test:key1", 00:26:59.851 "allow_unrecognized_csi": false, 00:26:59.851 "method": "bdev_nvme_attach_controller", 00:26:59.851 "req_id": 1 00:26:59.851 } 00:26:59.851 Got JSON-RPC error response 00:26:59.851 response: 00:26:59.851 { 00:26:59.851 "code": -5, 00:26:59.851 "message": "Input/output error" 00:26:59.851 } 00:26:59.851 11:57:29 keyring_linux -- common/autotest_common.sh@655 -- # es=1 00:26:59.851 11:57:29 keyring_linux -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:26:59.851 11:57:29 keyring_linux -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:26:59.851 11:57:29 keyring_linux -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:26:59.851 11:57:29 keyring_linux -- keyring/linux.sh@1 -- # cleanup 00:26:59.851 11:57:29 keyring_linux -- keyring/linux.sh@38 -- # for key in key0 key1 00:26:59.851 11:57:29 keyring_linux -- keyring/linux.sh@39 -- # unlink_key key0 00:26:59.851 11:57:29 keyring_linux -- keyring/linux.sh@31 -- # local name=key0 sn 00:26:59.851 11:57:29 keyring_linux -- keyring/linux.sh@33 -- # get_keysn :spdk-test:key0 00:26:59.851 11:57:29 keyring_linux -- keyring/linux.sh@16 -- # keyctl search @s user :spdk-test:key0 00:26:59.851 11:57:29 keyring_linux -- keyring/linux.sh@33 -- # sn=357381777 00:26:59.851 11:57:29 keyring_linux -- keyring/linux.sh@34 -- # keyctl unlink 357381777 00:26:59.851 1 
links removed 00:26:59.851 11:57:29 keyring_linux -- keyring/linux.sh@38 -- # for key in key0 key1 00:26:59.851 11:57:29 keyring_linux -- keyring/linux.sh@39 -- # unlink_key key1 00:26:59.851 11:57:29 keyring_linux -- keyring/linux.sh@31 -- # local name=key1 sn 00:26:59.851 11:57:29 keyring_linux -- keyring/linux.sh@33 -- # get_keysn :spdk-test:key1 00:26:59.851 11:57:29 keyring_linux -- keyring/linux.sh@16 -- # keyctl search @s user :spdk-test:key1 00:26:59.851 11:57:29 keyring_linux -- keyring/linux.sh@33 -- # sn=439060034 00:26:59.851 11:57:29 keyring_linux -- keyring/linux.sh@34 -- # keyctl unlink 439060034 00:26:59.851 1 links removed 00:26:59.851 11:57:29 keyring_linux -- keyring/linux.sh@41 -- # killprocess 102618 00:26:59.851 11:57:29 keyring_linux -- common/autotest_common.sh@954 -- # '[' -z 102618 ']' 00:26:59.851 11:57:29 keyring_linux -- common/autotest_common.sh@958 -- # kill -0 102618 00:26:59.851 11:57:29 keyring_linux -- common/autotest_common.sh@959 -- # uname 00:26:59.851 11:57:29 keyring_linux -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:26:59.851 11:57:29 keyring_linux -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 102618 00:26:59.851 killing process with pid 102618 00:26:59.851 Received shutdown signal, test time was about 1.000000 seconds 00:26:59.851 00:26:59.851 Latency(us) 00:26:59.851 [2024-11-28T11:57:29.977Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:26:59.851 [2024-11-28T11:57:29.977Z] =================================================================================================================== 00:26:59.851 [2024-11-28T11:57:29.977Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:26:59.851 11:57:29 keyring_linux -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:26:59.851 11:57:29 keyring_linux -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:26:59.851 11:57:29 keyring_linux -- common/autotest_common.sh@972 -- # echo 'killing process with pid 102618' 00:26:59.851 11:57:29 keyring_linux -- common/autotest_common.sh@973 -- # kill 102618 00:26:59.851 11:57:29 keyring_linux -- common/autotest_common.sh@978 -- # wait 102618 00:27:00.110 11:57:30 keyring_linux -- keyring/linux.sh@42 -- # killprocess 102613 00:27:00.110 11:57:30 keyring_linux -- common/autotest_common.sh@954 -- # '[' -z 102613 ']' 00:27:00.110 11:57:30 keyring_linux -- common/autotest_common.sh@958 -- # kill -0 102613 00:27:00.110 11:57:30 keyring_linux -- common/autotest_common.sh@959 -- # uname 00:27:00.110 11:57:30 keyring_linux -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:27:00.110 11:57:30 keyring_linux -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 102613 00:27:00.110 killing process with pid 102613 00:27:00.110 11:57:30 keyring_linux -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:27:00.110 11:57:30 keyring_linux -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:27:00.110 11:57:30 keyring_linux -- common/autotest_common.sh@972 -- # echo 'killing process with pid 102613' 00:27:00.110 11:57:30 keyring_linux -- common/autotest_common.sh@973 -- # kill 102613 00:27:00.110 11:57:30 keyring_linux -- common/autotest_common.sh@978 -- # wait 102613 00:27:00.680 00:27:00.680 real 0m5.429s 00:27:00.680 user 0m10.294s 00:27:00.680 sys 0m1.642s 00:27:00.680 11:57:30 keyring_linux -- common/autotest_common.sh@1130 -- # xtrace_disable 00:27:00.680 11:57:30 keyring_linux -- common/autotest_common.sh@10 -- # set +x 00:27:00.680 
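The cleanup traced above resolves each ':spdk-test:' name back to its kernel key serial and unlinks it from the session keyring. A minimal sketch of the keyctl round trip, using the sample key value from this run:

# load the PSK into the session keyring, look up its serial, then unlink it
keyctl add user ":spdk-test:key0" "NVMeTLSkey-1:00:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ:" @s
sn=$(keyctl search @s user ":spdk-test:key0")
keyctl print "$sn"
keyctl unlink "$sn"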
************************************ 00:27:00.680 END TEST keyring_linux 00:27:00.680 ************************************ 00:27:00.680 11:57:30 -- spdk/autotest.sh@311 -- # '[' 0 -eq 1 ']' 00:27:00.680 11:57:30 -- spdk/autotest.sh@315 -- # '[' 0 -eq 1 ']' 00:27:00.680 11:57:30 -- spdk/autotest.sh@319 -- # '[' 0 -eq 1 ']' 00:27:00.680 11:57:30 -- spdk/autotest.sh@324 -- # '[' 0 -eq 1 ']' 00:27:00.680 11:57:30 -- spdk/autotest.sh@333 -- # '[' 0 -eq 1 ']' 00:27:00.680 11:57:30 -- spdk/autotest.sh@338 -- # '[' 0 -eq 1 ']' 00:27:00.680 11:57:30 -- spdk/autotest.sh@342 -- # '[' 0 -eq 1 ']' 00:27:00.680 11:57:30 -- spdk/autotest.sh@346 -- # '[' 0 -eq 1 ']' 00:27:00.680 11:57:30 -- spdk/autotest.sh@350 -- # '[' 0 -eq 1 ']' 00:27:00.680 11:57:30 -- spdk/autotest.sh@355 -- # '[' 0 -eq 1 ']' 00:27:00.680 11:57:30 -- spdk/autotest.sh@359 -- # '[' 0 -eq 1 ']' 00:27:00.680 11:57:30 -- spdk/autotest.sh@366 -- # [[ 0 -eq 1 ]] 00:27:00.680 11:57:30 -- spdk/autotest.sh@370 -- # [[ 0 -eq 1 ]] 00:27:00.680 11:57:30 -- spdk/autotest.sh@374 -- # [[ 0 -eq 1 ]] 00:27:00.680 11:57:30 -- spdk/autotest.sh@378 -- # [[ '' -eq 1 ]] 00:27:00.680 11:57:30 -- spdk/autotest.sh@385 -- # trap - SIGINT SIGTERM EXIT 00:27:00.680 11:57:30 -- spdk/autotest.sh@387 -- # timing_enter post_cleanup 00:27:00.680 11:57:30 -- common/autotest_common.sh@726 -- # xtrace_disable 00:27:00.680 11:57:30 -- common/autotest_common.sh@10 -- # set +x 00:27:00.680 11:57:30 -- spdk/autotest.sh@388 -- # autotest_cleanup 00:27:00.680 11:57:30 -- common/autotest_common.sh@1396 -- # local autotest_es=0 00:27:00.680 11:57:30 -- common/autotest_common.sh@1397 -- # xtrace_disable 00:27:00.680 11:57:30 -- common/autotest_common.sh@10 -- # set +x 00:27:02.586 INFO: APP EXITING 00:27:02.586 INFO: killing all VMs 00:27:02.586 INFO: killing vhost app 00:27:02.586 INFO: EXIT DONE 00:27:03.522 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:27:03.522 0000:00:11.0 (1b36 0010): Already using the nvme driver 00:27:03.522 0000:00:10.0 (1b36 0010): Already using the nvme driver 00:27:04.090 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:27:04.090 Cleaning 00:27:04.090 Removing: /var/run/dpdk/spdk0/config 00:27:04.090 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-0 00:27:04.090 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-1 00:27:04.090 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-2 00:27:04.090 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-3 00:27:04.090 Removing: /var/run/dpdk/spdk0/fbarray_memzone 00:27:04.349 Removing: /var/run/dpdk/spdk0/hugepage_info 00:27:04.349 Removing: /var/run/dpdk/spdk1/config 00:27:04.349 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-0-0 00:27:04.349 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-0-1 00:27:04.349 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-0-2 00:27:04.349 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-0-3 00:27:04.349 Removing: /var/run/dpdk/spdk1/fbarray_memzone 00:27:04.349 Removing: /var/run/dpdk/spdk1/hugepage_info 00:27:04.349 Removing: /var/run/dpdk/spdk2/config 00:27:04.349 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-0-0 00:27:04.349 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-0-1 00:27:04.349 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-0-2 00:27:04.349 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-0-3 00:27:04.349 Removing: /var/run/dpdk/spdk2/fbarray_memzone 00:27:04.349 Removing: 
/var/run/dpdk/spdk2/hugepage_info 00:27:04.349 Removing: /var/run/dpdk/spdk3/config 00:27:04.349 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-0-0 00:27:04.349 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-0-1 00:27:04.349 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-0-2 00:27:04.349 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-0-3 00:27:04.349 Removing: /var/run/dpdk/spdk3/fbarray_memzone 00:27:04.349 Removing: /var/run/dpdk/spdk3/hugepage_info 00:27:04.349 Removing: /var/run/dpdk/spdk4/config 00:27:04.349 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-0-0 00:27:04.349 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-0-1 00:27:04.349 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-0-2 00:27:04.349 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-0-3 00:27:04.349 Removing: /var/run/dpdk/spdk4/fbarray_memzone 00:27:04.349 Removing: /var/run/dpdk/spdk4/hugepage_info 00:27:04.349 Removing: /dev/shm/nvmf_trace.0 00:27:04.349 Removing: /dev/shm/spdk_tgt_trace.pid70846 00:27:04.349 Removing: /var/run/dpdk/spdk0 00:27:04.349 Removing: /var/run/dpdk/spdk1 00:27:04.349 Removing: /var/run/dpdk/spdk2 00:27:04.349 Removing: /var/run/dpdk/spdk3 00:27:04.349 Removing: /var/run/dpdk/spdk4 00:27:04.349 Removing: /var/run/dpdk/spdk_pid100135 00:27:04.349 Removing: /var/run/dpdk/spdk_pid100295 00:27:04.349 Removing: /var/run/dpdk/spdk_pid100392 00:27:04.349 Removing: /var/run/dpdk/spdk_pid100563 00:27:04.349 Removing: /var/run/dpdk/spdk_pid100673 00:27:04.349 Removing: /var/run/dpdk/spdk_pid101376 00:27:04.350 Removing: /var/run/dpdk/spdk_pid101407 00:27:04.350 Removing: /var/run/dpdk/spdk_pid101442 00:27:04.350 Removing: /var/run/dpdk/spdk_pid101695 00:27:04.350 Removing: /var/run/dpdk/spdk_pid101726 00:27:04.350 Removing: /var/run/dpdk/spdk_pid101761 00:27:04.350 Removing: /var/run/dpdk/spdk_pid102230 00:27:04.350 Removing: /var/run/dpdk/spdk_pid102241 00:27:04.350 Removing: /var/run/dpdk/spdk_pid102486 00:27:04.350 Removing: /var/run/dpdk/spdk_pid102613 00:27:04.350 Removing: /var/run/dpdk/spdk_pid102618 00:27:04.350 Removing: /var/run/dpdk/spdk_pid70693 00:27:04.350 Removing: /var/run/dpdk/spdk_pid70846 00:27:04.350 Removing: /var/run/dpdk/spdk_pid71045 00:27:04.350 Removing: /var/run/dpdk/spdk_pid71131 00:27:04.350 Removing: /var/run/dpdk/spdk_pid71151 00:27:04.350 Removing: /var/run/dpdk/spdk_pid71261 00:27:04.350 Removing: /var/run/dpdk/spdk_pid71271 00:27:04.350 Removing: /var/run/dpdk/spdk_pid71411 00:27:04.350 Removing: /var/run/dpdk/spdk_pid71606 00:27:04.350 Removing: /var/run/dpdk/spdk_pid71760 00:27:04.350 Removing: /var/run/dpdk/spdk_pid71838 00:27:04.350 Removing: /var/run/dpdk/spdk_pid71922 00:27:04.350 Removing: /var/run/dpdk/spdk_pid72014 00:27:04.350 Removing: /var/run/dpdk/spdk_pid72086 00:27:04.350 Removing: /var/run/dpdk/spdk_pid72124 00:27:04.350 Removing: /var/run/dpdk/spdk_pid72160 00:27:04.350 Removing: /var/run/dpdk/spdk_pid72224 00:27:04.350 Removing: /var/run/dpdk/spdk_pid72307 00:27:04.350 Removing: /var/run/dpdk/spdk_pid72756 00:27:04.350 Removing: /var/run/dpdk/spdk_pid72808 00:27:04.350 Removing: /var/run/dpdk/spdk_pid72859 00:27:04.350 Removing: /var/run/dpdk/spdk_pid72875 00:27:04.350 Removing: /var/run/dpdk/spdk_pid72942 00:27:04.350 Removing: /var/run/dpdk/spdk_pid72958 00:27:04.609 Removing: /var/run/dpdk/spdk_pid73025 00:27:04.609 Removing: /var/run/dpdk/spdk_pid73034 00:27:04.609 Removing: /var/run/dpdk/spdk_pid73079 00:27:04.609 Removing: /var/run/dpdk/spdk_pid73090 00:27:04.609 Removing: /var/run/dpdk/spdk_pid73135 00:27:04.609 
Removing: /var/run/dpdk/spdk_pid73153 00:27:04.609 Removing: /var/run/dpdk/spdk_pid73289 00:27:04.609 Removing: /var/run/dpdk/spdk_pid73324 00:27:04.609 Removing: /var/run/dpdk/spdk_pid73402 00:27:04.609 Removing: /var/run/dpdk/spdk_pid73734 00:27:04.609 Removing: /var/run/dpdk/spdk_pid73746 00:27:04.609 Removing: /var/run/dpdk/spdk_pid73782 00:27:04.609 Removing: /var/run/dpdk/spdk_pid73796 00:27:04.609 Removing: /var/run/dpdk/spdk_pid73817 00:27:04.609 Removing: /var/run/dpdk/spdk_pid73836 00:27:04.609 Removing: /var/run/dpdk/spdk_pid73849 00:27:04.609 Removing: /var/run/dpdk/spdk_pid73865 00:27:04.609 Removing: /var/run/dpdk/spdk_pid73884 00:27:04.609 Removing: /var/run/dpdk/spdk_pid73903 00:27:04.609 Removing: /var/run/dpdk/spdk_pid73917 00:27:04.609 Removing: /var/run/dpdk/spdk_pid73938 00:27:04.609 Removing: /var/run/dpdk/spdk_pid73951 00:27:04.609 Removing: /var/run/dpdk/spdk_pid73972 00:27:04.609 Removing: /var/run/dpdk/spdk_pid73991 00:27:04.609 Removing: /var/run/dpdk/spdk_pid73999 00:27:04.609 Removing: /var/run/dpdk/spdk_pid74020 00:27:04.609 Removing: /var/run/dpdk/spdk_pid74039 00:27:04.609 Removing: /var/run/dpdk/spdk_pid74058 00:27:04.609 Removing: /var/run/dpdk/spdk_pid74068 00:27:04.609 Removing: /var/run/dpdk/spdk_pid74104 00:27:04.609 Removing: /var/run/dpdk/spdk_pid74123 00:27:04.609 Removing: /var/run/dpdk/spdk_pid74147 00:27:04.609 Removing: /var/run/dpdk/spdk_pid74219 00:27:04.609 Removing: /var/run/dpdk/spdk_pid74253 00:27:04.609 Removing: /var/run/dpdk/spdk_pid74257 00:27:04.609 Removing: /var/run/dpdk/spdk_pid74291 00:27:04.609 Removing: /var/run/dpdk/spdk_pid74295 00:27:04.609 Removing: /var/run/dpdk/spdk_pid74308 00:27:04.609 Removing: /var/run/dpdk/spdk_pid74345 00:27:04.609 Removing: /var/run/dpdk/spdk_pid74364 00:27:04.609 Removing: /var/run/dpdk/spdk_pid74393 00:27:04.609 Removing: /var/run/dpdk/spdk_pid74402 00:27:04.609 Removing: /var/run/dpdk/spdk_pid74412 00:27:04.609 Removing: /var/run/dpdk/spdk_pid74421 00:27:04.609 Removing: /var/run/dpdk/spdk_pid74431 00:27:04.609 Removing: /var/run/dpdk/spdk_pid74440 00:27:04.609 Removing: /var/run/dpdk/spdk_pid74450 00:27:04.609 Removing: /var/run/dpdk/spdk_pid74459 00:27:04.609 Removing: /var/run/dpdk/spdk_pid74492 00:27:04.609 Removing: /var/run/dpdk/spdk_pid74514 00:27:04.609 Removing: /var/run/dpdk/spdk_pid74529 00:27:04.609 Removing: /var/run/dpdk/spdk_pid74552 00:27:04.609 Removing: /var/run/dpdk/spdk_pid74567 00:27:04.609 Removing: /var/run/dpdk/spdk_pid74575 00:27:04.609 Removing: /var/run/dpdk/spdk_pid74615 00:27:04.609 Removing: /var/run/dpdk/spdk_pid74627 00:27:04.609 Removing: /var/run/dpdk/spdk_pid74653 00:27:04.609 Removing: /var/run/dpdk/spdk_pid74661 00:27:04.609 Removing: /var/run/dpdk/spdk_pid74668 00:27:04.609 Removing: /var/run/dpdk/spdk_pid74676 00:27:04.609 Removing: /var/run/dpdk/spdk_pid74683 00:27:04.609 Removing: /var/run/dpdk/spdk_pid74696 00:27:04.609 Removing: /var/run/dpdk/spdk_pid74704 00:27:04.609 Removing: /var/run/dpdk/spdk_pid74711 00:27:04.609 Removing: /var/run/dpdk/spdk_pid74788 00:27:04.609 Removing: /var/run/dpdk/spdk_pid74841 00:27:04.609 Removing: /var/run/dpdk/spdk_pid74959 00:27:04.609 Removing: /var/run/dpdk/spdk_pid74994 00:27:04.609 Removing: /var/run/dpdk/spdk_pid75032 00:27:04.609 Removing: /var/run/dpdk/spdk_pid75052 00:27:04.609 Removing: /var/run/dpdk/spdk_pid75074 00:27:04.609 Removing: /var/run/dpdk/spdk_pid75094 00:27:04.609 Removing: /var/run/dpdk/spdk_pid75120 00:27:04.609 Removing: /var/run/dpdk/spdk_pid75141 00:27:04.609 Removing: 
/var/run/dpdk/spdk_pid75219 00:27:04.609 Removing: /var/run/dpdk/spdk_pid75235 00:27:04.869 Removing: /var/run/dpdk/spdk_pid75290 00:27:04.869 Removing: /var/run/dpdk/spdk_pid75359 00:27:04.869 Removing: /var/run/dpdk/spdk_pid75421 00:27:04.869 Removing: /var/run/dpdk/spdk_pid75444 00:27:04.869 Removing: /var/run/dpdk/spdk_pid75549 00:27:04.869 Removing: /var/run/dpdk/spdk_pid75593 00:27:04.869 Removing: /var/run/dpdk/spdk_pid75625 00:27:04.869 Removing: /var/run/dpdk/spdk_pid75857 00:27:04.869 Removing: /var/run/dpdk/spdk_pid75949 00:27:04.869 Removing: /var/run/dpdk/spdk_pid75978 00:27:04.869 Removing: /var/run/dpdk/spdk_pid76006 00:27:04.869 Removing: /var/run/dpdk/spdk_pid76041 00:27:04.869 Removing: /var/run/dpdk/spdk_pid76074 00:27:04.869 Removing: /var/run/dpdk/spdk_pid76108 00:27:04.869 Removing: /var/run/dpdk/spdk_pid76145 00:27:04.869 Removing: /var/run/dpdk/spdk_pid76546 00:27:04.869 Removing: /var/run/dpdk/spdk_pid76586 00:27:04.869 Removing: /var/run/dpdk/spdk_pid76931 00:27:04.869 Removing: /var/run/dpdk/spdk_pid77399 00:27:04.869 Removing: /var/run/dpdk/spdk_pid77677 00:27:04.869 Removing: /var/run/dpdk/spdk_pid78540 00:27:04.869 Removing: /var/run/dpdk/spdk_pid79449 00:27:04.869 Removing: /var/run/dpdk/spdk_pid79572 00:27:04.869 Removing: /var/run/dpdk/spdk_pid79639 00:27:04.869 Removing: /var/run/dpdk/spdk_pid81057 00:27:04.869 Removing: /var/run/dpdk/spdk_pid81378 00:27:04.869 Removing: /var/run/dpdk/spdk_pid85132 00:27:04.869 Removing: /var/run/dpdk/spdk_pid85498 00:27:04.869 Removing: /var/run/dpdk/spdk_pid85609 00:27:04.869 Removing: /var/run/dpdk/spdk_pid85736 00:27:04.869 Removing: /var/run/dpdk/spdk_pid85757 00:27:04.869 Removing: /var/run/dpdk/spdk_pid85788 00:27:04.869 Removing: /var/run/dpdk/spdk_pid85816 00:27:04.869 Removing: /var/run/dpdk/spdk_pid85913 00:27:04.869 Removing: /var/run/dpdk/spdk_pid86039 00:27:04.869 Removing: /var/run/dpdk/spdk_pid86207 00:27:04.869 Removing: /var/run/dpdk/spdk_pid86292 00:27:04.869 Removing: /var/run/dpdk/spdk_pid86486 00:27:04.869 Removing: /var/run/dpdk/spdk_pid86562 00:27:04.869 Removing: /var/run/dpdk/spdk_pid86647 00:27:04.869 Removing: /var/run/dpdk/spdk_pid87011 00:27:04.869 Removing: /var/run/dpdk/spdk_pid87433 00:27:04.869 Removing: /var/run/dpdk/spdk_pid87434 00:27:04.869 Removing: /var/run/dpdk/spdk_pid87435 00:27:04.869 Removing: /var/run/dpdk/spdk_pid87698 00:27:04.869 Removing: /var/run/dpdk/spdk_pid87945 00:27:04.869 Removing: /var/run/dpdk/spdk_pid87947 00:27:04.869 Removing: /var/run/dpdk/spdk_pid90252 00:27:04.869 Removing: /var/run/dpdk/spdk_pid90643 00:27:04.869 Removing: /var/run/dpdk/spdk_pid90645 00:27:04.869 Removing: /var/run/dpdk/spdk_pid90967 00:27:04.869 Removing: /var/run/dpdk/spdk_pid90981 00:27:04.869 Removing: /var/run/dpdk/spdk_pid90995 00:27:04.869 Removing: /var/run/dpdk/spdk_pid91026 00:27:04.869 Removing: /var/run/dpdk/spdk_pid91036 00:27:04.869 Removing: /var/run/dpdk/spdk_pid91116 00:27:04.869 Removing: /var/run/dpdk/spdk_pid91129 00:27:04.869 Removing: /var/run/dpdk/spdk_pid91232 00:27:04.869 Removing: /var/run/dpdk/spdk_pid91238 00:27:04.869 Removing: /var/run/dpdk/spdk_pid91342 00:27:04.869 Removing: /var/run/dpdk/spdk_pid91350 00:27:04.869 Removing: /var/run/dpdk/spdk_pid91786 00:27:04.869 Removing: /var/run/dpdk/spdk_pid91829 00:27:04.869 Removing: /var/run/dpdk/spdk_pid91942 00:27:04.869 Removing: /var/run/dpdk/spdk_pid92023 00:27:04.869 Removing: /var/run/dpdk/spdk_pid92371 00:27:04.869 Removing: /var/run/dpdk/spdk_pid92562 00:27:04.869 Removing: /var/run/dpdk/spdk_pid92992 
00:27:04.869 Removing: /var/run/dpdk/spdk_pid93528 00:27:04.869 Removing: /var/run/dpdk/spdk_pid94372 00:27:04.869 Removing: /var/run/dpdk/spdk_pid95001 00:27:04.869 Removing: /var/run/dpdk/spdk_pid95009 00:27:04.869 Removing: /var/run/dpdk/spdk_pid97014 00:27:04.869 Removing: /var/run/dpdk/spdk_pid97068 00:27:04.869 Removing: /var/run/dpdk/spdk_pid97115 00:27:04.869 Removing: /var/run/dpdk/spdk_pid97163 00:27:04.869 Removing: /var/run/dpdk/spdk_pid97271 00:27:04.869 Removing: /var/run/dpdk/spdk_pid97324 00:27:05.135 Removing: /var/run/dpdk/spdk_pid97371 00:27:05.135 Removing: /var/run/dpdk/spdk_pid97424 00:27:05.135 Removing: /var/run/dpdk/spdk_pid97782 00:27:05.135 Removing: /var/run/dpdk/spdk_pid99003 00:27:05.135 Removing: /var/run/dpdk/spdk_pid99147 00:27:05.135 Removing: /var/run/dpdk/spdk_pid99377 00:27:05.135 Removing: /var/run/dpdk/spdk_pid99973 00:27:05.135 Clean 00:27:05.135 11:57:35 -- common/autotest_common.sh@1453 -- # return 0 00:27:05.135 11:57:35 -- spdk/autotest.sh@389 -- # timing_exit post_cleanup 00:27:05.135 11:57:35 -- common/autotest_common.sh@732 -- # xtrace_disable 00:27:05.135 11:57:35 -- common/autotest_common.sh@10 -- # set +x 00:27:05.135 11:57:35 -- spdk/autotest.sh@391 -- # timing_exit autotest 00:27:05.135 11:57:35 -- common/autotest_common.sh@732 -- # xtrace_disable 00:27:05.135 11:57:35 -- common/autotest_common.sh@10 -- # set +x 00:27:05.135 11:57:35 -- spdk/autotest.sh@392 -- # chmod a+r /home/vagrant/spdk_repo/spdk/../output/timing.txt 00:27:05.135 11:57:35 -- spdk/autotest.sh@394 -- # [[ -f /home/vagrant/spdk_repo/spdk/../output/udev.log ]] 00:27:05.135 11:57:35 -- spdk/autotest.sh@394 -- # rm -f /home/vagrant/spdk_repo/spdk/../output/udev.log 00:27:05.135 11:57:35 -- spdk/autotest.sh@396 -- # [[ y == y ]] 00:27:05.135 11:57:35 -- spdk/autotest.sh@398 -- # hostname 00:27:05.135 11:57:35 -- spdk/autotest.sh@398 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -c --no-external -d /home/vagrant/spdk_repo/spdk -t fedora39-cloud-1721788873-2326 -o /home/vagrant/spdk_repo/spdk/../output/cov_test.info 00:27:05.394 geninfo: WARNING: invalid characters removed from testname! 
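The "Cleaning" block above is the autotest_cleanup step removing the per-process DPDK runtime state left behind by the test targets (/var/run/dpdk/spdk0 through spdk4) and the trace files recorded in /dev/shm. A minimal sketch of that kind of cleanup, assuming only the paths visible in the log; the loop and globs below are illustrative and are not the actual autotest_common.sh code:

# Remove DPDK runtime directories left by spdk_tgt and the secondary test processes
for d in /var/run/dpdk/spdk0 /var/run/dpdk/spdk1 /var/run/dpdk/spdk2 /var/run/dpdk/spdk3 /var/run/dpdk/spdk4; do
    rm -rf "$d"    # drops config, fbarray_memseg-2048k-*, fbarray_memzone, hugepage_info
done
# Drop the shared-memory trace files and per-pid runtime directories from the run
rm -f /dev/shm/nvmf_trace.0 /dev/shm/spdk_tgt_trace.pid*
rm -rf /var/run/dpdk/spdk_pid*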
00:27:27.331 11:57:56 -- spdk/autotest.sh@399 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -a /home/vagrant/spdk_repo/spdk/../output/cov_base.info -a /home/vagrant/spdk_repo/spdk/../output/cov_test.info -o /home/vagrant/spdk_repo/spdk/../output/cov_total.info 00:27:30.624 11:58:00 -- spdk/autotest.sh@400 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -r /home/vagrant/spdk_repo/spdk/../output/cov_total.info '*/dpdk/*' -o /home/vagrant/spdk_repo/spdk/../output/cov_total.info 00:27:33.157 11:58:02 -- spdk/autotest.sh@404 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -r /home/vagrant/spdk_repo/spdk/../output/cov_total.info --ignore-errors unused,unused '/usr/*' -o /home/vagrant/spdk_repo/spdk/../output/cov_total.info 00:27:35.691 11:58:05 -- spdk/autotest.sh@405 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -r /home/vagrant/spdk_repo/spdk/../output/cov_total.info '*/examples/vmd/*' -o /home/vagrant/spdk_repo/spdk/../output/cov_total.info 00:27:37.597 11:58:07 -- spdk/autotest.sh@406 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -r /home/vagrant/spdk_repo/spdk/../output/cov_total.info '*/app/spdk_lspci/*' -o /home/vagrant/spdk_repo/spdk/../output/cov_total.info 00:27:40.131 11:58:10 -- spdk/autotest.sh@407 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -r /home/vagrant/spdk_repo/spdk/../output/cov_total.info '*/app/spdk_top/*' -o /home/vagrant/spdk_repo/spdk/../output/cov_total.info 00:27:42.688 11:58:12 -- spdk/autotest.sh@408 -- # rm -f cov_base.info cov_test.info OLD_STDOUT OLD_STDERR 00:27:42.688 11:58:12 -- spdk/autorun.sh@1 -- $ timing_finish 00:27:42.688 11:58:12 -- common/autotest_common.sh@738 -- $ [[ -e /home/vagrant/spdk_repo/spdk/../output/timing.txt ]] 00:27:42.688 11:58:12 -- common/autotest_common.sh@740 -- $ flamegraph=/usr/local/FlameGraph/flamegraph.pl 00:27:42.688 11:58:12 -- common/autotest_common.sh@741 -- $ [[ -x /usr/local/FlameGraph/flamegraph.pl ]] 00:27:42.688 11:58:12 -- common/autotest_common.sh@744 -- $ /usr/local/FlameGraph/flamegraph.pl --title 'Build Timing' --nametype Step: --countname seconds /home/vagrant/spdk_repo/spdk/../output/timing.txt 00:27:42.688 + [[ -n 5939 ]] 00:27:42.688 + sudo kill 5939 00:27:42.736 [Pipeline] } 00:27:42.751 [Pipeline] // timeout 00:27:42.756 [Pipeline] } 00:27:42.769 [Pipeline] // stage 00:27:42.774 [Pipeline] } 00:27:42.799 [Pipeline] // catchError 00:27:42.807 [Pipeline] stage 00:27:42.809 [Pipeline] { (Stop VM) 00:27:42.821 [Pipeline] sh 00:27:43.101 + vagrant halt 00:27:46.389 ==> default: Halting domain... 
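The lcov invocations above merge the baseline capture with the post-test capture and then strip external and example sources from the combined report before it is published. Condensed into a short sketch for readability (the long list of --rc flags from the log is abbreviated into one variable, and OUT is an assumed shorthand for the spdk/../output path shown in the log; this paraphrases the logged commands rather than quoting autotest.sh):

LCOV="lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 -q"
OUT=/home/vagrant/spdk_repo/output

# Merge the pre-test baseline and the post-test capture into one tracefile
$LCOV -a "$OUT/cov_base.info" -a "$OUT/cov_test.info" -o "$OUT/cov_total.info"

# Remove coverage data for code that is not SPDK's own: DPDK, system headers,
# and example/tool sources that are not part of the tested library code
$LCOV -r "$OUT/cov_total.info" '*/dpdk/*' -o "$OUT/cov_total.info"
$LCOV -r "$OUT/cov_total.info" '/usr/*' --ignore-errors unused,unused -o "$OUT/cov_total.info"
$LCOV -r "$OUT/cov_total.info" '*/examples/vmd/*' -o "$OUT/cov_total.info"
$LCOV -r "$OUT/cov_total.info" '*/app/spdk_lspci/*' -o "$OUT/cov_total.info"
$LCOV -r "$OUT/cov_total.info" '*/app/spdk_top/*' -o "$OUT/cov_total.info"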
00:27:51.679 [Pipeline] sh 00:27:51.957 + vagrant destroy -f 00:27:55.243 ==> default: Removing domain... 00:27:55.255 [Pipeline] sh 00:27:55.535 + mv output /var/jenkins/workspace/nvmf-tcp-uring-vg-autotest/output 00:27:55.545 [Pipeline] } 00:27:55.561 [Pipeline] // stage 00:27:55.567 [Pipeline] } 00:27:55.582 [Pipeline] // dir 00:27:55.587 [Pipeline] } 00:27:55.602 [Pipeline] // wrap 00:27:55.609 [Pipeline] } 00:27:55.621 [Pipeline] // catchError 00:27:55.631 [Pipeline] stage 00:27:55.633 [Pipeline] { (Epilogue) 00:27:55.646 [Pipeline] sh 00:27:55.929 + jbp/jenkins/jjb-config/jobs/scripts/compress_artifacts.sh 00:28:01.217 [Pipeline] catchError 00:28:01.219 [Pipeline] { 00:28:01.234 [Pipeline] sh 00:28:01.517 + jbp/jenkins/jjb-config/jobs/scripts/check_artifacts_size.sh 00:28:01.517 Artifacts sizes are good 00:28:01.526 [Pipeline] } 00:28:01.542 [Pipeline] // catchError 00:28:01.555 [Pipeline] archiveArtifacts 00:28:01.564 Archiving artifacts 00:28:01.719 [Pipeline] cleanWs 00:28:01.733 [WS-CLEANUP] Deleting project workspace... 00:28:01.733 [WS-CLEANUP] Deferred wipeout is used... 00:28:01.751 [WS-CLEANUP] done 00:28:01.753 [Pipeline] } 00:28:01.769 [Pipeline] // stage 00:28:01.774 [Pipeline] } 00:28:01.788 [Pipeline] // node 00:28:01.794 [Pipeline] End of Pipeline 00:28:01.833 Finished: SUCCESS